

On a semi-related note, I suspect we’re gonna see a pushback against automation in general at some point, especially in places where “shitty automation” has taken hold.
In other news, Brian Merchant’s going full-time on Blood in the Machine.
Did notice a passage in the announcement that caught my eye:
Meanwhile, the Valley has doubled down on a grow-at-all-costs approach to AI, sinking hundreds of billions into a technology that will automate millions of jobs if it works, might kneecap the economy if it doesn’t, and will coat the internet in slop and misinformation either way.
I’m not sure if it’s just me, but it strikes me as telling of how much AI’s changed the cultural zeitgeist that Merchant can happily present automation as a bad thing without getting backlash (at least in this context).
With the wide-ranging theft that AI bros have perpetrated, AI corps’ abuse of the research exception, AI’s ability to directly compete with original work (Exhibit A) and the myriad other things autoplag has unleashed on us, I suspect we’re gonna see a significant weakening of fair use.
To give a concrete prediction: the research exception’s gonna be at high risk of being repealed. OpenAI et al. crossed the Rubicon when they abused it to launder the “research data” behind their autoplags - any future research case will have to contend with allegations of being a for-profit operation in disguise.
On a more cultural front, if BlueSky’s partnering with ROOST and the shitshow it kicked off is any indication, any use (if not mere mention) of AI is gonna lead to immediate accusations of theft. Additionally, to pull up an old comment of mine, FOSS licenses are likely gonna dive in popularity as people come to view open-sourcing anything as asking for AI bros to steal your code.
To my knowledge, it’s standard ML. I doubt that’ll help ROOST (the nonprofit in question), considering a lot of the AI stench has rubbed off on ML as well.
In other news, all hell’s broken loose at BlueSky: https://bsky.app/profile/culturecrave.co/post/3lhv35la2pk2h
The “AI company” in question is a nonprofit focused on open-source safety tools, which recently launched at the Paris AI Action Summit, but that was enough to make things go nuclear, especially given that people initially flocked to BSky to get away from AI.
Thinking I should make this into a full post.
To reference a previous sidenote, DeepSeek gives corps and randos alike a means to shove an LLM into their shit dirt-cheap, so I expect it’s gonna blow up in popularity.
People have worked out how to cram DeepSeek onto a Raspberry Pi
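For the curious, here’s roughly what that looks like in practice - a minimal sketch using the llama-cpp-python bindings. The model path, filename, and quant level below are placeholders for one of the small distilled GGUF builds, not a real release artifact, so treat the specifics as assumptions.

```python
# Minimal sketch: running a small distilled DeepSeek model locally via llama.cpp's
# Python bindings. Assumes you've already downloaded a quantized GGUF file;
# the path below is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/deepseek-r1-distill-1.5b-q4.gguf",  # hypothetical local file
    n_ctx=2048,    # small context window to keep RAM usage Pi-friendly
    n_threads=4,   # match the Pi's core count
)

# Plain completion call - the Llama object is callable and returns a dict
# with the generated text under ["choices"][0]["text"].
out = llm(
    "Q: Name one reason people run LLMs locally.\nA:",
    max_tokens=64,
    stop=["Q:"],
)
print(out["choices"][0]["text"].strip())
```

It’ll be slow as hell on a Pi, but “it runs at all” is kinda the whole point.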
Anyways, here’s a quasi-related sidenote:
Part of me suspects DeepSeek is gonna quickly carve out a good chunk of the market for itself - for SaaS outfits looking for spicy autocomplete or a slop generator to bolt onto their products, DeepSeek’s high efficiency gives them a way to do it that doesn’t immediately blow a massive hole in their finances.
I do feel like active anti-scraping measures could go somewhat further, though - the obvious route in my eyes would be to actively feed complete garbage to scrapers instead, whether by sticking junk on webpages to mislead them or by trying to prompt-inject the shit out of the AIs themselves.
Me, predicting how anti-scraping efforts would evolve
(I have nothing more to add, I just find this whole development pretty vindicating)
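(Okay, one addendum after all: for illustration’s sake, here’s a minimal sketch of the “feed them garbage” idea - a Flask app that checks the User-Agent against a stand-in list of scraper strings and serves nonsense instead of the real page. The bot list and the garbage generator are both placeholders, obviously; real tarpits go a lot further.)

```python
# Minimal sketch of the "feed scrapers garbage" idea: serve nonsense to requests
# whose User-Agent matches a (stand-in) list of scraper strings, and the real
# page to everyone else.
import random
from flask import Flask, request

app = Flask(__name__)

# Stand-in list - a real deployment would match against published scraper UAs.
SCRAPER_UA_SUBSTRINGS = ["GPTBot", "CCBot", "ClaudeBot", "Bytespider"]

WORDS = ["lorem", "ipsum", "slop", "garbage", "noise", "filler"]

def looks_like_scraper(user_agent: str) -> bool:
    """Crude User-Agent check - enough for a sketch, trivially evaded in practice."""
    return any(s.lower() in user_agent.lower() for s in SCRAPER_UA_SUBSTRINGS)

def generate_garbage(n_words: int = 300) -> str:
    """Churn out meaningless filler text for the scraper to ingest."""
    return " ".join(random.choice(WORDS) for _ in range(n_words))

@app.route("/")
def index():
    ua = request.headers.get("User-Agent", "")
    if looks_like_scraper(ua):
        # Scrapers get an endless diet of junk instead of the real content.
        return f"<html><body><p>{generate_garbage()}</p></body></html>"
    return "<html><body><p>The actual page, for actual people.</p></body></html>"

if __name__ == "__main__":
    app.run()
```

The real-world versions are fancier - dynamically generated link mazes, Markov-chain babble, the works - but the basic shape is the same.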
AHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA
AI is useless, shut the fuck up
On another somewhat orthogonal point, I suspect AI has soured the public on any kinda tech-forward vision for the foreseeable future.
Both directly and indirectly, the AI slop-nami has caused a lot of bad shit for the general public - plagiarism, misinformation, shit-tier AI art, human artists getting screwed over. The public has come to view AI as an active blight on society, and using it as slapping a virtual “Kick Me” sign on yourself.
Sidenote: AFAIK, even with this pardon, Ulbricht still ended up spending more time in prison than he would have if he’d taken the plea deal he was reportedly offered:
He was offered a plea deal, which would have likely given him a decade-long sentence, with the ability to get out early on good behavior. Worst-case scenario, he would have spent five years in a medium-security prison and been freed.
Gotta say, this whole situation’s reminding me of SBF - both of them thought they could outsmart the Feds, and as a result both received much harsher sentences than rich white-collar criminals usually get.
If we did start apportioning political power to whoever can execute a basic strategy while clicking as fast as possible, I think we’d all be bowing down to God-Emperor Flash or something.
Misread that and thought the joke was about bowing down to people who are good with Adobe Flash.
Admittedly, being good with Adobe Flash would be more impressive than being good at a video game - at the bare minimum, it implies you’ve got some art skills.
Also, a reminder that Musk was a junior programmer on two of the worst Sega CD games.
Which two? I already knew his accomplishments are pure garbage - I just wanna know which games he fouled with his programming “prowess”.
I suppose they could come up with a scheme where a portion of the $$ from BTC sales to the Treasury gets used to buy TrumpCoin or whatever. But that requires him to trust Bitcoin bros. Is that plausible?
Depends if he sees BTC bros as easy marks - and by my guess, he probably does.
New piece from Brian Merchant: ‘AI is in its empire era’
Recently finished it, here’s a personal sidenote:
This AI bubble’s done a pretty good job of destroying the “apolitical” image that tech’s done so much to build up (Silicon Valley jumping into bed with Trump definitely helped, too) - as a matter of fact, it’s provided plenty of material to build an image of tech as a Nazi bar writ large (once again, SV’s relationship with Trump did wonders here).
By the time this decade ends, I anticipate tech’s public image will be firmly in the toilet, viewed as an unmitigated blight on all our daily lives at best and as an unofficial arm of the Fourth Reich at worst.
As for AI itself, I expect its image will go into the shitter as well - assuming the bubble’s burst doesn’t destroy AI as a concept like I anticipate, it’ll probably be viewed as a tech with no ethical use, one built first and foremost to enable and perpetrate atrocities to its wielder’s heart’s content.