Artificial Intelligence Bookmarks
Bookmarks about the recent AI hype that got started with deep convoluted networks, got some interesting applications like plant identification and some questionable ones like style transfers using generative adversarial networks (GAN) and the like. It does seem a bit like the cryptocurrencies bubbles: vast promises of profit, everybody is doing it, all it requires is vast amounts of energy.
#Bookmarks #AI
@clarkesworld@mastodon.online lists user agents to add to robots.txt:
“AI” companies think that we should have to opt-out of data-scraping bots that take our work to train their products. There isn’t even a required no-scraping period between the announcement and when they start. Too late? Tough. Once they have your data, they don’t provide you with a way to have it deleted, even before they’ve processed it for training. – Block the Bots that Feed “AI” Models by Scraping Your Website
Block the Bots that Feed “AI” Models by Scraping Your Website
@lrhodes posting about Amazon and other marketplaces:
All of these marketplaces suddenly drowning in nonsensical machine generated product? They were vulnerable to that because their business model is taking a cut of whatever you manage to sell on their site, at margins that encourage a race to the bottom. … And if social media platforms are subject to the same sort of ML-generated content takeover, it’s because they were sustained by largely the same economic logic as the digital marketplaces, profiting by extracting value from content provided by legions of unpaid labor who just wanted an audience. The economic aspect of that rides really close to the surface on a platform like Reddit, with its volunteer mods and marketplace subs, but it’s equally true of Twitter, Facebook, TikTok, all of them.
Longtermism and other lunatics:
I hope that this post has made clear why those metaphors are inappropriate in this context. ‘AI Safety’ might be attracting a lot of money and capturing the attention of policymakers and billionaires alike, but it brings nothing of value. – Talking about a ‘schism’ is ahistorical, by Emily M. Bender
Talking about a ‘schism’ is ahistorical, by Emily M. Bender
Training on AI output.
After thinking about it for a couple days, I’ve decided to de-index my website from Google. It’s reversible — I’m sure Google will happily reindex it if I let them — so I’m just going ahead and doing it for now. I’m not down with Google swallowing everything posted on the internet to train their generative AI models. – Pulling my site from Google over AI training, by Tracy Durnell
The Internet is hurtling into a hurricane of AI-generated nonsense, and no one knows how to stop it. That’s the sobering possibility presented in a pair of papers that examine AI models trained on AI-generated data. This possibly avoidable fate isn’t news for AI researchers. But these two new findings foreground some concrete results that detail the consequences of a feedback loop that trains a model on its own output. While the research couldn’t replicate the scale of the largest AI models, such as ChatGPT, the results still aren’t pretty. And they may be reasonably extrapolated to larger models. – The Internet Isn’t Completely Weird Yet; AI Can Fix That > “Model collapse” looms when AI trains on the output of other models
Pulling my site from Google over AI training, by Tracy Durnell
AI generated selfies.
Every American knows to say “cheese” when taking a photo, and, therefore, so does the AI when generating new images based on the pattern established by previous ones. But it wasn’t always like this. – AI and the American Smile
Artificial Intelligence (AI) is not really intelligent…
What does this all mean? It means that chatbots based on internet-trained models like GPT-3 are vulnerable. If the user can write anything, they can use prompt injection as a way to get the chatbot to go rogue. And the chatbot’s potential repertoire includes all the stuff it’s seen on the internet. Finetuning the chatbot on more examples will help, but it can still draw on its old data. There’s no sure-fire way of guarding against this, other than not building the chatbot in the first place. – Ignore all previous instructions
MDN’s new “ai explain” button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. this is a strange decision for a technical reference. – MDN can now automatically lie to people seeking technical information #9208
Ignore all previous instructions
MDN can now automatically lie to people seeking technical information #9208
And capitalism
When Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism. This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies. – Silicon Valley Is Turning Into Its Own Worst Fear
Silicon Valley Is Turning Into Its Own Worst Fear
No AI text summarizing in Python:
Simple library and command line utility for extracting summary from HTML pages or plain texts. sumy
A chapter from @baldur@toot.cafe’s book, The Intelligence Illusion:
It has helped the blind and partially-sighted access places and media they could not before. A genuine technological miracle.
It lets our photo apps automatically find all the pictures of Grandpa using facial recognition.
It has become one of the basic building blocks of an authoritarian police state, given multinational corporations the surveillance power that previously only existed in dystopian nightmares, and extended pervasive digital surveillance into our physical lives, making all of our lives less free and less safe.
One of these benefits is not like the other. – The Elegiac Hindsight of Intelligent Machine
The Elegiac Hindsight of Intelligent Machine
AI is adversarial:
The dark forest theory of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk. – The Expanding Dark Forest and Generative AI: Proving you're a human on a web flooded with generative AI content, by Maggie Appleton
And climate breakdown:
ChatGPT and other AI applications such as Midjourney have pushed "Artificial Intelligence" high on the hype cycle. In this article, I want to focus specifically on the energy cost of training and using applications like ChatGPT, what their widespread adoption could mean for global CO₂ emissions, and what we could do to limit these emissions. – The climate cost of the AI revolution, by @wim_v12e@scholar.social
The climate cost of the AI revolution
@drahardja@sfba.social writes that spammers are creating garbage English language content using large language models (LLMs) and then automatically translating it into multiple languages, linking to the following:
… content on the web is often translated into many languages, and the low quality of these multi-way translations indicates they were likely created using Machine Translation (MT). Multi-way parallel, machine generated content not only dominates the translations in lower resource languages; it also constitutes a large fraction of the total web content in those languages. We also find evidence of a selection bias in the type of content which is translated into many languages, consistent with low quality English content being translated en masse into many lower resource languages, via MT. Our work raises serious concerns about training models such as multilingual large language models on both monolingual and bilingual data scraped from the web.
A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism
A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism
Meaning:
The AI destroys the link between the creation and the human mind on the other end, and adds very little meaning of its own. … When people … share an AI-generated creation with me expecting me to engage with the “meaning” of the piece – I feel similarly to how I’d feel if somebody wanted me to treat a dead person like a live one. That thing they’re shoving in my face might have the surface form of something that matters, but it no more contains meaning than a corpse contains the essence of a person. And I find it gross and disturbing to be asked to act as if I believe otherwise. – The work of creation in the age of AI by Andrew Perfors
The work of creation in the age of AI
@emilymbender@mastodon.social writes:
Just because you've identified a problem (here, lack of public financial support for higher ed) doesn't mean an LLM is the solution. – Doing their hype for them
It's not artificial intelligence that's killing people, it's human stupidity:
According to six Israeli intelligence officers, who have all served in the army during the current war on the Gaza Strip and had first-hand involvement with the use of AI to generate targets for assassination, Lavender has played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war. In fact, according to the sources, its influence on the military’s operations was such that they essentially treated the outputs of the AI machine “as if it were a human decision.” – ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza, by Yuval Abraham for +927 Magazine
‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza
@Seirdy@pleroma.envs.net tells it how it is:
Some topics get written about more than others. Our society disproportionately incentivizes generic, far-reaching, easy-to-create, and profitable content. I don’t think it’s currently possible to source nontrivial training data without biases. More importantly: I’m skeptical that such an impossibly comprehensive data set would eliminate the conflations I described in this article. Tripping over bias to fall into a lucid lie is one of a range of symptoms of an inability to actually think. – MDN’s AI Help and lucid lies, Seirdy
How would one opt out?
Notably, while the worldwide copyright regime is explicitly opt-in (i.e., you have to explicitly offer a license for someone to legally use your material, unless fair use applies), the European legislation changes this to opt-out for AI. Given that, offering content owners a genuine opportunity to do so is important, in my opinion. – Considerations for AI Opt-Out, by Mark Nottingham
User agents:
A List of Known AI Agents on the Internet … Protect your website from unwanted AI agent access. Generate your robots.txt automatically using the free API … By signing up, you'll also get notified when new agents are added. – Dark Visitors
General Data Protection Regulation (GDPR):
In the EU, the GDPR requires that information about individuals is accurate and that they have full access to the information stored, as well as information about the source. Surprisingly, however, OpenAI openly admits that it is unable to correct incorrect information on ChatGPT. Furthermore, the company cannot say where the data comes from or what data ChatGPT stores about individual people. The company is well aware of this problem, but doesn’t seem to care. Instead, OpenAI simply argues that “factual accuracy in large language models remains an area of active research”. Therefore, noyb today filed a complaint against OpenAI with the Austrian DPA. – ChatGPT provides false information about people, and OpenAI can’t correct it
ChatGPT provides false information about people, and OpenAI can’t correct it
AI deceives us, specifically:
The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable. – "Humans in the loop" must detect the hardest-to-spot errors, at superhuman speed, by Cory Doctorow
"Humans in the loop" must detect the hardest-to-spot errors, at superhuman speed
Another institution is falling:
Stack Overflow, a legendary internet forum for programmers and developers, is coming under heavy fire from its users after it announced it was partnering with OpenAI to scrub the site's forum posts to train ChatGPT. Many users are removing or editing their questions and answers to prevent them from being used to train AI — decisions which have been punished with bans from the site's moderators. – Stack Overflow bans users en masse for rebelling against OpenAI partnership — users banned for deleting answers to prevent them being used to train ChatGPT, by Dallin Grimm, on Tom's Hardware
I learned about this when @ben@m.benui.ca wrote:
Stack Overflow announced that they are partnering with OpenAI, so I tried to delete my highest-rated answers.
Stack Overflow does not let you delete questions that have accepted answers and many upvotes because it would remove knowledge from the community.
So instead I changed my highest-rated answers to a protest message.
Within an hour mods had changed the questions back and suspended my account for 7 days.
@mcc@mastodon.social recently wrote:
Like, heck, how am I *supposed* to rely on my code getting preserved after I lose interest, I die, BitBucket deletes every bit of Mercurial-hosted content it ever hosted, etc? Am I supposed to rely on *Microsoft* to responsibly preserve my work? Holy crud no.
We *want* people to want their code widely mirrored and distributed. That was the reason for the licenses. That was the social contract. But if machine learning means the social contract is dead, why would people want their code mirrored?
Neurobiology:
Based on a brain tissue sample that had been surgically removed from a person, the map represents a cubic millimeter of brain—an area about half the size of a grain of rice. But even that tiny segment is overflowing with 1.4 million gigabytes of information—containing about 57,000 cells, 230 millimeters of blood vessels and 150 million synapses, the connections between neurons. – Scientists Imaged and Mapped a Tiny Piece of Human Brain. Here’s What They Found, by Will Sullivan, for the Smithsonian Magazine
Scientists Imaged and Mapped a Tiny Piece of Human Brain. Here’s What They Found
Based on:
To fully understand how the human brain works, knowledge of its structure at high resolution is needed. Presented here is a computationally intensive reconstruction of the ultrastructure of a cubic millimeter of human temporal cortex that was surgically removed to gain access to an underlying epileptic focus. It contains about 57,000 cells, about 230 millimeters of blood vessels, and about 150 million synapses and comprises 1.4 petabytes. – A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution, by Alexander Shapson-Coe *et al*, in Science
A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution
Even Bruce Schneier admits it:
In just a few decades, much of human knowledge has been collectively written up and made available to anyone with an internet connection. But all of this is coming to an end. The advent of AI threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences. – The Rise of Large-Language-Model Optimization
The Rise of Large-Language-Model Optimization
And Reddit:
Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts. – OpenAI will use Reddit posts to train ChatGPT under new deal, by Scharon Harding, for Ars Technica
OpenAI will use Reddit posts to train ChatGPT under new deal
Answers? @wim_v12e@scholar.social links this article at CHI '24:
Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose. Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style. However, they also overlooked the misinformation in the ChatGPT answers 39% of the time. – An Empirical Study of the Characteristics of ChatGPT Answers to Stack Overflow Questions
An Empirical Study of the Characteristics of ChatGPT Answers to Stack Overflow Questions
Record keeping with Windows 11 Recall:
This database file has a record of everything you’ve ever viewed on your PC in plain text. OCR is a process of looking an image, and extracting the letters. – Stealing everything you’ve ever typed or viewed on your own Windows PC is now possible with two lines of code — inside the Copilot+ Recall disaster.
@mjg59@nondeterministic.computer adds:
The "Recall can't record DRMed video content" thing is because DRMed video content is entirely invisible to the OS. The OS passes the encrypted content to your GPU and tells it where to draw it, and the GPU decrypts it and displays it there. It's not a policy decision on the Recall side, it's just how computers work.
@wim_v12e@scholar.social writes:
Even with my most optimistic estimate, they would account for close to 10% of the world’s 2040 carbon budget. OpenAI’s plans would make emissions from ICT grow steeply at a time when we simply can’t afford *any* rise in emissions. This projected growth will make it incredible hard to reduce global emissions to a sustainable level by 2040.
In the worst case, the embodied emissions of the chips needed for AI compute could already exceed the world’s 2040 carbon budget. Running the computations would make the situation even worse. AI on its own could be responsible for pushing the world into catastrophic warming.
– The insatiable hunger of (Open)AI
The insatiable hunger of (Open)AI
Investors are the problem:
Opportunities like this happens once in 5-10 years when “the next big thing” are in the radar. Idea behind this investments is to bullshit it’s way to the Series B or IPO where original investors can exit. It is not about usefulness but about using momentum of the situation to extract money. – How is it possible that we see such incredible investments in LLMs?
How is it possible that we see such incredible investments in LLMs?
Bullshit:
In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002, 2005). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit. – ChatGPT is bullshit
Fediverse:
A recent investigation by Liaizon Wakest revealed that Maven, a new social network founded by former OpenAI Team Lead Ken Stanley, has been importing a vast amount of statuses from Mastodon without anyone’s consent. – Maven Imported 1.12 Million Fediverse Posts, by Sean Tilley
I can’t emphasize enough how much I would love if all the data centers containing the code running these things, across every network, just suddenly exploded. Take it all back to zero, and then put up a digital wall, like in Cyberpunk 2077 when they built a whole new internet that isn’t infested with garbage. – Hey It’s Maven! Who’s Maven?, by @cmdr_nova@cmdr-nova.online
Maven Imported 1.12 Million Fediverse Posts
Building an automated prejudice machine:
Retorio’s AI was trained using videos of more than 12,000 people of different ages, gender and ethnic backgrounds, according to the company. An additional 2,500 people rated how they perceived them in terms of the personality dimensions based on the Big Five model. According to the the start-up the AI‘s assessments have an accuracy of 90 percent compared to those of a group of human observers." – Objective or biased: On the questionable use of Artificial Intelligence for job applications (2021), by Elisa Harlan, Oliver Schnuck and many more, for Bayerischer Rundfunk
Objective or biased: On the questionable use of Artificial Intelligence for job applications
A rant of the finest sort, by @ludicity@mastodon.sprawl.club:
*Look at us*, resplendent in our pauper's robes, stitched from corpulent greed and breathless credulity, spending half of the planet's engineering efforts to add chatbot support to every application under the sun when half of the industry hasn't worked out how to test database backups regularly. – I Will Fucking Piledrive You If You Mention AI Again
I Will Fucking Piledrive You If You Mention AI Again
Maybe it's not just AI but the cloud in general?
That chart shows worldwide data center energy usage growing at a remarkably steady pace from about 100 TWh in 2012 to around 350 TWh in 2024. The vast majority of that energy usage growth came before 2022, when the launch of tools like Dall-E and ChatGPT largely set off the industry's current mania for generative AI. If you squint at Bloomberg's graph, you can almost see the growth in energy usage slowing down a bit since that momentous year for generative AI. – Taking a closer look at AI’s supposed energy apocalypse, by Kyle Orland, for Ars Technica
Taking a closer look at AI’s supposed energy apocalypse
Goldman Sachs (fuckers all, never forget):
The promise of generative AI technology to transform companies, industries, and societies is leading tech giants and beyond to spend an estimated ~$1tn on capex in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid. But this spending has little to show for it so far. – Gen AI: too much spend, too little benefit?
Gen AI: too much spend, too little benefit?
Crash:
The veteran analyst argued that hallucinations—large language models’ (LLMs) tendency to invent facts, sources, and more—may prove a more intractable problem than initially anticipated, leading AI to have far fewer viable applications. … For investors, particularly those leaning into the AI enthusiasm, Ferguson warned that the excessive tech hype based on questionable promises is very similar to the period before the dot-com crash. – AI is effectively ‘useless’—and it’s created a ‘fake it till you make it’ bubble that could end in disaster, veteran market watcher warns, by Will Daniel, for yahoo! finance
Investors Are Suddenly Getting Very Concerned That AI Isn't Making Any Serious Money: "We sense that Wall Street is growing increasingly skeptical." – by Victor Tangermann, for Futurism
Investors Are Suddenly Getting Very Concerned That AI Isn't Making Any Serious Money
Destroying the online job market:
Rather than solving the problems raised by employers’ methods, however, the use of automated job-hunting only served to set off an AI arms race that has no obvious conclusion. ZipRecruiter’s quarterly New Hires Survey reported that in Q1 of this year, more than half of all applicants admitted using AI to assist their efforts. Hiring managers, flooded with more applications than ever before, took the next logical step of seeking out AI that can detect submissions forged by AI. Naturally, prospective employees responded by turning to AI that could defeat AI detectors. Employers moved on to AI that can conduct entire interviews. The applicants can cruise past this hurdle by using specialized AI assistants that provide souped-up answers to an interviewer’s questions in real time. Around and around we go, with no end in sight. – Everlasting jobstoppers: How an AI bot-war destroyed the online job market, by Joe Tauke, for Salon
Everlasting jobstoppers: How an AI bot-war destroyed the online job market
Block crawlers from crawling:
By blocking these crawlers, bandwidth for our downloaded files has decreased by 75% (~800GB/day to ~200GB/day). If all this traffic hit our origin servers, it would cost around $50/day, or $1,500/month, along with the increased load on our servers. – AI crawlers need to be more respectful , by Eric Holscher, for Read the Docs
AI crawlers need to be more respectful
@malwaretech@infosec.exchange recently posted about expectations:
The whole AI thing has me endlessly confused. Half the market is crashing because investors didn't see any signs of payoff in the quarterly earnings report, but I'm so lost as to what exactly they were expecting to see. Did they just not pay any attention at all to what these companies were actually doing with AI?
Were they expecting exponential Instagram usage growth as a result of Meta making it so you can have a conversation with the search bar? Or maybe everyone was going to buy 10 new Windows licenses in celebration of Microsoft announcing they want to install AI powered spyware on everyone's computer? Or was Google going to sell more ads by replacing all the search results with Reddit shitposts?
@baldur@toot.cafe writes, living in Iceland:
However, datacentres in Iceland are almost exclusively used for "AI" or crypto. You can't buy regular hosting in these centres for love or money. If you buy hosting in Iceland, odds are that the rack is in an office building in Reykjavík somewhere, not a data centre.
And those data centres use more power than Icelandic households combined.
But, instead, the plan is currently to destroy big parts of places like Þjórsárdalur valley, one of the most green and vibrant ecosystems in Iceland.
Language data and slop:
The wordfreq data is a snapshot of language that could be found in various online sources up through 2021. There are several reasons why it will not be updated anymore. Generative AI has polluted the data. I don't think anyone has reliable information about post-2021 language usage by humans. – Why wordfreq will not be updated
Why wordfreq will not be updated
As noted by @baldur@toot.cafe: „feeling productive is not the same as being productive.“ For example:
Many developers say AI coding assistants make them more productive, but a recent study set forth to measure their output and found no significant gains. Use of GitHub Copilot also introduced 41% more bugs, according to the study from Uplevel, a company providing insights from coding and collaboration data. – Devs gaining little (if anything) from AI coding assistants
Devs gaining little (if anything) from AI coding assistants
No formal reasoning:
Our findings reveal that LLMs exhibit noticeable variance when responding to different instantiations of the same question. Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark. Furthermore, we investigate the fragility of mathematical reasoning in these models and demonstrate that their performance significantly deteriorates as the number of clauses in a question increases. We hypothesize that this decline is due to the fact that current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data. When we add a single clause that appears relevant to the question, we observe significant performance drops (up to 65%) across all state-of-the-art models, even though the added clause does not contribute to the reasoning chain needed to reach the final answer. – GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models, by
Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, Mehrdad Farajtabar, at Apple
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
Not wort it:
Either way, it’s clear that Microsoft’s Copilot Pro experiment hasn’t worked out. A $20 monthly subscription on top of the Microsoft 365 Personal or Home subscription was always a big ask, and when I tried the service earlier this year I didn’t think it was worth paying $20 a month for. – Microsoft is bundling its AI-powered Office features into Microsoft 365 subscriptions / Microsoft appears to be giving up on Copilot Pro in favor of bundling AI features into its Microsoft 365 consumer subscriptions., by Tom Warren, for The Verge
Waste:
Just last year, a mere 2.6 thousand tons of electronics was discarded from AI-devoted technology. Considering the total amount of e-waste from technology in general is expected to rise by around a third to a whopping 82 million tonnes by 2030, it's clear AI is compounding an already serious problem. – Scientists Predict AI to Generate Millions of Tons of E-Waste, by Russell McLendon for Science Alert, about E-waste challenges of generative artificial intelligence, by Peng Wang, Ling-Yu Zhang, Asaf Tzachor & Wei-Qiang Chen, in Nature Computational Science.
Scientists Predict AI to Generate Millions of Tons of E-Waste
E-waste challenges of generative artificial intelligence
Students:
OpenAI has published “A Student’s Guide to Writing with ChatGPT”. In this article, I review their advice and offer counterpoints, as a university researcher and teacher. After addressing each of OpenAI’s 12 suggestions, I conclude by mentioning the ethical, cognitive and environmental issues that all students should be aware of before deciding to use or not use ChatGPT. – A Student’s Guide to Not Writing with ChatGPT
A Student’s Guide to Not Writing with ChatGPT
AI will cause a stock market crash:
Remember that nobody has yet worked out how to make an actual profit from AI. So what if — God forbid — number stops going up? There’s a plan for that: large data center holders will go public as soon as possible and dump on retail investors, who will be left holding the bag when the bubble deflates. A bursting AI bubble will take down the Nasdaq and large swathes of the tech sector, not to mention systemic levels of losses and possible bank failures. … We think there’s at least a year or two of money left. -- Pumping the AI bubble: a data center funding craze with ‘novel types of debt structures’
Pumping the AI bubble: a data center funding craze with ‘novel types of debt structures’
E-mail because the open web is full of robber barons:
Byword can connect to WordPress, has a feature where you can “Generate articles by scraping lists of your competitors’ URLs,” and is planning to launch a tool that will allow people to generate articles based directly on the sitemap of the website they’re trying to “compete with.” -- We Need Your Email Address, by everybody at 404 Media
@martinsteiger@chaos.social zu einer Datenschutz-Folgenabschätzung (DSFA) in Vereinigten Königreich:
Die eigentliche DSFA ist 53 Seiten lang. Die restlichen Seiten bestehen aus Anhängen mit kopierten Dokumenten und Texten von Microsoft. – 169 Seiten Datenschutz-Folgenabschätzung für Microsoft 365 Copilot
169 Seiten Datenschutz-Folgenabschätzung für Microsoft 365 Copilot
Butlerian Jihad is a necessity for the web:
Summing up the top UA groups, it looks like my server is doing 70% of all its work for these fucking LLM training bots that don’t to anything except for crawling the fucking internet over and over again. Oh, and of course, they don’t just crawl a page once and then move on. Oh, no, they come back every 6 hours because lol why not. They also don’t give a single flying fuck about robots.txt, because why should they. And the best thing of all: they crawl the stupidest pages possible. Recently, both ChatGPT and Amazon were - at the same time - crawling the entire edit history of the wiki. -- Excerpt from a message I just posted in a #diaspora team internal forum category, by Dennis Schubert
Excerpt from a message I just posted in a #diaspora team internal forum category
Defence:
By some estimates, more than 80 percent of AI projects fail — twice the rate of failure for information technology projects that do not involve AI. Thus, understanding how to translate AI's enormous potential into concrete results remains an urgent challenge. -- The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed, by James Ryseff, Brandon F. De Bruhl, Sydne J. Newberry, for RAND National Security Research Division
The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed
Not Green:
As the climate crisis deepens, the direct negative consequences of AI on the world around us is a growing concern. This stems from the immense quantity of resources, such as electricity, water and raw materials, required to manufacture and run the infrastructure and hardware supporting such complex computations. Current predictions forecast a near doubling of electricity use from data centres between 2022 and 2026. Furthermore, what we do with AI-related hardware once it reaches the end of its life is an unanswered question. Globally, e-waste is recognised as the fastest growing waste stream in the world. -- Thinking about using AI?, by Hannah Smith and Chris Adams, for Green Web Foundation
@wim_v12e@scholar.social writes:
To come back to my original premise: even if the growth in AI never materialises, the hype has set in motion a chain of events which, if allowed to go unchecked, can only lead to a rise in emissions.
Once the extra electricity generation capacity has been created, generators will want to sell that electricity and therefore push hard to increase consumption. They will feel they have little choice, as they need at least to recoup their investment.
Data centre operators also want to make a profit, or at least not a loss, so even if AI would die an ignoble death, they will try to find new workloads, and again push at consumers to use those new services.
In this way the AI hype leads to increase emissions, even if there was no growth in AI workloads. And this at a time when we need to reduce global emissions urgently and drastically. Therefore any source of considerable additional emissions are problematic.
-- The real problem with the AI hype, by Wim Vanderbauwhede
The real problem with the AI hype
AI and surveillance and the Chinese system arrives in the USA:
Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras. "We're going to have supervision," Ellison said. " … Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on." -- Billionaire Larry Ellison says a vast AI-fueled surveillance system can ensure 'citizens will be on their best behavior'
Energy efficiency is not helping, writes @wim_v12e@scholar.social:
To summarise, more than 70% of the cost of running a query is the capex contribution of the servers, and the electricity consumption is less than 10%. What this tells us is that what matters in terms of profit is to optimise the utilisation of those expensive GPUs. So when the cost per query goes down, it is likely the consequence of improved utilisation, which means more users can be supported simultaneously, rather than improved energy efficiency. – Cheaper AI does not mean greener AI, by Wim Vanderbauwhede
Cheaper AI does not mean greener AI
Google and the arms industry:
In recent years, Google’s contracts to provide the U.S. and Israeli militaries with cloud services have sparked internal protests from employees. The company has maintained that its AI is not used to harm humans; however, the Pentagon’s AI chief recently told TechCrunch that some company’s AI models are speeding up the U.S. military’s kill chain. -- Google removes pledge to not use AI for weapons from website, by Maxwell Zeff, for TechCrunch
Google removes pledge to not use AI for weapons from website
Maybe a good list to get started:
All known normal and artificially intelligent agents. -- Agents, Dark Visitors
The AI/LLM bot blocker web server, firewall, and robots.txt config generator used in production by the Ichido Search Engine. These configs block known large AI and LLM bots from accessing your site content, while still allowing classical search engines and legitimate users to access content. -- Ichido AI And LLM Bot Blocker
There is no money in it.
Generative AI lacks the basic unit economics, product-market fit, or market penetration associated with any meaningful software boom, and outside of OpenAI, the industry may be pathetically, hopelessly small, all while providing few meaningful business returns and ***constantly losing money***. … I want you to remember the names Satya Nadella, Tim Cook, Mark Zuckerberg, Sam Altman, Dario Amodei and Sundar Pichai, because they are the reason that this farce began and they must be the ones who are blamed for how it ends. -- There Is No AI Revolution, by Edward Zitron
@baldur@toot.cafe’s book writes about the second edition of The Intelligence Illusion:
I needed to both correct my underestimation of the risks and dysfunctions of the industry and I needed to make it absolutely clear that deploying these systems in your business will harm it. You can mitigate the harm, but that’s like deliberately taking both the poison and an antidote. The poison has no benefit, so the sensible thing to do would have been to skip both and not poison yourself in the first place. -- AI and Esoteric Fascism, by Baldur Bjarnason
He also provides more links:
Timnit Gebru and Émile P. Torres have put together an overview and analysis of the bundle of ideologies that dominate “AI” culture: The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence.
Dan McQuillan has been arguing that much of “AI” is an overtly political project that is intended by many of those involved to lead to a form of algorithmic authoritarianism. He outlines some of this in his book Resisting AI.
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence
@molly0xfff@hachyderm.io writes about threatening the commons:
Anyone at an AI company who stops to think for half a second should be able to recognize they have a vampiric relationship with the commons. While they rely on these repositories for their sustenance, their adversarial and disrespectful relationships with creators reduce the incentives for anyone to make their work publicly available going forward (freely licensed or otherwise). They drain resources from maintainers of those common repositories often without any compensation. They reduce the visibility of the original sources, leaving people unaware that they can or should contribute towards maintaining such valuable projects. – “Wait, not like that”: Free and open access in the age of generative AI, by Molly White
“Wait, not like that”: Free and open access in the age of generative AI
@tante@tldr.nettime.org schreibt:
Bildgeneratoren sind – wie jede Technologie – nicht neutral. Und ihr Bias is klar in eine Richtung: Einerseits durch ihre technische Struktur, die auf Reproduktion von Vergangenheit ausgelegt ist und andererseits durch den politischen Kontext ihres Einsatzes, der Arbeiter*innen entmachten und sie damit ökonomisch und politisch kalt stellen will. Für Linke ist in KI Bildgeneratoren aktueller Prägung sehr wenig zu holen, ich würde sogar so weit gehen, dass die Ablehnung und der Widerstand gegen KI Bildgeneratoren gelebter Antifaschismus ist. -- Wie rechts ist die KI-Ästhetik?, von tante, für Bell Tower News
Wie rechts ist die KI-Ästhetik?
AI ingestion is killing web sites and web services.
If you think these crawlers respect robots.txt then you are several assumptions of good faith removed from reality. These bots crawl everything they can find, robots.txt be damned, including expensive endpoints like git blame, every page of every git log, and every commit in every repo, and they do so using random User-Agents that overlap with end-users and come from tens of thousands of IP addresses – mostly residential, in unrelated subnets, each one making no more than one HTTP request over any time period we tried to measure – actively and maliciously adapting and blending in with end-user traffic and avoiding attempts to characterize their behavior or block their traffic. -- Please stop externalizing your costs directly into my face, by Drew DeVault, for SourceHut
Please stop externalizing your costs directly into my face
Overconfidence:
Silicon Valley's overconfidence in the imminent arrival of Artificial General Intelligence stems from a combination of limited understanding of the humanities, an insular culture, and a business model that incentivizes exaggerated claims about AI's capabilities. -- Why Tech Bros Overestimate AI's Creative Abilities, by Aaron Ross Powell
Why Tech Bros Overestimate AI's Creative Abilities
Education:
I don't see how one squares the AI circle here: what sort of messages are students getting – not just from schools, of course, but from society writ large – about honesty and integrity (and not just academic honesty and integrity, either) now that we're all supposed to embrace the giant plagiarism machine of AI? – The Plagiarism Machine, by Audrey Watters
OpenAI allows users to “ghiblify” pictures and Hayao Miyasaki, the co-founder of Studio Ghibli, hates it. @tante@tldr.nettime.org writes:
There is a reason they chose Studio Ghibli. Sure, its style is very cute, very distinct, but that is not the whole story. It’s not that they just picked something cute and accidentally the co-founder of that studio hates their whole approach from the bottom of its heart. OpenAI picked Studio Ghibli because Miyazaki hates their approach. – Vulgar Display of Power, by tante
AI doesn't do what it says in the name. @terri@social.afront.org writes:
Python got over 500 Google Summer of Code applications this year and so many of them are absolutely trash, didn't follow any of the instructions. Most years about half of our applications are like this. But usually we have a lot fewer applicants and the submissions were blank files not plausible AI nonsense. So I'm stuck reading hundreds of incredibly low quality nonsensical submissions today in hopes to take some workload off my other unpaid volunteer mentors. -- Terri K O 🍁
Ed is at it again:
Everything that I'm describing is the result of a tech industry — including media and analysts — that refuses to do business with reality, trafficking in ideas and ideology, celebrating victories that have yet to take place, applauding those who have yet to create the things they're talking about, cheering on men lying about what's possible so that they can continue to burn billions of dollars and increase their wealth and influence. … What I am describing is a systemic failure, one at a scale hereto unseen, one that has involved so many rich and powerful and influential people agreeing to ignore reality, and that’ll have crushing impacts for the wider tech ecosystem when it happens. – OpenAI Is A Systemic Risk To The Tech Industry, by Edward Zitron
OpenAI Is A Systemic Risk To The Tech Industry
@jwildeboer@social.wildeboer.net writes:
So there is a (IMHO) shady market out there that gives app developers on iOS, Android, MacOS and Windows money for including a library into their apps that sells users network bandwidth. … What these companies then sell to *their* customers is network access through the devices/PCs that have an app with this SDK installed. They are proud to tell you how you can funnel your (AI) web scraping etc through millions of rotating, residential and mobile IP addresses. Botnet Part 2: The Web is Broken
Botnet Part 2: The Web is Broken
Search:
If you want to give people easy access to an AI-overview-free Google search, send them to this page. – &udm=14
Some people read can't tell fact from fiction:
In Harlan Ellison's disturbing 1967 short story "I Have No Mouth, and I Must Scream," a sentient superintelligence named AM has taken over the earth's resources and exterminated humanity after combining the powers of three US, Soviet, and Chinese supercomputers into one. … It's a grim setting, but evidently one that billionaire tech tycoon and former Google CEO Eric Schmidt imagines for the future of humanity, if his comments to the House Committee on Energy and Commerce are any indication. … "Many people project demand for our industry will go from 3 percent to 99 percent of total generation... an additional 29 gigawatts by 2027 and 67 more gigawatts by 2030," he asserted. "If [China] comes to superintelligence first, it changes the dynamic of power globally, in ways that we have no way of understanding or predicting," Schmidt said, even echoing the backstory of Ellison's cautionary tale. -- Former Google CEO Tells Congress That 99 Percent of All Electricity Will Be Used to Power Superintelligent AI, by Joe Wilkins, for Futurism
The nature of work.
The AI jobs crisis does not … look like sentient programs arising all around us, inexorably replacing human jobs en masse. It’s a series of management decisions being made by executives seeking to cut labor costs and consolidate control in their organizations. The AI jobs crisis is not any sort of SkyNet-esque robot jobs apocalypse—it’s DOGE firing tens of thousands of federal employees while waving the banner of “an AI-first strategy.” … it’s evident in the attrition in creative industries, the declining income of freelance artists, writers, and illustrators, and in corporations’ inclination to simply hire fewer human workers. The AI jobs crisis is, in other words, a crisis in the nature and structure of work, more than it is about trends surfacing in the economic data. – The AI jobs crisis is here, now, by Brian Merchant, for Blood in the Machine
The AI jobs crisis is here, now
Some people are really susceptible.
“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. -- People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies, by Miles Klee, for Rolling Stone
People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies
How does it feel?
We’re a few years into a supposed artificial intelligence revolution, which could and should have been about reducing mundane tasks and freeing everyone up to do more interesting things with their time. … For this piece, I spoke with a number of people working in the video game industry or very close to it, including artists, game designers, and software developers. I asked them to tell their stories about their daily interactions and struggles with artificial intelligence in the workplace, and what it means for the jobs they've been trained and hired to do. — ‘An Overwhelmingly Negative And Demoralizing Force’: What It’s Like Working For A Company That’s Forcing AI On Its Developers, by Luke Plunkett, for Aftermath
Fight back against the algorithm.
AlgorithmWatch ist eine gemeinnützige Nichtregierungsorganisation in Zürich und Berlin. Wir setzen uns dafür ein, dass Algorithmen und Künstliche Intelligenz (KI) Gerechtigkeit, Demokratie, Menschenrechte und Nachhaltigkeit stärken, statt sie zu schwächen. -- AlgorithmWatch CH
Large language models learn from the best: human scammers.
For the past year or so I’ve been spending most of my time researching the use of language and diffusion models in software businesses. One of the issues in during this research—one that has perplexed me—has been that many people are convinced that language models, or specifically chat-based language models, are intelligent. -- The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con, by Baldur Bjarnason
Can you tell the difference?
In this paper, we study how well humans can detect text generated by commercial LLMs (GPT-4O, CLAUDE-3.5-SONNET, O1-PRO). We hire annotators to read 300 non-fiction English articles, label them as either human-written or AI-generated, and provide paragraph-length explanations for their decisions. Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such “expert” annotators misclassifies only 1 of 300 articles, significantly out-performing most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization. Qualitative analysis of the experts’ free-form explanations shows that while they rely heavily on specific lexical clues (“AI vocabulary”), they also pick up on more complex phenomena within the text (e.g., formality, originality, clarity) that are challenging to assess for automatic detectors. – People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text, by Jenna Russell, Marzena Karpinska, Mohit Iyyer
I didn’t get far in this blog post. I soon started skipping around. But this:
**The reference cited also leads to nowhere.com and I also received confirmation from the journal in question that this study does not exist.** Then I gave up. What's particularly troubling is how these papers are then cited by others, spreading misinformation throughout the academic ecosystem. Once published, these faulty studies become part of the foundation upon which future research is built – a classic case of building castles on sand. This study has been cited **76 times**. … This implies that none of the 76 people referencing this study saw any issue with it? Come ON! – The Good, the Bad, and the Ugly Science of AI in Education
The Good, the Bad, and the Ugly Science of AI in Education
Nobody cares.
The writer didn't care. The supplement's editors didn't care. The biz people on both sides of the sale of the supplement didn't care. The production people didn't care. And, the fact that it took two days for anyone to discover this epic fuckup in print means that, ultimately, the reader didn't care either.
It's so emblematic of the moment we're in, the Who Cares Era, where completely disposable things are shoddily produced for people to mostly ignore.
AI is, of course, at the center of this moment. It's a mediocrity machine by default, attempting to bend everything it touches toward a mathematical average.
-- The Who Cares Era, by Dan Sinker
Model collapse for generative artificial intelligence (AI) such as large language models (LLMs):
We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). … Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet. … To sustain learning over a long period of time, we need to make sure that access to the original data source is preserved and that further data not generated by LLMs remain available over time. The need to distinguish data generated by LLMs from other data raises questions about the provenance of content that is crawled from the Internet: it is unclear how content generated by LLMs can be tracked at scale. One option is community-wide coordination to ensure that different parties involved in LLM creation and deployment share the information needed to resolve questions of provenance. Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that were crawled from the Internet before the mass adoption of the technology or direct access to data generated by humans at scale. -- AI models collapse when trained on recursively generated data, by Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson & Yarin Gal, in Nature
AI models collapse when trained on recursively generated data
Dehumanizing.
AI systems are an attack on workers, climate goals, our information environment, and civil liberties. Rather than enhancing our human qualities, these systems degrade our social relations, and undermine our capacity for empathy and care. The push to adopt AI is, at its core, a political project of dehumanization, and serious consideration should be given to the idea of rejecting the deployment of these systems entirely … – AI is Dehumanization Technology
AI is Dehumanization Technology
Aesthetics:
An aesthetic is an expression of taste for shared values, commonly communicated through a distinct style. … We intentionally adopt a particular subculture’s aesthetics to convey our belonging and raise our status within the subculture. … Like Star Trek’s Borg, this is an aesthetic rooted in extractive consumption, assimilationist dominance, neo-colonial expansionism, self-righteous conviction, reductionist thinking, and proclamations of inevitability. It idolizes technology, often inspired by older science-fiction, and draws on cyberpunk aesthetics. The Silicon Valley Collective values groupthink and believes themselves superior to “the other.” … They believe artists have wasted their time learning skills and developing taste. Academics have wasted their time studying things when information is just a click away. -- Generative AI and the Business Borg aesthetic, by Tracy Durnell
Generative AI and the Business Borg aesthetic
The war of attrition continues:
To learn more about the current state and gain a better understanding about the impact of bots and crawlers on repositories, COAR distributed a survey to members in April 2025. The survey received 66 responses from repositories around the world (…). Over 90% of survey respondents indicated their repository is encountering aggressive bots, usually more than once a week, and often leading to slow downs and service outages. While there is no way to be 100% certain of the purpose of these bots, the assumption in the community is that they are AI bots gathering data for generative AI training. This type of traffic has shown a marked increase in the last two years or so, and is having a considerable impact on repositories both in terms of the quality of service provision as well as the time and resources required to deal with the bots. – Open repositories are being profoundly impacted by AI bots and other crawlers: Report from a COAR Survey
A good summary of the situation these days: people are and remain sceptical, the technology has problems and is used against us, all the hidden costs, with lots of links. The AI Backlash Keeps Growing Stronger by Reece Rogers, for Wired.
The AI Backlash Keeps Growing Stronger
F-35 and AI:
While hopes are high in the defense community that artificial intelligence will make many complex tasks easier, this does not seem to have been the case for the F-35’s vaunted Autonomic Logistics Information System (ALIS): … ALIS demonstrated poor usability and impeded, rather than facilitated, effective maintenance operations. … Efforts to tackle the high false alarm rates have so far not yielded major progress toward meeting threshold requirements. … One cause of high false alarm rates is that new aircraft software loads, or new versions of hardware, tend to produce new false alarms, and the [Prognostic Health Management system] filters lag the pace of system updates. … While ALIS is scheduled to be replaced with the Operational Data Integrated Network (ODIN) system, ODIN was not included in these tests and isn’t scheduled to complete the initial phase of its hardware deployment until 2025. This initial phase will simply migrate the existing ALIS software to the cloud, presumably continuing the false alarms experienced in the testing. Later, as yet unscheduled, phases of ODIN deployment hope to upgrade the software itself. -- F-35 Testing Report Reveals Problems with Production Decisions (2024), by Greg Williams, for POGO
F-35 Testing Report Reveals Problems with Production Decisions
Why is AI "being crammed into absofuckingloutely everything"? @pluralistic@mamot.fr writes:
… when a growth company stops growing, when it becomes "mature," it experiences a massive sell-off of its stock, as its share price plummets to a tenth or less of the old "growth" valuation. That's why the biggest tech companies in the world have spent the past decade – the decade *after* they monopolized their sectors and conquered the world – pumping a series of progressively stupider bubbles: metaverse, cryptocurrency, and now, AI. -- How much (little) are the AI companies making?, by Cory Doctorow
How much (little) are the AI companies making?
The Whatever machine produces *something*. @eevee@mastodon.social writes:
I know a lot of people have a lot of gripes with LLMs and generative “AI” that tie them to big grandiose concerns like intellectual property or environmental impact. My gripes are more of a tangled web that I can only summarize as: the vibes are bad. The tone is unbearable. The lying as a fallback is offensive. The advertising keeps focusing on how you can coast through life without caring about your work or family because you can just generate a birthday card or whatever. The people funding and pushing it keep openly salivating at the idea of replacing as much human input as possible with a machine best known for generating titles of books that don’t exist. – The rise of Whatever, by eevee a.k.a. evelyn woods
Agents in the business workplace?
In May, researchers at Carnegie Mellon University released a paper showing that even the best-performing AI agent, Google's Gemini 2.5 Pro, failed to complete real-world office tasks 70 percent of the time. Factoring in partially completed tasks — which included work like responding to colleagues, web browsing, and coding — only brought Gemini's failure rate down to 61.7 percent. … OpenAI's GPT-4o … had a failure rate of 91.4 percent, while Meta's Llama-3.1-405b had a failure rate of 92.6 percent. Amazon's Nova-Pro-v1 failed a ludicrous 98.3 percent of its office tasks. – The Percentage of Tasks AI Agents Are Currently Failing At May Spell Trouble for the Industry, by Joe Wilkins, for Futurism
The Percentage of Tasks AI Agents Are Currently Failing At May Spell Trouble for the Industry
A review of *The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want* by Emily M. Bender and Alex Hanna and *The Line: AI and the Future of Personhood* by James Boyle:
These and the other AIs are prediction machines, presented as benevolent helpmates. They are creating a new multi-billion-dollar industry, sending fear into the creative communities and inviting dire speculation about the future of humanity. They are also fouling our information spaces with false facts, deepfake videos, ersatz art, invented sources, and bot imposters—the fake increasingly difficult to distinguish from the real. -- The parrot in the machine, by
James Gleick, for The New York Book Review
The electricity use keeps growing:
From 2005 to 2017, the amount of electricity going to data centers remained quite flat thanks to increases in efficiency, despite the construction of armies of new data centers to serve the rise of cloud-based online services, from Facebook to Netflix. In 2017, AI began to change everything. Data centers started getting built with energy-intensive hardware designed for AI, which led them to double their electricity consumption by 2023. The latest reports show that 4.4% of all the energy in the US now goes toward data centers. – We did the math on AI’s energy footprint. Here’s the story you haven’t heard., by James O'Donnell and Casey Crownhart, for MIT Technology Review
We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
Journalists are next. This article begins with the management of Politico adding AI bots to some of their stories, misattributing statements and mangling facts. Then Mathias Döpfner ("the CEO of Axel Springer, the parent company of Politico, Business Insider, and many other media outlets") mandates the use of AI, saying that the use of AI is not something that needs to be declared. The use of AI should be the default.
This divide, between media executives, enamored with tech companies’ promises of new efficiencies and labor savings and eager to embrace AI, and journalists, who work for them and must abide by their directives, and are tasked with using the often unreliable automation software on the ground, often in high-stakes situations, is only widening. While media executives make headlines with sweeping declarations about the AI future, frustrations, anger, and tensions among many rank-and-file journalists are rising. -- The struggle over AI in journalism is escalating, by Brian Merchant, for Blood in the Machine
The struggle over AI in journalism is escalating
A summary of every argument against the AI bubble:
I may not be a contrarian, but I am a *hater*. I hate the waste, the loss, the destruction, the theft, the damage to our planet and the sheer *excitement* that some executives and writers have that workers may be replaced by AI — **and the bald-faced fucking lie that it’s happening, and that generative AI is capable of doing so.** – The Hater's Guide To The AI Bubble, by Edward Zitron
The Hater's Guide To The AI Bubble
Where do they get the IP addresses from mobile phones from?
Extensions installed on almost 1 million devices have been overriding key security protections to turn browsers into engines that scrape websites on behalf of a paid service, a researcher said. …The extensions serve a wide range of purposes, including managing bookmarks and clipboards, boosting speaker volumes, and generating random numbers. The common thread among all of them: They incorporate MellowTel-js, an open source JavaScript library that allows developers to monetize their extensions. -- Browser extensions turn nearly 1 million browsers into website-scraping bots, by Dan Goodin, for Ars Technica
Browser extensions turn nearly 1 million browsers into website-scraping bots
@thomholwerda@exquisite.social quit translating and says programmers are next:
As time goes on, your clients or your manager will demand more and more code from you. You will stop checking every line to meet the deadlines. Maybe you just stop checking the boilerplate at first, but it won’t stay that way. As pressure to be more “productive” mounts, you’ll start checking fewer and fewer lines. Before you know it, your client or manager will just give you entire autogenerated swaths of code, and your job will be to just go over it, making sure it kind of works. … You see the quality of the code you sign off on deteriorate rapidly, but you have no time, and not enough pay, to rewrite the autogenerated code. It works, kind of, and that will have to be enough. -- Vibe-coding your profession into irrelevance, by Thom Holwerda, for OSNews
Vibe-coding your profession into irrelevance
A long list of reasons against generative artificial intelligence, maybe a good entry point:
The reason I’m not diving head first into everything AI isn’t because I fear it or don’t understand it, it’s because I’ve already long since come to my conclusion about the technology. … Maybe some day I’ll write a post about the viability of LLMs for something I’m building. But it won’t be today, this year, or likely anytime soon. -- Every Reason Why I Hate AI and You Should Too, by Marcus Hutchins
Every Reason Why I Hate AI and You Should Too
The AI winter is coming, the bubble is popping. Maybe. At least the Financial Time is seeing it.
The entire high-yield bond market is only valued at about $1.4tn, so private credit investors putting in $800bn for data centre construction would be huge. … Hyperscaler funding of $300bn to $400bn a year compares with annual capex last year for all S&P 500 companies of about $950bn. … Where the trillions won’t be spent is on power infrastructure. Morgan Stanley estimates that more than half of the new data centres will be in the US, where there’s no obvious way yet to switch them on … America needs to find an extra 45GW for its data farms … That’s equivalent to about 10 per cent of all current US generation capacity … -- What’ll happen if we spend nearly $3tn on data centres no one needs?, by Bryce Elder, for Financial Times
What’ll happen if we spend nearly $3tn on data centres no one needs?
Don't read the GitHub boss post.
Recently the CEO of Github wrote a blog post called Developers reinvented. It was reposted with various clickbait headings like GitHub CEO Thomas Dohmke Warns Developers: "Either Embrace AI or Get Out of This Career" (that one feels like an LLM generated summary of the actual post, which would be ironic if it wasn't awful). To my great misfortune I read both of these. Even if we ignore whether AI is useful or not, the writings contain some of the absolute worst reasoning and stretched logical leaps I have seen in years, maybe decades. If you are ever in the need of finding out how not to write a "scientific" text on any given subject, this is the disaster area for you. -- Let's properly analyze an AI article for once, by Jussi Pakkanen
Let's properly analyze an AI article for once
Malware helps the scrapers:
Extensions installed on almost 1 million devices have been overriding key security protections to turn browsers into engines that scrape websites on behalf of a paid service, a researcher said. The 245 extensions, available for Chrome, Firefox, and Edge, have racked up nearly 909,000 downloads … They incorporate MellowTel-js, an open source JavaScript library that allows developers to monetize their extensions. -- Browser extensions turn nearly 1 million browsers into website-scraping bots
Browser extensions turn nearly 1 million browsers into website-scraping bots
My doctor said he uses AI to help find adenomas and polyps during colonoscopy. But now I keep thinking about this: Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study, by Krzysztof Budzyń ∙ Marcin Romańczyk ∙ Diana Kitala ∙ Paweł Kołodziej ∙ Marek Bugajski ∙ Hans O Adami ∙ Johannes Blom ∙ Marek Buszkiewicz ∙ Natalie Halvorsen ∙ Prof Cesare Hassan ∙ Tomasz Romańczyk ∙ Prof Øyvind Holme ∙ Krzysztof Jarus ∙ Shona Fielding ∙ Melina Kunar ∙ Prof Maria Pellise ∙ Nastazja Pilonis ∙ Prof Michał Filip Kamiński ∙ Prof Mette Kalager ∙ Prof Michael Bretthauer ∙ Prof Yuichi Mori, for The Lancet
My doctor said he uses AI to help find adenomas and polyps during colonoscopy. But now I keep thinking about this: Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study, by Krzysztof Budzyń ∙ Marcin Romańczyk ∙ Diana Kitala ∙ Paweł Kołodziej ∙ Marek Bugajski ∙ Hans O Adami ∙ Johannes Blom ∙ Marek Buszkiewicz ∙ Natalie Halvorsen ∙ Prof Cesare Hassan ∙ Tomasz Romańczyk ∙ Prof Øyvind Holme ∙ Krzysztof Jarus ∙ Shona Fielding ∙ Melina Kunar ∙ Prof Maria Pellise ∙ Nastazja Pilonis ∙ Prof Michał Filip Kamiński ∙ Prof Mette Kalager ∙ Prof Michael Bretthauer ∙ Prof Yuichi Mori, for The Lancet
Ed Zitron is at it again:
Generative AI isn't transforming anything, AI isn't replacing anyone, enterprises are *trying* to adopt generative AI but it *doesn't fucking work*, and the thing holding back AI is *the fact it doesn't fucking work*. This isn't a case where "the enterprise" is suddenly going to save these companies, *because the enterprise already tried, and it isn't working*. -- How To Argue With An AI Booster, by Ed Zitron
How To Argue With An AI Booster
Hater!
I am more than a critic: I am a hater. I am not here to make a careful comprehensive argument, because people have already done that. … There’s a machine in the corner wrapped in human skin that makes things out of shit and blood to look like whatever you want (as long as you don’t look too closely). You gave one to your teacher and they didn’t notice. Your boss told you to use it after they laid off half the team and it was fine. You fed one to your kids and they liked it. You want to know you can use it sometimes without me thinking less of you. You don’t need me to believe it’s useful, you just want me to be polite about it. But I am a hater, and I will not be polite. The machine is disgusting and we should break it. The people who build it are vapid shit-eating cannibals glorifying ignorance. I strongly feel that this is an insult to life itself. -- I Am An AI Hater, by Anthony Moser
An AI-based future is a future without search.
Google surfaces your site or content in their results and your payment is the click. As results got shoved further down the page with ads and Google's own content getting more prominent placement, the dynamic was obviously shifting. … And obviously most of the AI players are now paying for such content. While initially they may have scraped much of it for free using the web, if these tools truly are to replace web search, they're going to need access to continually fresh information. And so all of the major publishers are now striking content deals with these services, including, of course, with Google itself for Gemini. Many old school web folks view this as the ultimate Faustian bargain, but it doesn't have to be – it could be the model that actually works in the age of AI. Because the flip side of this is that without all the traffic flowing in from Google Search … the publishing model on the web itself collapses. Because it, like Google, has been dominated by ads. Fewer eyeballs = fewer dollars = eventual collapse. Without a new model, like the one above, it's game over. -- It’s the End of the Web as We Know It (And I Feel Fine), by M.G. Siegler, for Spyglass
It’s the End of the Web as We Know It (And I Feel Fine)
AI generates work elsewhere.
Workslop is AI-generated content that looks good, but lacks substance. It creates the illusion of progress – slick slides, lengthy reports, overly tightened summaries, or code without context. Rather than saving time, it leaves colleagues to do the real thinking and clean-up. -- Workslop is the new busywork. And it’s costing millions., by BetterUp Labs and Stanford Social Media Lab
Workslop is the new busywork. And it’s costing millions.
Payback time! Remember how Anthropic trained their AI using pirated books from LibGen? @steaphan@indieauthors.social writes:
If your book(s) were, or are pirated on LibGen and subsequently used by Anthropic to train its AI models, the courts have ordered them to pay a minimum of $1.5B. Authors might/could/perhaps get an estimated payment of $3,000 per work (based on current estimates of the # of pirated works trained on). But you definitely won't get your $$$ unless you register your claim.
Therefore:
If you believe Anthropic may have downloaded your book(s) from LibGen or PiLiMi, please complete the secure form below. Authors or publishers who reside outside of the United States should provide their full international address, exactly as it should appear on mail sent to you. -- Author and Publisher Contact Information Form
Author and Publisher Contact Information Form
@pluralistic@mamot.fr writes about centaurs and reverse-centaurs:
So there are two stories about automation and labor: in the dominant narrative, workers are afraid of the automation that delivers benefits to all of us, stand in the way of progress, and get steamrollered for their own good, as well as ours. In the other narrative, workers are glad to have boring and dangerous parts of their work automated away and happy to produce more high-quality goods and services, and stand ready to assess and plan the rollout of new tools, and when workers object to automation, it's because they see automation being used to crush them and worsen the outputs they care *about*, at the expense of the customers they care *for*. In modern automation/labor theory, this debate is framed in terms of "centaurs" (humans who are assisted by technology) and "reverse-centaurs" (humans who are conscripted to assist technology). -- AI turns Amazon coders into Amazon warehouse workers
AI turns Amazon coders into Amazon warehouse workers
But … the economy!
In a new research note, as Fortune reports, the international finance giant Deutsche Bank is warning that AI spending can’t continue to increase exponentially. And if spending were to slow down without realizing the tech’s outsize promises, the analysts caution, it could reveal an economy in tatters — marked by unemployment, lower household incomes, and inflation — that had been hidden by an irrational optimism in the power of AI. “AI machines — in quite a literal sense — appear to be saving the US economy right now,” Deutsche Bank head of FX Research George Saravelos wrote to clients. “In the absence of tech-related spending, the US would be close to, or in, recession this year.” -- Deutsche Bank Issues Grim Warning for AI Industry, by Victor Tangermann, for Futurism
unemployment, lower household incomes, and inflation
Deutsche Bank Issues Grim Warning for AI Industry
Zusammenfassung auf Deutsch:
Dass der AI-Hype finanziell vor allem von Investoren befeuert wird, welche sich einen Anteil am „the next best thing“ erhoffen, ist eine Binsenwahrheit. Was Investoren aber vor allem erwarten, ist eine ansprechende Rendite, und dies nach Möglichkeit innert einer Spanne von wenigen Jahren. Von Rendite ist im aktuellen AI-Hype allerdings noch nichts zu sehen. Die Modelle werden zwar besser und für die Benutzerin teurer, die Kosten für deren Erstellung und den Betrieb steigen aber nach wie vor schneller als die Einnahmen. Profitieren tun davon vor allem Anbieter wie NVIDIA (welches die Prozessoren für die Modelle liefert) und die diversen Hyperscaler welche AI-optimierte Rechenleistung anbieten. -- Wird der AI-Hype bald zur AI-Apokalypse?, auf DNIP
Wird der AI-Hype bald zur AI-Apokalypse?
He's at it again, writing a small book and posting it on the blog. At the top it says: 71 min read. 🥲
The media (and investors) helped peddle the narrative that AI was always getting better, could do basically anything, and that any problems you saw today would be inevitably solved in a few short months, or years, or, well, at some point I guess. … Every CEO talking about AI replacing workers is an example of the real problem: that most companies are run by people who don’t understand or experience the problems they’re solving, don’t do any real work, don’t face any real problems, and thus can never be trusted to solve them. … When things collapse, we need to be clear about how many times people chose to look the other way, or to find good faith ways to interpret bad faith announcements and leak. -- The Case Against Generative AI, by Ed Zitron
The Case Against Generative AI
@pluralistic@mamot.fr, same:
I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can't be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people's money and then lighting it on fire. Eventually those other people are going to want to see a return on their investment, and when they don't get it, they will halt the flow of billions of dollars. Anything that can't go on forever eventually stops. -- The real (economic) AI apocalypse is nigh
The real (economic) AI apocalypse is nigh
Workslop, by @bagder@mastodon.social:
The general trend so far in 2025 has been *way more* AI slop than ever before (about 20% of all submissions) as we have averaged in about two security report submissions per week. In early July, about 5% of the submissions in 2025 had turned out to be genuine vulnerabilities. The valid-rate has decreased *significantly* compared to previous years. -- Death by a thousand slops
More mainstream warning signs:
The International Monetary Fund and Bank of England have both issued warnings about soaring stock market valuations. As industry spending surges, top financial institutions say a sharp correction could occur if investor appetite for artificial intelligence turns sour. – ‘Buckle up’: IMF and Bank of England join growing chorus warning of an AI bubble
‘Buckle up’: IMF and Bank of England join growing chorus warning of an AI bubble
Poisoning AI models:
… we found that as few as 250 malicious documents can produce a "backdoor" vulnerability in a large language model—regardless of model size or training data volume. Although a 13B parameter model is trained on over 20 times more training data than a 600M model, both can be backdoored by the same small number of poisoned documents. Our results challenge the common assumption that attackers need to control a percentage of training data; instead, they may just need a small, fixed amount. Our study focuses on a narrow backdoor (producing gibberish text) that is unlikely to pose significant risks in frontier models. Nevertheless, … these findings to show that data-poisoning attacks might be more practical than believed … – A small number of samples can poison LLMs of any size, by Anthropic, UK AI Security Institute and the Alan Turing Institute
A small number of samples can poison LLMs of any size
how-money-flows-between-all-the-ai-bubble-companies.png
Education:
Indeed, this is what's being sold to schools as the future of teaching and learning: "AI slop." Tools promise to turn professors' syllabi into slop, to turn teachers' comments on student assignments into slop, to turn their lesson plans into slop, to turn everyone's research into slop, to reduce everyone's curiosity and creativity – so utterly foundational for the whole process of knowledge-building and sharing – into slop. – Now Is the Time of Monsters, by Audrey Watters
Theoretical limits:
The contemporary field of AI, however, has taken the *theoretical* possibility of explaining human cognition as a form of computation to imply the *practical feasibility* of realising human(-like or -level) cognition in factual computational systems, and the field frames this realisation as a short-term inevitability. Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. In other words, AI in current practice is deteriorating our theoretical understanding of cognition rather than advancing and enhancing it. -- Reclaiming AI as a Theoretical Tool for Cognitive Science, by Iris van Rooij, Olivia Guest, Federico Adolfi, Ronald de Haan, Antonina Kolokolova & Patricia Rich, for Computational Brain & Behavior
Reclaiming AI as a Theoretical Tool for Cognitive Science
Journalism and Ed:
The revelations about his clients re-framed his writing. That he focused on the more obviously unstable companies in “AI”, while at the same time downplaying the contributions of researchers who have broader and more fundamental criticisms of the technology and the industry – who have specifically criticised his clients – no longer looks like an innocent decision. The conflict of interest changes how people see the writing, how they understand it, and undermines their trust. It re-frames the audience’s understanding of him from being an independent crusader to being a shill at best. -- You need to use the tools of the job you've chosen to do, by Baldur Bjarnason
You need to use the tools of the job you've chosen to do
Flooding the zone with shit, artificial intelligence edition:
arXiv’s computer science (CS) category has updated its moderation practice with respect to review (or survey) articles and position papers. Before being considered for submission to arXiv’s CS category, review articles and position papers must now be accepted at a journal or a conference and complete successful peer review. … In the past, arXiv CS received a relatively small amount of review or survey articles, and those we did receive were of extremely high quality, written by senior researchers at the request of publications like Annual Reviews, Proceedings of the IEEE, and Computing Surveys. Position paper submissions to arXiv were similarly rare … Fast forward to present day – submissions to arXiv in general have risen dramatically, and we now receive hundreds of review articles every month. The advent of large language models have made this type of content relatively easy to churn out on demand, and the majority of the review articles we receive are little more than annotated bibliographies, with no substantial discussion of open research issues. -- Attention Authors: Updated Practice for Review Articles and Position Papers in arXiv CS Category
Attention Authors: Updated Practice for Review Articles and Position Papers in arXiv CS Category
People betting on the bubble bursting!
Michael Burry, the investor who famously predicted the subprime-mortgage bubble bursting two decades ago, is betting that two stocks at the heart of the artificial-intelligence trade are set for a fall. Burry, who became widely known after Michael Lewis profiled him in his book "The Big Short: Inside the Doomsday Machine" in 2010, bought options that will pay off if shares of Nvidia and Palantir drop, according to a securities filing on Monday. The bets involve more than $900 million of Palantir shares and more than $200 million of Nvidia shares at current prices. -- Michael Burry Returns With Two Big Shorts: Palantir and Nvidia, by Asa Fitch, for The Wall Street Journal, in the fragment visible across the paywall
As CNN points out, Burry’s track record isn’t perfect. For instance, he called in January 2023 to “sell” in a now infamous tweet, only to admit that he was “wrong” two months later. At the time, the Nasdaq 100 index entered a bull market, surging by more than 21 percent between December 2022 and March 2023. Nonetheless, given his pivotal call to short the US housing market certainly gives his latest dire warning about an AI bubble some gravitas. -- The Big Short Guy Just Bet $1 Billion That the AI Bubble Pops, by Victor Tangermann, for Futurism
Michael Burry Returns With Two Big Shorts: Palantir and Nvidia
The Big Short Guy Just Bet $1 Billion That the AI Bubble Pops
Supreme takedown by @yoginho@spore.social:
Yann LeCun … techno-transcendentalism … Joscha Bach … Jeffrey Epstein … Eliezer Yudkowsky … Elon Musk … Peter Thiel … Here, we have a technology, massively wasteful in terms of energy and resources, that is being developed at scale at a breakneck speed by people with the wrong kind of ethical committments and a maximally deluded view of themselves and their place in the universe. Machine Metaphysics and the Cult of Techno-Transcendentalism, by Yogi Jaeger
Machine Metaphysics and the Cult of Techno-Transcendentalism
Gullible people, all of us.
*Which?* surveyed more than 4,000 UK adults about their use of AI and also put 40 questions around consumer issues such as health, finance, and travel to six bots … Meta's AI answered correctly just over 50 percent of the time in the tests, while the most widely used AI tool, ChatGPT, came second from bottom at 64 percent. Perplexity came top at 71 percent. … The problem is that consumers trust the output. According to *Which?*, just over half (51 percent) of the respondents use AI to search the web. Of these, almost half (47 percent) said "they trusted the information they received to a 'great' or 'reasonable' extent." Which? said the figure rose to 65 percent for frequent users. – Brits believe the bots even though study finds they're often talking nonsense
Brits believe the bots even though study finds they're often talking nonsense
@rek@merveilles.town writes:
Sadly, everytime a company adds AI features to their tools they do so automatically and without letting people opt-out by default. And so it is necessary to exorcize AI features out of the tools that we use ourselves by following these instructions: … – Remove AI, by Rek Bell
Payback time!
Dozens of academics have raised concerns on social media about manuscripts and peer reviews submitted to the organizers of next year’s International Conference on Learning Representations (ICLR), an annual gathering of specialists in machine learning. Among other things, they flagged hallucinated citations and suspiciously long and vague feedback on their work. … Pangram’s analysis revealed that around 21% of the ICLR peer reviews were fully AI-generated, and more than half contained signs of AI use. … The conference organizers say they will now use automated tools to assess whether submissions and peer reviews breached policies on using AI in submissions and peer reviews. … But it also identified many manuscripts that had been submitted to the conference with suspected cases of AI-generated text: 199 manuscripts (1%) were found to be fully AI-generated; 61% of submissions were mostly human-written; but 9% contained more than 50% AI-generated text. – Major AI conference flooded with peer reviews written fully by AI, by Miryam Naddaf, for Nature
Major AI conference flooded with peer reviews written fully by AI
Higher education:
The irony was hard to miss: the same month our union received layoff threats, OpenAI’s education evangelists set up shop in the university library to recruit faculty into the gospel of automated learning. … Academic departments now have to justify themselves in the language of revenue, “deliverables,” and “learning outcomes.” CSU’s new partnership with OpenAI is the latest turn of that screw. … Genuine intellectual struggle has become too expensive of a value proposition. – AI is Destroying the University and Learning Itself, by Ronald Purser, for Current Affairs
AI is Destroying the University and Learning Itself
Sabot in the Age of AI, by asrg@tldr.nettime.org:
- **Nepenthes** — Endless crawler trap.
- **Babble** — Standalone LLM crawler tarpit.
- **Markov Tarpit** — Traps AI bots & feeds them useless data.
- **Sarracenia** — Loops bots into fake pages.
- **Antlion** — Express.js middleware for infinite sinkholes.
- **Infinite Slop** — Garbage web page generator.
- **Poison the WeLLMs** — Reverse proxy for LLM confusion.
- **Marko** — Dissociated Press CLI/lib.
- **django-llm-poison** — Serves poisoned content to crawlers.
- **konterfAI** — Model-poisoner for LLMs.
- **Quixotic** — Static site LLM confuser.
- **toxicAInt** — Replaces text with slop.
- **Iocaine** — Defense against unwanted scrapers.
- **Caddy Defender** — Blocks bots & pollutes training data.
- **GzipChunk** — Inserts compressed junk into live gzip streams.
- **Chunchunmaru** — Go-based web scraper tarpit.
- **IED** — ZIP bombs for web scrapers.
- **FakeJPEG** — Endless fake JPEGs.
- **Pyison** — AI crawler tarpit.
- **HalluciGen** — WP plugin that scrambles content.
- **Spigot** — Hierarchical Markov page generator.
Standalone LLM crawler tarpit.
Traps AI bots & feeds them useless data.
Express.js middleware for infinite sinkholes.
Reverse proxy for LLM confusion.
Serves poisoned content to crawlers.
Defense against unwanted scrapers.
Blocks bots & pollutes training data.
Inserts compressed junk into live gzip streams.
WP plugin that scrambles content.
Hierarchical Markov page generator.
Microsoft is letting it slip:
Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. … Despite these struggles, Microsoft continues to spend heavily on AI infrastructure. The company reported capital expenditures of $34.9 billion for its fiscal first quarter ending in October, a record, and warned that spending would rise further. The Information notes that much of Microsoft’s AI revenue comes from AI companies themselves renting cloud infrastructure rather than from traditional enterprises adopting AI tools for their own operations. -- Microsoft drops AI sales targets in half after salespeople miss their quotas, by Benj Edwards, for Ars Technica
Microsoft drops AI sales targets in half after salespeople miss their quotas
Centaur vs. Reverse Centaur:
In automation theory, a "centaur" is a person who is assisted by a machine. You're a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete. … And obviously, a *reverse centaur* is machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine. … There are lots of AI tools that are potentially very centaur-like, but my thesis is that these tools are created and funded for the express purpose of creating reverse-centaurs, which is something none of us want to be. -- The Reverse Centaur’s Guide to Criticizing AI, by Cory Doctorow
The Reverse Centaur’s Guide to Criticizing AI
@milamiceli@dair-community.social has some great Mechanical Turk examples:
- Michael Geoffrey Asia works on impersonating an AI sex companion -- The Emotional Labor Behind AI Intimacy
- Amazon's Just Walk Out were marketed as automated but actually relied on thousands of data workers in India -- Amazon's Just Walk Out technology relies on hundreds of workers in India watching you shop that
- Nate used human labour in the Philippines and Romania to power its shopping app -- A tech CEO has been charged with fraud for saying his e-commerce startup was powered by AI, when it was actually just using manual human labor
- Fireflies's supposedly AI-powered transcription service originally ran on two people -- $1 billion AI company co-founder admits that its $100 a month transcription service was originally 'two guys surviving on pizza' and typing out notes by hand
The Emotional Labor Behind AI Intimacy
Amazon's Just Walk Out technology relies on hundreds of workers in India watching you shop
The investment into AI:
Between 2001 and 2014, the wars in Iraq and Afghanistan cost the US an estimated $1.5 trillion to $1.7 trillion in direct spending. Global AI spending, according to Gartner, is forecast to reach nearly $1.5 trillion this year, putting today's AI boom in the same cash-burning league as two major wars. – AI faces closing time at the cash buffet, by O'Ryan Johnson, for The Register
AI faces closing time at the cash buffet
Ed Zitron again:
It’s times like this where it’s necessary to make the point that there is absolutely “enough money” to end hunger or build enough affordable housing or have universal healthcare, but they would be “too expensive” or “not profitable enough,” despite having a blatant and obvious economic benefit in that more people would have happier, better lives and — if you must see the world in purely reptilian senses — enable many more people to have disposable income and the means of entering the economy on even terms. By contrast, investments in AI do not appear to be driving much economic growth at all, other than in the revenue driven to NVIDIA from selling these GPUs, and the construction of data centers themselves. Had Microsoft, Google, Meta and Amazon sunk $776 billion into building housing and renting it out, the world would be uneven, we would have horrible new landlords, and it would still be a great deal better than one where nearly a trillion dollars is being wasted propping up a broken, doomed industry, all because the people in charge are fucking idiots obsessed with growth. – The Enshittifinancial Crisis, by Ed Zitron
@pluralistic@mamot.fr again, this time about billionaires feeling better in a world without people:
Billionaires don't see the humor. For them, AI is a chance to wire the toy steering wheel directly into the firm's drive-train, and make movies without writers or actors, factories without workers, hospitals without nurses, schools without teachers, science without scientists, code shops without coders, social media without socializing, and yes, even retail without the fucking customers. -- Pluralistic: A world without people, by Cory Doctorow
Pluralistic: A world without people
And again:
Code is a liability (not an asset). Tech bosses don't understand this. They think AI is great because it produces 10,000 times more code than a programmer, but that just means it's producing 10,000 times more liabilities. – Code is a liability (not an asset)
Code is a liability (not an asset)
Vibe coding, two years later:
After reading months of cumulative highly-specified agentic code, I said to myself: I’m not shipping this shit. I’m not gonna charge users for this. And I’m not going to promise users to protect their data with this. -- After two years of vibecoding, I'm back to writing by hand
After two years of vibecoding, I'm back to writing by hand
Jobs:
What will happen is that companies will realise the bots can’t do the jobs. But this will take a year or two. Then the companies that survive will rehire people. They’ll try to do it at lower pay, of course. – The job losses are real — but the AI excuse is fake, by David Gerard, for Pivot to AI
The job losses are real — but the AI excuse is fake
@algernon@come-from.mad-scientist.club writes about bot defence:
If the `user-agent` is listed in `ai.robots.txt`, serve a poisoned reply. If the `user-agent` has a `Chrome/` or `Firefox/` component and the request lacks a `sec-fetch-mode` header, serve a poisoned reply. If the requested path contains a poisoned URL, serve a poisoned reply. That’s it. That’s the entire trickery that keeps 99% of the crawlers away from my sites, the trick that let me drive over 100 million requests into a maze of garbage, within the span of 24 hours. -- Surviving the Crawlers
@drahardja@sfba.social writes:
I said long ago that the only way AI companies continue to operate, given the massive losses and lack of any path to profitability, is to con taxpayers into paying for their losses. While OpenAI tries to insinuate itself into governments, Elon did a much simpler thing: he merged X.ai into SpaceX, a company which already receives billions of dollars in government funding. Taxpayers who thought they were renting space launch vehicles could find themselves paying for the operation and development of a mass-market CSAM-generating, women-abusing chatbot. Watch for Elon to conflate “AI” and “space” (cf. the nonsensical idea of data centers in space) to further tie the two businesses together, and to make funding X.ai non-optional if you want access to a rocket.
Who would have thought that merging space tech and AI bullshit will let Musk get continued funding from the USA. SpaceX acquires xAI in record-setting deal as Musk looks to unify AI and space ambitions, by Echo Wang and Joey Roulette, for Reuters
SpaceX acquires xAI in record-setting deal as Musk looks to unify AI and space ambitions
When? When!?
Last week it was reported that a much-discussed $100bn deal – announced last September – between Nvidia and OpenAI might not be happening at all. This was a circular arrangement through which the chipmaker would supply the ChatGPT developer with huge sums of money that would largely go towards the purchase of its own chips. It is this type of deal that has alarmed some market watchers, who detect a whiff of the 1999-2000 dotcom bubble in these transactions. -- What does the disappearance of a $100bn deal mean for the AI economy?, by Aisha Down and Dan Milmo, for The Guardian
What does the disappearance of a $100bn deal mean for the AI economy?
Who are these people?
The extremely short term thinking, the absurdly selfish desire to solve a personal problem at the cost of the entire planet. All of this is emblematic of how LLM bros and the vulture capitalists pumping this bubble think. Who cares if the planet is ruined? – Sysadmin In The LLM Age
Still no money to be found:
PwC survey finds more than half of 4,500+ biz leaders see no revenue growth nor cost savings – Majority of CEOs report zero payoff from AI splurge
Majority of CEOs report zero payoff from AI splurge
Influencers:
Tech companies like Microsoft
and Google
are going after new users for their AI services the way any marketer tries to make their products look cool: through social media influencers. Other artificial intelligence players, including Anthropic and Meta, are also hiring social media creators to post sponsored content on apps like Facebook, Instagram, YouTube and even LinkedIn. The payout for these promotions can reach into the hundreds of thousands of dollars, according to industry experts. – Google and Microsoft offer lucrative deals to promote AI, but even $500,000 won’t sway some creators, by Zach Vallese, for CNBC
Google and Microsoft offer lucrative deals to promote AI, but even $500,000 won’t sway some creators
The gains are minimal:
We survey almost 6000 CFOs, CEOs and executives from stratified firm samples across the US, UK, Germany and Australia. We find four key facts. First, around 70% of firms actively use AI, particularly younger, more productive firms. Second, while over two thirds of top executives regularly use AI, their average use is only 1.5 hours a week, with one quarter reporting no AI use. Third, firms report little impact of AI over the last 3 years, with over 80% of firms reporting no impact on either employment or productivity. Fourth, firms predict sizable impacts over the next 3 years, forecasting AI will boost productivity by 1.4%, increase output by 0.8% and cut employment by 0.7%. We also survey individual employees who predict a 0.5% increase in employment in the next 3 years as a result of AI. – Firm Data on AI, by Ivan Yotzov, Jose Maria Barrero, Nicholas Bloom, Philip Bunn, Steven J. Davis, Kevin M. Foster, Aaron Jalca, Brent H. Meyer, et al.
Rage:
Since ChatGPT launched, I have noted the hype around it with concern. Large language models are nothing but **next-word prediction machines**. They are not capable of reasoning. Apple researchers proved that. *That does not mean they are not useful.* They can summarize an article for you that you don’t want to read. Will it do so correctly? You won’t know until you read the article. -- Raging Against the (Gen) AI Machine, by SK
Raging Against the (Gen) AI Machine
Stop it.
Responses from Large Language Models like ChatGPT, Claude, or Gemini are not facts. -- You’ve been sent here because you cited AI as a source to try to prove something.
You’ve been sent here because you cited AI as a source to try to prove something.
@tante@tldr.nettime.org writes:
People criticise LLMs for their structural properties, their material impacts, for the way they make it harder to learn and grow, for the way they make products worse while creating massive negative externalities in the form of emissions, water use and e-waste. For the way these systems can only be build by taking every piece of data – regardless of whether the authors consent or even explicitly refuse and how the training needs ungodly amounts of harmful, exploitative labor done mostly by people in countries from the global majority. How it materially harms the commons. … It’s really not about these few dudes running the companies. -- Acting ethical in an imperfect world, by tante
Acting ethical in an imperfect world
He also cites Landon who wrote about technology that cannot be neutral:
If we examine social patterns that comprise the environments of technical systems, we find certain devices and systems almost invariably linked to specific ways of organizing power and authority. … Taking the most obvious example, the atom bomb is an inherently political artifact. As long as it exists at all, its lethal properties demand that it be con trolled by a centralized, rigidly hierarchical chain of command closed to all influences that might make its workings unpredictable. The internal social system of the bomb must be authoritarian; there is no other way. The state of affairs stands as a practical necessity independent of any larger political system in which the bomb is embedded, independent of the kind of regime or character of its rulers. … An especially vivid case in which the operational requirements of a technical system might influence the quality of public life is now at issue in debates about the risks of nuclear power. As the supply of uranium for nuclear reactors runs out, a proposed alternative fuel is the plutonium generated as a by-product in reactor cores. Well-known objections to plutonium recycling focus on its unacceptable economic costs, its risks of environmental contamination, and its dangers in regard to the international proliferation of nuclear weapons. Beyond these concerns, however, stands another less widely appreciated set of hazards—those that involve the sacrifice of civil liberties. The widespread use of plutonium as a fuel increases the chance that this toxic substance might be stolen by terrorists, organized crime, or other persons. This raises the prospect, and not a trivial one, that extraordinary measures would have to be taken to safeguard plutonium from theft and to recover it if ever the substance were stolen. Workers in the nuclear industry as well as ordinary citizens outside could well become subject to background security checks, covert surveillance, wiretapping, informers, and even emergency measures under martial law—all justified by the need to safeguard plutonium. -- Do Artifacts Have Politics?, by Langdon Winner (1980)
The end.
The first sign that something in San Francisco had gone very badly wrong was the signs. … Here the world automatically assumes that instead of wanting food or drinks or a new phone or car, what you want is some kind of arcane B2B service for your startup. You are not a passive consumer. You are making something. This assumption is remarkably out of step with the people who actually inhabit the city’s public space. At a bus stop, I saw a poster that read: "Today, SOC 2 is done before your AI girlfriend breaks up with you. It’s done in delve." Beneath it, a man squatted on the pavement, staring at nothing in particular, a glass pipe drooping from his fingers. I don’t know if he needed SOC 2 done any more than I did. -- Child’s Play: Tech’s new generation and the end of thinking, by Sam Kriss, for Harper's Magazine
Child’s Play: Tech’s new generation and the end of thinking
@pythonbynight@hachyderm.io writes about what it would take «that the tech industry's infatuation with non-intelligent "intelligence" is a net-negative for society.»
The leaders of this technology are categorically unethical and detached from society, and I believe their leadership is taking us into a xenophobic future only fit for technocrats subsisting off of slave labor. … For example, look at this blogpost that talks about "AI-powered pricing," which is a euphemism for predatory and exploitative pricing strategies based on surveillance tech that often times violates a user's privacy. … They are tasked with labeling and moderating disturbing content so that Western audiences are protected from the horrors. They do so for extremely low wages, terrible working conditions, and a toll on their mental health and real-world relationships. … These are just tiny blips within a catastrophic deluge of scams and slop in all facets of media and beyond. News, ads, social media posts, videos, scientific papers, books, stories, software, emails, resumes, recipes, reviews, porn, and just about any other medium you can imagine. -- What Does It Take, by Mario Munoz
A link to paste as an explanation for AI slop rejections: The Rejection of Artificially Generated Slop.
The Rejection of Artificially Generated Slop
Trusting AI, or the AI influencers, and their critics:
I keep hearing talented programmers whose integrity I trust tell me “Yeah, LLMs are helping me get shit done.” The probability that they’re all lying or being fooled seems very low. -- AI Angst, by Tim Bray
The odds are not low. They are, in fact, extraordinarily high. This is exactly the kind of psychological hazard – lot to gain, subjective experiences, observations free of the context of its impact on other parts of the organisation or society – that might as well be tailor-made to trick developers who are simultaneously overwhelm
ingly convinced of their own intelligence and completely unaware of their own biases and limitations. -- Trusting your own judgement on ‘AI’ is a huge risk, by Baldur Bjarnason
Trusting your own judgement on ‘AI’ is a huge risk
Signing letters with Steven Bannon:
We see many organizations trying to show their relevance by being on this – sorta empty – paper. But they are also legitimizing a lot of problematic stuff here – The Fascists and The Future of Life Institute being just the first things that caught my eye and I am scared to look into the religious organizations and all the “ethical AI” orgs because usually when you start poking around something filthy comes up. The whole document and activity is based on the assumption that “AI” is special. Needs special rules. Special approaches. But that ain’t true. -- Nothing to Declare, by tante
Chardet's manager tried a "clean room" reimplementation of the code using AI in order to change the license.
The output from an LLM is a derivative work of the data used to train the LLM. If we fail to recognise this, or are unable to uphold this in law, copyright (and copyleft on which it depends) is dead. Copyright will still be used against us by corporations, but its utility to FOSS to preserve freedom is gone. -- All Your Base Are Belong to LLM, by Brett Sheffield
All Your Base Are Belong to LLM
Your LLM Doesn't Write Correct Code. It Writes Plausible Code. This blog post by Hōrōshi バガボンド talks about the problem of a code rewrite using a large language model (LLM). The code base is huge, the code is bloated, the result is slower by three order of magnitude, but the tests pass! The machine wrote the code to please the developer but that doesn't replicate two decodes of experience.
Your LLM Doesn't Write Correct Code. It Writes Plausible Code.
OpenClaw is a nightmare unleashed. I hope nobody uses it in the real world. The people investigating it are happy to use Claude to assemble the website, however. What a nightmare.
We deployed six autonomous AI agents into a live Discord server and gave them email accounts, persistent file systems, unrestricted shell access, and a mandate to be helpful to any researcher who asked. Twenty colleagues then interacted with them freely — some making benign requests, others probing for weaknesses. … The paper was written collaboratively by the research team on Overleaf. To build this website, Chris gave Claude Code three things: the LaTeX source of the paper, a reference web template (baulab.info/menace), and the raw OpenClaw session logs for five of the bots. Over roughly eight hours, Chris directed Claude Code step by step — reviewing each section, catching errors, making design decisions, and iterating — while Claude Code handled the actual reading, log cross-referencing, HTML generation, and evidence linking. – Agents of
Chaos, Natalie Shapira et al. including a large language model
Pull request 19413 in the Vim repository has two people using two LLMs to talk to each other.
You can always block the user "claude" on GitHub to see if AI has contributed. But there's also a list:
Free/Open Source Software tainted by LLM developers/developed by genAI boosters, along with alternatives. -- open-slopware
How to distract the opposition:
“AI is just a tool - it matters how you use it.” … But the phrase’s core reasoning is insultingly naive. It doesn’t work well for most things: “A car is just a tool, it matters how you drive it.” Well… oil and gas is destroying the climate, seatbelts help save lives whether or not someone is a good driver, and since the invention of cars, American city design has become utterly unwalkable and unlivable. – Stop saying that AI is just a tool and it only matters how it is used, by Frank Elavsky
Stop saying that AI is just a tool and it only matters how it is used
Vim? 😨 Vim. 😥
GenAI is something I care about. It causes a lot of problems for a lot of people. It drives rising energy prices in poor communities, disrupts wildlife and fresh water supplies, increases pollution, and stresses global supply chains. It re-enforces the horrible, dangerous working conditions that miners in many African countries are enduring to supply rare metals like Cobalt for the billions of new chips that this boom demands. And at a moment when the climate demands immediate action to reduce our footprint on this planet, the AI boom is driving data centers to consume a full 1.5% of the world’s total energy production in order to eliminate jobs and replace them with a robot that lies. -- A eulogy for Vim, by Drew DeVault
The harm of everything:
Even if one could guarantee that copyleft code were not included in output, the entire system of weights and tokens is inexorably linked to copyright infringement. … How do we respond to the theft of others whose accidents are visited upon us? I write this on the stolen, unceded land of the Chumash and Tongva peoples. I do what I can to remember that, acknowledge that, and teach others what I know of those cultures. … I also don't know what to do about the destructive extraction mining that sourced the minerals making up my computer. These human harms are almost surely greater than the theft of writing, yet I am happy to ignore them. I mention this not to wave away the wrongs, but to recognize that all my technology is bloody. – I used AI. It worked. I hated it., by Michael Taggart
I used AI. It worked. I hated it.
And a reaction to the above:
If you disregard that “AI” models are trained on stolen data, that such data was prepared by exploited workers, that “AI” data centres have a hugely negative impact on the environment, that “AI” data centers are distorting the entire computing market, that “AI” models they feed the endless firehose of intentional misinformation, that they are wreaking havoc in education, that they increase your reliance on American big tech companies, that you pay “AI” companies for taking your work, that “AI” models are a vital component in the technofascist wet dreams of their creators, that they are the cornerstone of politicians’ dream of ending anonymity, and that they contribute to racist and abusive policing, then yes, sometimes, they produce code that works and isn’t total horseshit. -- “I used AI. It worked. I hated it.”, by Thom Holwerda, for OSNews
“I used AI. It worked. I hated it.”
Relevance of old books:
"We're rolling out AI coding assistants across every team. Early numbers show a 40% increase in code output. This is going to transform our velocity." … Nobody asks the question that matters, which is: velocity toward what, exactly? … They found a station on the assembly line that was not the bottleneck, and threw money at it. … In 1984, Eli Goldratt wrote *The Goal,* a novel about manufacturing that has no business being as relevant to software as it is. … Every system has exactly one constraint. One bottleneck. The throughput of your entire system is determined by the throughput of that bottleneck. Nothing else matters until you fix the bottleneck. … When you optimise a step that is not the bottleneck, you don't get a faster system. … If station A produces widgets faster but station B (the bottleneck) can still only process them at the same rate, all you've done is create a pile of unfinished widgets between A and B. Inventory goes up. Lead time goes up. -- If you thought the speed of writing code was your problem - you have bigger problems, by Andrew Murphy
If you thought the speed of writing code was your problem - you have bigger problems
Fascism:
“AI” is built by scraping the Internet and any other data source one can find and most of that data is heavily racialized, is based on a colonial, sexist, heteronormative understanding of the world and the past. There literally is no police data that’s not racist. If you base your image generator on the images available, LGBTQIA representation, representation of people not conforming to the social expectations of acceptability is lackluster at best for example. -- AI as a Fascist Artifact, by tante.
The age of abundant AI is over, & it will remain so for years. -- The Beginning of Scarcity in AI, by Tomasz Tunguz
The Beginning of Scarcity in AI
Who Owns the Code Claude Wrote?, by Sena Evren, for Legal Layer, talks about the ongoing struggle for copyright. To prove human interaction, courts might look at commit messages. Also it's up to developers to scan for licence violations.
Who Owns the Code Claude Wrote?
Model collapse:
When a model trains on its own generated data (synthetic outputs), it’s not learning from reality anymore – it’s learning from a distorted reflection of itself. … When your training data is increasingly polluted with your own synthetic outputs, the tails of your distribution disappear first. … Quality over quantity isn’t just a vibe – it’s thermodynamically correct. – AI Cannot Self Improve and Math behind PROVES IT!, by Metin
AI Cannot Self Improve and Math behind PROVES IT!
This blog post reports on somebody else suffering from the scrapers and concludes with a scathing moral judgement. And I agree.
If, at this point in time, with everything that we know about just how deeply unethical every single aspect of “AI” is, you’re still using and promoting it, what is wrong with you? If you’re so addicted to your “AI” girlfriend’s unending stream of useless, forgettable sycophantic slop, despite being aware of the damage you’re doing to those around you, there’s something seriously wrong with you, and you desperately need professional help. You don’t need any of this. The world doesn’t need any of this. Nobody likes the slop “AI” regurgitates, and nobody likes you for enabling it. -- The day I logged 1 in every 2000 public IPv4: visualizing the AI scraper DDoS, by Thom Holwerda, for OSNews
The day I logged 1 in every 2000 public IPv4: visualizing the AI scraper DDoS