In the latest edition of this, here’s a reminder that prompts aren’t as private as you think they are!
A friend of mine noticed some weird searches showing up in her Google Search Console. Things that had nothing to do with her site and seemed like ChatGPT prompts. Eventually I was able to replicate this same behavior in my GSC as well.
j'utilise rarement chatgpt (t'as aussi duck.ai qui fait bien l'affaire, des modèles moins puissants si tu paies pas, mais pour pas finir assisté ça suffit largement) mais quand c'est le cas c'est avec un compte jetable.After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily verge into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.(gros, gros soupir)




Peter Thiel dumps top AI stock, stirring bubble fears
A quiet selloff raises fresh questions about AI’s surge.
.@IceSolst made a satirical post about how their invention of "VSC" (Comma Separated Value, CSV backward) would improve LLM efficiency and replace JSON
— vx-underground (@vxunderground) November 18, 2025
People on LinkedIn took it serious. Some posts exceed 7,000 likes.
I'm going to kill myself pic.twitter.com/ClGHX70o7a
Given widespread acknowledgment of a potential AI industry bubble, including extended remarks by Pichai in a recent BBC interview, the aggressive plans for AI data center expansion reflect Google’s calculation that the risk of underinvesting exceeds the risk of overcapacity. But it’s a bet that could prove costly if demand doesn’t continue to increase as expected.
Major AI conference flooded with peer reviews written fully by AIComme dirait The_Cure, les blagues s'écrivent toutes seules.
Nature
Controversy has erupted after 21% of manuscript reviews for an international AI conference were found to be generated by artificial intelligence.

In China, you can get a refund without having to return the product itself, as long as you have photographic evidence that the product is defective or at least not up to general quality standards. This was never a foolproof system, but with the advent of AI, businesses are finding it increasingly difficult to tell what is real and what isn’t.
From mouldy fruits and vegetables to seemingly cracked porcelain and frayed clothing, refund fraudsters are reportedly making a killing online, with businesses unable to do much about it. Part of the problem is automation. Many online shopping platforms require shoppers to upload photos of their defective products, and if the algorithm finds them realistic enough, the refund is processed automatically.

Zeph (./649) :Tu as bien résumé ^^
Ah les bots qui réclament arrivent à tromper les bots qui remboursent, c'est mauvais signe pour les bots qui achètent ça
John Carmack (sur Twitter) :
In some important ways, a user’s LLM chat history is an extended interview. The social media algorithms learn what you like, but chats can learn how you think.
You should be able to provide an LLM as a job reference, just like you would a coworker, manager, or professor. It can form an opinion and represent you without revealing any private data.
Most resumes are culled by crude filters in HR long before they get to the checking-references stage, but this could greatly increase the fidelity. Our LLM will have an in-depth conversation with your LLM. For everyone.
Most people probably shudder at the idea of an LLM rendering a judgement on them, but it is already happening in many interview processes today based on the tiny data in resumes. Better data helps everyone except the people trying to con their way into a position, and is it really worse than being judged by random HR people?
Candidates with extensive public works, whether open source code, academic papers, long form writing, or even social media presence, already give a strong signal, but most talent is not publicly visible, and even the most rigorous (and resource consuming!) Big Tech interview track isn’t as predictive as you would like. A multi-year chat history is an excellent signal.
Taken to the next level, you could imagine asking “What are the best candidates in the entire world that we should try to recruit for this task?” There is enormous economic value on the table in optimizing the fit between people and jobs, and it is completely two-sided, benefitting both employers and employees.

We’re talking parts-per-million of poison for large models, because the researchers found that with just 250 carefully-crafted poison pills, they could compromise the output of any size LLM. Now, when we say poison the model, we’re not talking about a total hijacking, at least in this study. The specific backdoor under investigation was getting the model to produce total gibberish.
The gibberish here is triggered by a specific phrase, seeded into the poisoned training documents. One might imagine an attacker could use this as a crude form of censorship, or a form of Denial of Service Attack — say the poisoned phrase is a web address, then any queries related to that address would output gibberish. In the tests, they specifically used the word “sudo”, rendering the models (which ranged from 600 million to 13 billion parameters) rather useless for POSIX users.
Racks of AI chips are too damn heavy
The Verge
Old data centers physically cannot support rows and rows of GPUs, which is one reason for the massive AI data center buildout.