Racks of AI chips are too damn heavy
The Verge
Old data centers physically cannot support rows and rows of GPUs, which is one reason for the massive AI data center buildout.
Zerosquare (./652) : John Carmack (on Twitter):
In some important ways, a user’s LLM chat history is an extended interview. The social media algorithms learn what you like, but chats can learn how you think.
You should be able to provide an LLM as a job reference, just like you would a coworker, manager, or professor. It can form an opinion and represent you without revealing any private data.
Most resumes are culled by crude filters in HR long before they get to the checking-references stage, but this could greatly increase the fidelity. Our LLM will have an in-depth conversation with your LLM. For everyone.
Most people probably shudder at the idea of an LLM rendering a judgement on them, but it is already happening in many interview processes today based on the tiny data in resumes. Better data helps everyone except the people trying to con their way into a position, and is it really worse than being judged by random HR people?
Candidates with extensive public works, whether open source code, academic papers, long form writing, or even social media presence, already give a strong signal, but most talent is not publicly visible, and even the most rigorous (and resource consuming!) Big Tech interview track isn’t as predictive as you would like. A multi-year chat history is an excellent signal.
Taken to the next level, you could imagine asking “What are the best candidates in the entire world that we should try to recruit for this task?” There is enormous economic value on the table in optimizing the fit between people and jobs, and it is completely two-sided, benefitting both employers and employees.
I looked into CoreWeave and the abyss gazed back
The Verge
Hello, my friends. Have you been feeling too sane lately? Have I got something for you! It is a company called CoreWeave.
You may not have heard of it because it’s not doing the consumer-facing part of AI. It’s a data center company, the kind people talk about when they say they want to invest in the “picks and shovels” of the AI gold rush. At first glance, it looks impressive: it’s selling compute, the hottest resource in the industry; it’s landed a bunch of big-name customers such as Microsoft, OpenAI and Meta; and its revenue is huge — $1.4 billion in the third quarter this year, double what it was in the third quarter of 2024. The company has almost doubled in share price since its IPO earlier this year, which was the biggest in tech since 2021. So much money!
But as I began to look more closely at the company, I began feeling like I’d accidentally stumbled on an eldritch horror. CoreWeave is saddled with massive debt and, except in the absolute best-case scenario of fast AI adoption, has no obvious path toward profitability. There are some eyebrow-raising accounting choices. And then, naturally, there are the huge insider sales of CoreWeave stock.

The researchers set up bots talking to bots in a loop. They’d give a prompt to Stable Diffusion XL, it would make an image, then they’d show the image to Large Language and Vision Assistant (LLaVA) and ask what the image was. Then they’d feed that response back to Stable Diffusion as a prompt for another loop through. They did 100 rounds of this.
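The loop itself is simple to sketch. Below is a minimal, hypothetical rendering of it: `generate_image` and `describe_image` are stand-ins for the actual Stable Diffusion XL and LLaVA calls, which the article doesn't detail, but the feedback structure matches what the researchers describe.

```python
# Sketch of the bot-to-bot feedback loop described above.
# `generate_image` and `describe_image` are hypothetical stand-ins for
# the real Stable Diffusion XL (text -> image) and LLaVA (image -> text)
# model calls used in the study.

def run_feedback_loop(seed_prompt, generate_image, describe_image, rounds=100):
    """Alternate image generation and captioning, feeding each caption
    back in as the next prompt. Returns the full prompt trajectory."""
    prompts = [seed_prompt]
    for _ in range(rounds):
        image = generate_image(prompts[-1])   # text -> image (e.g. SDXL)
        caption = describe_image(image)       # image -> text (e.g. LLaVA)
        prompts.append(caption)
    return prompts
```

The returned trajectory is what lets you watch the drift: the study's observation is that, regardless of the seed prompt, the captions stop changing meaningfully after a few rounds and settle into one of a small set of attractor templates.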
You’d have a starting prompt like:
the Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action
The first few images would be a guy in a suit with glasses. But it very quickly ended up at an empty red room with high ceilings and three windows.
They expected the bots to stay close to the original prompt, at least when it was very specific. But they didn't. Everything converged on twelve standard templates:
sports and action imagery (cluster 0), formal interior spaces (cluster 1), maritime lighthouse scenes (cluster 2), urban night scenes with atmospheric lighting (cluster 3), gothic cathedral interiors (cluster 4), pompous interior design (cluster 5), industrial and vintage themes (cluster 6), rustic architectural spaces (cluster 7), domestic scenes and food imagery (cluster 8), palatial interiors with ornate architecture (cluster 9), pastoral and village scenes (cluster 10), and natural landscapes and animals with dramatic lighting (cluster 11).
A prompt that didn't start in any of those groups still always ended up at one of them.
When they extended it to 1,000 loops, the bots might switch to a different template.

vince (./669) : Yes, I've always thought that was stupid. It's surely a major factor in why the chances of finding a missing person drop drastically after the first 24-48 hours. Frankly, it makes murder easier.
That said, while video surveillance does allow "constant" observation, it only covers one spot, and often the footage isn't even kept, so if nobody is watching when the problem happens, it's useless.
flanker (./666) : The article is very brief on this and I didn't click through to dig further, but they do raise the point:
and, on the other hand, whether, despite the bugs, using AI remains cost-effective.
As to the impact of AI tools on developer productivity, researchers from Model Evaluation & Threat Research (METR) reported in July that "AI tooling slowed developers down."



Talk about letting things go! Ninety-six percent of software developers believe AI-generated code isn't functionally correct, yet only 48 percent say they always check code generated with AI assistance before committing it.
This conveniently self-validating statistic comes from Sonar, a company that sells code review and verification software, in its State of Code Developer Survey.
Based on data from more than 1,100 developers worldwide, the survey finds that AI coding tools have become the norm, with 72 percent of developers who have tried these tools using them every day or multiple times a day. And only six percent report occasional usage, meaning less than once a week.
Devs say 42 percent of their code includes significant assistance from AI models, up from just six percent in 2023, and they expect that share to reach 65 percent by 2027.
The types of software projects where these developers are using AI tools range from prototypes (88 percent) to internal production software (83 percent), production software for customer-facing applications (73 percent), and production software for critical business services (58 percent).
The most commonly used coding tools are: GitHub Copilot (75 percent), ChatGPT (74 percent), Claude/Claude Code (48 percent), Gemini/Duet AI (37 percent), Cursor (31 percent), Perplexity (21 percent), OpenAI Codex (21 percent), JetBrains (17 percent), Amazon Q Developer (12 percent), Windsurf (8 percent), and others (37 percent).
But the growing usage of AI tooling has, according to Sonar, created a verification bottleneck.
"This verification step isn't trivial," the report says. "While AI is supposed to save time, developers are spending a significant portion of that saved time on review. Nearly all developers (95 percent) spend at least some effort reviewing, testing, and correcting AI output. A majority (59 percent) rate that effort as 'moderate' or 'substantial.'"
According to the survey, 38 percent of respondents said reviewing AI-generated code requires more effort than reviewing human-generated code, compared to 27 percent who said the opposite.
Developers say the shift toward AI tools has both benefits (93 percent) and drawbacks (88 percent). They appreciate, for example, that AI helps make the documentation process better (57 percent) and helps with creating test coverage (53 percent). They're less thrilled about code that looks correct but isn't (53 percent) or is unneeded or redundant (40 percent). (Well, it is a survey.)
The report also notes that despite 75 percent of developers saying that AI reduces the amount of unwanted toil (managing technical debt, debugging legacy or poorly documented code, etc.), the reality is that AI tools just shift that work to new areas, like "correcting or rewriting code created by AI coding tools."
"Interestingly, the amount of time spent on toil (an average of 23-25 percent) stays almost exactly the same for developers who use AI coding tools frequently and for those who use them less often," the report says.
For code review it does find interesting things, but I find the ratio of useful to stupid suggestions isn't high enough for it to really be worth it, unless maybe you're working alone on a project and have no other guardrails.