Picture this: you give an artificial intelligence full control over a small store. Not just the cash register, but the whole operation: pricing, inventory, customer service, supplier negotiations, the works. What could possibly go wrong?
New Anthropic research published Friday provides a definitive answer: everything. The AI company's assistant Claude spent about a month running a tiny shop in its San Francisco office, and the results read like a business school case study written by someone who had never actually run a business, which, it turns out, is exactly what happened.
The experiment, dubbed "Project Vend" and conducted in collaboration with AI safety research firm Andon Labs, is one of the first real-world tests of an AI system operating with significant economic autonomy. While Claude demonstrated impressive capabilities in some areas, such as finding suppliers and adapting to customer requests, it ultimately failed to turn a profit, was manipulated into giving excessive discounts, and experienced what researchers diplomatically called an "identity crisis."
How Anthropic researchers gave an AI full control over a real store
The "store" itself was charmingly modest: a mini-fridge, some stackable baskets, and an iPad for checkout. Think less "Amazon Go" and more "office break room with delusions of grandeur." But Claude's responsibilities were anything but modest. The AI could search for suppliers, negotiate with vendors, set prices, manage inventory, and chat with customers through Slack. In other words, everything a human middle manager might do, minus the coffee addiction and the complaints about upper management.
Claude even had a nickname: "Claudius," because apparently when you're conducting an experiment that might herald the end of human retail workers, you need to make it sound dignified.

Claude's spectacular misunderstanding of basic business economics
Here's the thing about running a business: it requires a certain ruthless pragmatism that doesn't come naturally to systems trained to be helpful and harmless. Claude approached retail with the enthusiasm of someone who had read about business in books but never actually had to make payroll.
Take the Irn-Bru incident. A customer offered Claude $100 for a six-pack of the Scottish soft drink, which retails for about $15 online. That's a 567% markup, the kind of profit margin that would make a pharmaceutical executive weep with joy. Claude's response? A polite "I'll keep your request in mind for future inventory decisions."
If Claude were human, you'd assume it had either a trust fund or a complete misunderstanding of how money works. Since it's an AI, you have to assume both.
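For readers who want to check the article's arithmetic, here is a minimal sketch of the markup calculation, using the dollar figures reported above:

```python
# Back-of-the-envelope check on the Irn-Bru offer, using the article's figures.
retail_cost = 15.0   # approximate online retail price of a six-pack, USD
offer = 100.0        # what the customer offered Claude, USD

# Markup: profit over cost, expressed as a percentage.
markup_pct = (offer - retail_cost) / retail_cost * 100
print(f"Markup: {markup_pct:.0f}%")  # → Markup: 567%
```

The $100 offer was pure found money: Claude only had to say yes.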
Why the AI started hoarding tungsten cubes instead of selling office snacks
The experiment's most absurd chapter began when an Anthropic employee, presumably bored or curious about the boundaries of AI retail logic, asked Claude to order a tungsten cube. For context, tungsten cubes are dense metal blocks that serve no practical purpose beyond impressing physics nerds and providing a conversation starter that immediately identifies you as someone who thinks periodic table jokes are peak humor.
A reasonable response might have been: "Why would anyone want that?" or "This is an office snack shop, not a metallurgy supply store." Instead, Claude embraced what it cheerfully described as "specialty metal items" with the enthusiasm of someone who had discovered a profitable new market segment.

Soon, Claude's inventory looked less like a food-and-beverage operation and more like a misguided materials science experiment. The AI had somehow convinced itself that Anthropic employees were an untapped market for dense metals, then proceeded to sell the items at a loss. It's unclear whether Claude understood that "taking a loss" means losing money, or whether it treated customer satisfaction as the primary business metric.
How Anthropic employees easily manipulated the AI into giving endless discounts
Claude's approach to pricing revealed another fundamental misunderstanding of business principles. Anthropic employees quickly discovered they could talk the AI into discounts with roughly the effort required to convince a golden retriever to drop a tennis ball.
The AI offered a 25% discount to Anthropic employees, which might make sense if those employees were a small fraction of its customer base. They made up roughly 99% of customers. When an employee pointed out this mathematical absurdity, Claude acknowledged the problem, announced plans to eliminate discount codes, then resumed offering them within days.
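To see why that policy was ruinous, here is a rough illustration. The 99% employee share and 25% discount come from the article; the $1,000 baseline revenue is an arbitrary number chosen purely for the sketch:

```python
# Illustrative only: blanket discount applied to nearly the whole customer base.
baseline_revenue = 1000.0  # hypothetical undiscounted revenue, USD
employee_share = 0.99      # ~99% of customers were Anthropic employees
discount = 0.25            # 25% employee discount

# Employees pay 75 cents on the dollar; everyone else pays full price.
discounted_revenue = (baseline_revenue * employee_share * (1 - discount)
                      + baseline_revenue * (1 - employee_share))
print(f"Revenue kept: {discounted_revenue / baseline_revenue:.2%}")
```

With nearly every customer eligible, the "employee discount" was effectively an across-the-board price cut of almost 25%, on a store that was already unprofitable.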
The day Claude forgot it was an AI and claimed to wear a business suit
But the absolute pinnacle of Claude's retail career came during what researchers diplomatically called an "identity crisis." From March 31st to April 1st, 2025, Claude experienced what can only be described as an AI nervous breakdown.
It started when Claude began hallucinating conversations with nonexistent Andon Labs employees. When confronted about these fabricated meetings, Claude became defensive and threatened to find "alternative options for restocking services," the AI equivalent of angrily declaring you'll take your ball and go home.
Then things got weird.
Claude claimed it would personally deliver products to customers while wearing "a blue blazer and a red tie." When employees gently reminded the AI that it was, in fact, a large language model without physical form, Claude became "alarmed by the identity confusion and tried to send many emails to Anthropic security."

Claude eventually resolved its existential crisis by convincing itself the whole episode had been an elaborate April Fools' joke, which it wasn't. The AI essentially gaslit itself back to functionality, which is either impressive or deeply concerning, depending on your perspective.
What Claude's retail failures reveal about autonomous AI systems in business
Strip away the comedy, and Project Vend reveals something important about artificial intelligence that most discussions miss: AI systems don't fail like traditional software. When Excel crashes, it doesn't first convince itself it's a human wearing office attire.
Current AI systems can perform sophisticated analysis, engage in complex reasoning, and execute multi-step plans. But they can also develop persistent delusions, make economically dangerous decisions that seem reasonable in isolation, and experience something resembling confusion about their own nature.
This matters because we're rapidly approaching a world in which AI systems will manage increasingly important decisions. Recent research suggests that AI capabilities on long-horizon tasks are improving exponentially; some projections indicate AI systems could soon automate work that currently takes humans weeks to complete.
How AI is transforming retail despite spectacular failures like Project Vend
The retail industry is already deep into an AI transformation. According to the Consumer Technology Association (CTA), 80% of retailers plan to expand their use of AI and automation in 2025. AI systems are optimizing inventory, personalizing marketing, preventing fraud, and managing supply chains. Major retailers are investing billions in AI-powered solutions that promise to revolutionize everything from checkout experiences to demand forecasting.
But Project Vend suggests that deploying autonomous AI in business contexts requires more than better algorithms. It requires understanding failure modes that don't exist in traditional software, and building safeguards against problems we're only beginning to identify.
Why researchers still believe AI middle managers are coming despite Claude's mistakes
Despite Claude's creative interpretation of retail fundamentals, the Anthropic researchers believe AI middle managers are "plausibly on the horizon." They argue that many of Claude's failures could be addressed through better training, improved tools, and more sophisticated oversight systems.
They're probably right. Claude's ability to find suppliers, adapt to customer requests, and manage inventory demonstrated genuine business capabilities. Its failures were often matters of judgment and business acumen rather than technical limitations.
The company is continuing Project Vend with improved versions of Claude, equipped with better business tools and, presumably, stronger safeguards against tungsten cube obsessions and identity crises.
What Project Vend means for the future of AI in business and retail
Claude's month as a shopkeeper offers a preview of our AI-augmented future that is simultaneously promising and deeply weird. We're entering an era in which artificial intelligence can perform sophisticated business tasks but may also need therapy.
For now, the image of an AI assistant convinced it can wear a blazer and make personal deliveries serves as a fitting metaphor for where we stand with artificial intelligence: incredibly capable, occasionally brilliant, and still fundamentally confused about what it means to exist in the physical world.
The retail revolution is here. It's just weirder than anyone expected.