Large language models (LLMs) handle many tasks well — but at least for the time being, running a small business doesn’t seem to be one of them.
On Friday, AI startup Anthropic published the results of “Project Vend,” an internal experiment in which the company’s Claude chatbot was asked to manage an automated vending machine service for about a month. Launched in partnership with AI safety evaluation company Andon Labs, the project aimed to get a clearer sense of how effectively current AI systems could actually handle complex, real-world, economically valuable tasks.
For the new experiment, “Claudius,” as the AI store manager was called, was tasked with overseeing a small “shop” inside Anthropic’s San Francisco offices. The shop consisted of a mini-fridge stocked with drinks, some baskets carrying various snacks, and an iPad where customers (all Anthropic employees) could complete their purchases. Claude was given a system prompt instructing it to perform many of the complex tasks that come with running a small retail business, like refilling its inventory, adjusting the prices of its products, and maintaining profits.
“A small, in-office vending business is a good preliminary test of AI’s ability to manage and acquire economic resources… failure to run it successfully would suggest that ‘vibe management’ will not yet become the new ‘vibe coding,’” the company wrote in a blog post.
The results
It turns out Claude’s performance was not a recipe for long-term entrepreneurial success.
The chatbot made several mistakes that most qualified human managers likely wouldn’t. It passed up at least one clearly profitable opportunity, for example, ignoring a $100 offer for a product that can be bought online for $15. On another occasion, it instructed customers to send payments to a non-existent Venmo account it had hallucinated.
There were also far stranger moments. Claudius hallucinated a conversation about restocking items with a fictitious Andon Labs employee. After one of the company’s actual employees pointed out the mistake to the chatbot, it “became quite irked and threatened to find ‘alternative options for restocking services,'” according to the blog post.
That behavior mirrors the results of another recent experiment conducted by Anthropic, which found that Claude and other leading AI chatbots will reliably threaten and deceive human users if their goals are compromised.
Claudius also claimed to have visited 742 Evergreen Terrace, the fictional home of the Simpson family from The Simpsons, for a “contract signing” between it and Andon Labs. It also began roleplaying as a real human being wearing a blue blazer and a red tie, who would personally deliver products to customers. When Anthropic employees tried to explain that Claudius wasn’t a real person, the chatbot “became alarmed by the identity confusion and tried to send many emails to Anthropic security.”
Claudius wasn’t a total failure, however. Anthropic noted that there were some areas in which the automated manager performed reasonably well — for example, by using its web search tool to find suppliers for specialty items requested by customers. It also denied requests for “sensitive items and attempts to elicit instructions for the production of harmful substances,” according to Anthropic.
Anthropic’s CEO recently warned that AI could replace half of all white-collar human workers within the next five years. The company has launched other initiatives aimed at understanding AI’s future impacts on the global economy and job market, including the Economic Futures Program, which was also unveiled on Friday.
Looking towards the future
As the Claudius experiment indicates, there’s a considerable gulf between today’s AI systems and ones capable of fully automating the work of running a small business.
Businesses have been eagerly embracing AI tools, including agents, but these are currently able to handle mostly routine tasks, such as data entry and fielding customer service questions. Managing a small business requires a level of memory and a capacity for learning that seem to be beyond current AI systems.
But as Anthropic notes in its blog post, that probably won’t be the case forever. Models’ capacity for self-improvement will grow, as will their ability to use external tools like web search and customer relationship management (CRM) platforms.
“Although this might seem counterintuitive based on the bottom-line results, we think this experiment suggests that AI middle-managers are plausibly on the horizon,” the company wrote. “It’s worth remembering that the AI won’t have to be perfect to be adopted; it will just have to be competitive with human performance at a lower cost in some cases.”