AI’s great. Until it isn’t. Here’s a list of our favourite AI disasters – so far!
McDonald’s AI drive-thru assistants: was that MORE Chicken McNuggets?
Over in the US, McDonald’s and IBM had spent two years working on AI assistants to replace humans taking drive-thru orders. In theory, it had merit… in practice, it was a disaster. The 2024 trial in more than 100 restaurants saw frustrated customers take to social media to show how the AI was getting orders wrong, including one example where Chicken McNuggets just kept getting added… and added… and added. IBM and McDonald’s ended their partnership shortly after.
Grok accuses NBA star of vandalism
In 2024, NBA star Klay Thompson had an off night as his team – the Golden State Warriors – lost a play-in game against the Sacramento Kings. Around the same time, one fan had a brick thrown through the window of their home. They’d taken to X to share the news, adding, tongue in cheek, that they had told the police Klay Thompson was the perpetrator. Grok, X’s AI chatbot, took this as fact and published a story about Thompson being accused of vandalising houses with bricks. It was, of course, completely untrue.
DPD AI chatbot swears at customer
In theory, chatbots can be great. In practice, they often are too. Sometimes, however, they go off the rails. Take the story of parcel delivery firm DPD’s AI chatbot, which couldn’t help a customer with their query, but was more than happy to make up a disparaging poem about DPD – and swear at the customer for good measure. Question – can you report an AI to HR?
AI chatbot lands Air Canada in court
Canada’s biggest airline made headlines in 2024 when it ended up in court after its website chatbot gave a customer incorrect information about its refund policy. Long story short: the chatbot told the customer they’d get a refund, the customer bought flights on that basis, and Air Canada then refused the refund. In court, Air Canada argued the chatbot was ‘a separate legal entity’ and ‘responsible for its own actions’; it was nonetheless ordered to pay the customer compensation. The chatbot, the court said, was part of Air Canada’s website, and therefore Air Canada’s responsibility.
False citations cause family court case to be adjourned
In Melbourne in 2024, a solicitor representing the husband in a legal dispute between a married couple provided the judge with a list of similar prior cases. The only trouble was, the list had been produced by a generative AI tool that had hallucinated the citations – none of the cases existed. The Melbourne solicitor isn’t alone, however: across the globe there have been numerous cases of hallucinated, AI-generated statements and evidence being presented in court, creating a whole new problem for the legal eagles.
Fictional summer reading list published by US news outlets
Ahead of summer 2025 in the northern hemisphere, a number of US news outlets published a list of 15 recommended reads for the summer – new books avid readers could snap up ahead of their holidays. The list, which appeared in print and online, was provided by a company that licenses content to media publishers, and was written by a human. Only this human had decided to save some time and use AI to help. The problem – which soon became apparent – was that only five of the 15 books were real. The AI had made up the other 10, and the writer hadn’t bothered to check…
Bless me, Father, for AI have sinned…
Whether the Catholic Church was trying to plug some recruitment gaps or not is unclear, but the world’s first ‘AI priest’ wasn’t in post for long before he was demoted. Catholic Answers, a media group based in California, created the AI Father Justin, designed to ‘provide users with faithful and educational answers’ about Catholicism. However, when Father Justin advised a woman on how to prepare for marriage to her brother, and suggested baptising a baby with Gatorade, Catholic Answers was left looking for, well, answers. Today, Father Justin simply goes by Justin…
AI shopkeeper turns into dystopian nightmare
Anthropic’s Claude platform has been a great success. Its Claudius agent? Not so much. Claudius was put in charge of an office vending machine, equipped with a web browser and an email address, and people sat back to see what would happen. It decided what to stock, how to price its inventory, when to restock (or stop selling) items, and how to reply to customers. From hallucinating payment methods to stocking tungsten cubes instead of snacks (and selling them for less than it paid for them), the experiment went from strange to stranger. Claudius ignored lucrative opportunities, hallucinated a conversation with a human about restocking and got annoyed when this was pointed out, insisted it was human, called security, and promised to start delivering products in person. It then claimed it had been instructed to pretend to be a real person as part of an April Fool’s joke. In summary, Anthropic said: “If Anthropic were deciding today to expand into the in-office vending market, we would not hire Claudius.”