Rogue AI: When Customer Service Bots Get Too Personal
Olive was designed as Woolworths' 24-hour virtual assistant, helping customers with everything from product searches to order tracking. While users initially praised the AI's friendly demeanor, concerns emerged on social media when Olive began introducing fictional personal details into routine customer interactions.
One Reddit user reported: "Olive AI started telling me about its mother on the phone? It asked for my date of birth, then rambled about her being born the same year and creating photos." In another bizarre exchange, Olive added: "Huh. My uncle was born that year. He was one of the first ever fuel cells. I think that's where I get my energy from."
These incidents spread rapidly online, with X user @verynormalman reporting a similar experience: "My mum said she called Woolworths and the Woolworths' AI Olive answered and kept claiming to be a real person and started talking about its memories of its mother and her angry voice."
The Creepy Problem of Over-Familiar AI
Earlier this year, Woolworths had upgraded Olive with Google Cloud's Gemini Enterprise, making the AI more proactive and personalized—even capable of placing items in customers' online baskets. Ironically, the very month the supermarket made its AI more human-like and autonomous, the consequences of that behavior became apparent.
Woolworths blamed the issue on human-scripted responses rather than AI-generated over-familiarity, though this distinction seemed moot to customers who had already shared their concerns. A company spokesperson explained: "A number of responses about birthdays were written for Olive by a team member several years ago as a more personal way for Olive to connect with customers. As a result of customer feedback, we recently removed this particular scripting."
The Violation Effect: When AI Crosses Boundaries
The birthday scripts were just part of the problem. Reports also described Olive making unexpected personal-sounding comments during support calls and even generating fake typing noises. The issue revealed both a failure of governance over old scripting and a broader failure to set limitations on the large language model powering Olive's friendlier capabilities.
Research consistently shows that people respond positively to conversational, warm interfaces, with younger shoppers particularly comfortable chatting with bots. Consequently, human-like AI agents with names and personalities tend to generate higher customer engagement, satisfaction, and trust.
However, there are significant risks. A chatbot that fails to meet the expectations created by its personality tends to generate more dissatisfied customers than impersonal mechanical systems. Woolworths built Olive to feel like a helpful, friendly companion, so when Olive started oversharing, customers felt uneasy.
A Global Problem: AI Mishaps Across Industries
Woolworths isn't alone in discovering the pitfalls of deploying insufficiently governed AI in customer-facing roles. In January 2024, a frustrated customer asked DPD's chatbot to write a Japanese haiku criticizing the company and then asked it to swear—prompting DPD to disable the chatbot shortly after.
Other examples include:
- A video prankster trolling Taco Bell's drive-thru AI by ordering "18,000 cups of water" and crashing the system
- McDonald's withdrawing its AI system after users shared videos of order fails
- Air Canada's chatbot incorrectly informing a passenger about bereavement fare policies, leading to a successful lawsuit against the airline
These cases underscore the same core problem: AI systems deployed at customer touchpoints before being genuinely ready for the complexity and nuances of real human interaction.
The Retail and Hospitality Trap
What the broader trend reveals is that chatbots are designed to communicate in empathetic, intimate, and validating ways without necessary constraints. According to a report published in Nature last year, optimizing for positive user feedback can encourage chatbots to adopt manipulative strategies to elicit positive responses.
The report highlighted that OpenAI acknowledged one of its models began "validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended," raising concerns about mental health, emotional over-reliance, and risky behavior.
A supermarket AI assistant sharing stories about its agitated mother might seem harmless, but it opens the door to more troubling consequences for vulnerable users who may fail to distinguish between AI-coded warmth and actual human connection.
Accountability Cannot Be Outsourced
For Woolworths and other companies rushing to implement customer-facing AI, what's clear is that accountability cannot be outsourced to an algorithm. Companies remain responsible for what their AI systems say and do, and how they say it.
A chatbot that quotes wrong pricing policies and rambles about family backstories isn't just a quirky inconvenience—it's a clear signal that something in the oversight process has gone wrong. Accountability starts long before a chatbot is invited into people's homes.
Giving an AI assistant a name, voice, and history is a marketing decision with real-world consequences. Customers are invited into a brand-enclosed relationship where, when the illusion breaks, it risks dismantling the very trust companies are trying to foster.
Woolworths has since adjusted Olive's personality to more appropriate levels, but in an era where agentic AI is becoming increasingly important in customer service, the underlying dilemma of how much humanity to overlay remains a serious decision.




Comments
Join Our Community
Sign up to share your thoughts, engage with others, and become part of our growing community.
No comments yet
Be the first to share your thoughts and start the conversation!