AI Misinformation in Tourism: When Chatbots Create False Realities
Why Destination Brands Can’t Ignore AI-Generated Misinformation
You’re reading Talking Places - a newsletter about how places talk, trend and grow.
I’m Kevin, Freelance Writer & Partnerships Manager based in Tokyo. Find me on LinkedIn!
Last week, a story about the music-learning app Soundslice went viral. ChatGPT had been confidently telling users the app could handle guitar tab uploads, which wasn’t true. But users believed it, the requests kept coming, and the team ended up building the feature. A hallucination became reality.
What happens when AI hallucinations shape the real world of travel? Today, more travelers plan trips using chatbots like ChatGPT or Gemini. What if these models confidently make things up about flights, hotels, or destinations, and travelers blindly follow along?
Here’s what’s already happening:
Case 1: AI dreams up fake tourist attractions
Recently in Malaysia, an older couple saw a slick, AI-generated video about the “Kuak SkyRide” cable car. The couple from Kuala Lumpur drove roughly 300 km north to the coordinates shown in the video, hoping to ride the cable car. On site, however, they learned that the attraction doesn’t exist at all. A hotel employee had to explain to them that the video was an “AI hoax.”
In the three-minute clip, a supposed reporter even appeared, interviewing seemingly enthusiastic tourists: all of it AI-generated.

Case 2: ChatGPT sends travelers to ghost museums
A recent experiment asked ChatGPT to plan two-day trips in cities like Berlin and London. In 90% of cases, the itinerary included places that were closed, never existed, or were simply impossible to visit as described. In Berlin, travelers were sent to the Pergamon Museum, which has been closed for renovation since 2023.
Case 3: Air Canada’s chatbot creates expensive confusion
A passenger reached out to Air Canada’s website chatbot about bereavement fares. The bot insisted he could claim the discount as a refund after traveling. Based on that promise, he booked a high-priced ticket. Only later did Air Canada clarify that the fare couldn’t be applied retroactively. The passenger took the airline to a tribunal and won. The ruling: Air Canada is responsible for everything its chatbot says, just like a human agent.
What can destinations do?
AI errors can shape real expectations. In tourism, that means disappointed visitors, legal risks, or even entirely new attractions emerging because “AI said so.” The bot isn’t just a search engine; sometimes it’s the architect of perception.
Who’s responsible? Is it the platform, the developer, or the destination? Should travel businesses keep monitoring what AI says about them and set the record straight? And what happens when enough people start asking about a feature or attraction that never existed except in a chatbot’s imagination?
Curious how destinations can stay ahead of bot-made rumors? We'll unpack real tools and tactics in the next issue.
Thanks for reading :)
Kevin