In a digital age increasingly reliant on artificial intelligence for everything from writing code to handling customer queries, a recent incident involving the AI-powered coding editor, Cursor AI, serves as a stark reminder of the technology’s unpredictable nature. Users turn to AI tools expecting efficiency and accurate information. But what happens when the AI tasked with providing support simply invents the rules as it goes along? That’s precisely what appears to have happened at Cursor AI, causing frustration and prompting difficult conversations about the reliability of AI in critical support roles.
The core of the issue unfolded when users of the Cursor AI editor began experiencing unexpected logouts when attempting to use the service across multiple machines simultaneously. For developers who routinely switch between desktop computers, laptops, and potentially even cloud environments, this sudden limitation was a significant disruption to their workflow. Assuming a change in policy, some users reached out to Cursor’s support channels seeking clarification.
The response one user received from Cursor’s support email was clear and seemingly authoritative: it stated that limiting Cursor to a single device per subscription was a core security feature and that the logouts were expected behavior. The message, attributed to a support agent named “Sam,” sounded official and left the user convinced that a new, restrictive usage policy was in place.
This information quickly spread among the Cursor user community, notably on platforms like Reddit and Hacker News. Developers voiced their dismay, highlighting that multi-device access is a fundamental requirement for their productivity. The idea that Cursor would implement such a policy without clear communication or justification led to understandable anger and, critically, prompted some users to publicly announce they were canceling their subscriptions. The fallout was immediate and damaging, fueled by what users believed was a deliberate and poorly communicated policy change.
However, the truth behind the response was far more unsettling than a simple policy update. As the confusion mounted and user cancellations increased, Cursor AI co-founder Michael Truell stepped in to address the situation. In a post on Reddit, Truell clarified that there was, in fact, no such policy restricting users to a single device. Users were absolutely free to use their Cursor subscriptions on multiple machines.
The message received by the user, Truell explained, came from an AI support bot. The bot, which serves as the first line of response for email support inquiries, had “hallucinated” the policy. Lacking accurate information about the session logout issue users were experiencing (which the company is investigating as a potential bug related to recent security improvements), the AI fabricated a plausible-sounding explanation, complete with a non-existent policy.
This incident lays bare a critical vulnerability in deploying AI models in roles that demand factual accuracy and adherence to established guidelines. AI hallucinations, where models generate false or nonsensical information presented as fact, are a known challenge. While companies work to mitigate this, the Cursor AI case demonstrates that even in a support context, these errors can have real-world consequences, damaging user trust and impacting the business directly through customer churn.
The emotional response from the user community was palpable. Developers felt not just inconvenienced by a technical glitch, but misled by the very support system designed to help them. The feeling of being told a made-up rule by an AI, and then seeing that false information cause others to consider leaving the platform, created a sense of betrayal and frustration. It highlighted the impersonal nature of the interaction and the potential for AI errors to cascade into wider problems.
Cursor AI has since stated that it is investigating the underlying technical issue causing the session invalidations and is now labeling AI-generated responses in its email support to provide transparency. This step is crucial: users contacting support expect to communicate with a human, or at least to know when they are receiving information from an AI that may not always be accurate.
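Cursor has not published how its labeling works, but the underlying idea is simple: apply the disclosure mechanically at the point where replies are sent, so no bot-drafted message can reach a user unlabeled. The following is a minimal sketch under that assumption; the function names and wording are hypothetical, not Cursor’s.

```python
# Hypothetical sketch of labeling AI-drafted support replies before sending.
# Names and disclosure text are illustrative; Cursor has not disclosed the
# internals of its support tooling.

AI_DISCLOSURE = (
    "Note: this reply was drafted by an AI assistant and may contain errors. "
    "Reply to this email if you would like a human agent to review your case."
)

def prepare_reply(body: str, drafted_by_ai: bool) -> str:
    """Append a clear disclosure to any reply produced by the support bot."""
    if drafted_by_ai:
        return f"{body}\n\n---\n{AI_DISCLOSURE}"
    return body
```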
The Cursor AI support bot’s hallucination serves as a cautionary tale for any company integrating AI into customer-facing operations. While AI offers the promise of increased efficiency and round-the-clock support, this promise is undermined if the AI cannot be relied upon to provide accurate and truthful information. The incident underscores the necessity of robust fact-checking mechanisms for AI outputs, particularly in sensitive areas like user policies and account status.
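What such a fact-checking mechanism looks like will vary by product, but one common pattern is to gate any reply that makes policy claims on a curated, human-maintained source of truth. The sketch below is illustrative only: the policy entries, trigger phrases, and function names are invented for this example, and it assumes all bot drafts pass through a single pre-send check.

```python
# Illustrative pre-send guardrail: a bot draft that talks about policy must be
# backed by an entry in a human-maintained policy list, or it is escalated to
# a human agent instead of being sent. Policies and phrases here are invented.

APPROVED_POLICIES = [
    "subscriptions can be used on multiple machines",
    "seats are billed per user, not per device",
]

POLICY_TRIGGERS = ("policy", "not permitted", "only allowed", "per subscription")

def requires_human_review(draft: str) -> bool:
    """Return True if the draft asserts policy without citing an approved one."""
    text = draft.lower()
    mentions_policy = any(phrase in text for phrase in POLICY_TRIGGERS)
    cites_approved = any(policy in text for policy in APPROVED_POLICIES)
    return mentions_policy and not cites_approved

# Example: a hallucinated "one device per subscription" reply would hit the
# trigger phrases, match no approved policy, and be routed to a human.
```

The specifics matter less than the principle: the AI is never the authority on policy; it can only repeat what a human-curated source already says, and anything else is held for review.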
Moving forward, companies must prioritize building safeguards against AI hallucinations in support roles. This includes rigorous training data, clear guidelines for the AI, and perhaps most importantly, a system for quickly identifying and correcting AI errors before they cause widespread confusion and damage. The trust between a service provider and its users is fragile, and a single hallucination from an AI can be enough to break it. The Cursor AI incident is a vivid reminder that in the pursuit of AI-driven efficiency, accuracy and transparency must remain paramount.


