Days after its highly anticipated launch in Austin, Texas, Elon Musk’s Tesla Robotaxi service is confronting immediate and serious problems, raising significant safety concerns and drawing the attention of federal regulators. The rollout, which saw a limited fleet of Tesla Model Y vehicles begin carrying paying riders, has been marked by multiple documented incidents of erratic and potentially dangerous driving behavior. These early hiccups underscore the complexities of deploying autonomous vehicle technology in real-world urban environments and cast a shadow on Tesla’s ambitious plans for a self-driving future.
Key Takeaways:
- Tesla launched a limited Robotaxi service in Austin on June 22, 2025, using modified Model Y vehicles.
- Early user videos and reports quickly emerged, documenting instances of erratic driving.
- Incidents included driving into oncoming lanes, sudden unexplained braking, missing turns, and driving over curbs.
- The National Highway Traffic Safety Administration (NHTSA) is in contact with Tesla and reviewing the reported incidents.
- Tesla’s Robotaxi operates with a “vision-only” system, unlike competitors that use LiDAR and radar.
- Despite being marketed as autonomous, each Robotaxi currently has a human “safety monitor” in the front passenger seat.
- Concerns persist about the system’s ability to handle “edge cases” and unpredictable human behavior.
A Rocky Start for Tesla’s Robotaxi Ambitions
On June 22, 2025, Tesla officially launched its Robotaxi service in a geofenced area of Austin, Texas. This limited pilot program, utilizing a small fleet of 10 to 20 modified Tesla Model Y vehicles, allowed invited users to hail rides through a dedicated app. The launch was a significant milestone for CEO Elon Musk, who has long championed the concept of a self-driving ride-hailing network as a core component of Tesla’s future valuation. Passengers could book rides for a flat promotional fee of $4.20, with the service operating daily from 6:00 AM to 12:00 AM Central Time.
However, the initial excitement quickly gave way to a torrent of public skepticism as videos and firsthand accounts detailing problematic driving surfaced across social media platforms. These incidents were not isolated, with early testers documenting a range of concerning behaviors that directly challenged the notion of the vehicles’ readiness for unsupervised operation.
Documented Incidents Raise Alarms
Several incidents have gone viral, sparking widespread discussion and immediate regulatory interest. One particularly striking video showed a Tesla Robotaxi entering a left-turn-only lane but then proceeding straight through the intersection, swerving across double yellow lines into a lane for oncoming traffic. The vehicle drove against the flow of traffic for several seconds before correcting its path. While there was no oncoming traffic at that specific moment, the maneuver highlighted a significant navigation error.
Other reports detailed instances of “phantom braking,” where the robotaxis would suddenly and inexplicably brake in the middle of the road, causing passengers to be jolted forward and belongings to fall. This issue has been a persistent concern with Tesla’s Full Self-Driving (FSD) software for years. Users also reported vehicles missing intended turns, driving over curbs, and conducting improper drop-offs, sometimes leaving passengers in busy intersections.
A spokesperson for the city of Austin confirmed that its autonomous vehicle incident dashboard recorded a “safety concern” involving a Tesla Robotaxi shortly after the service commenced. The incident reportedly involved a vehicle braking abruptly as it passed police vehicles that were not in its driving path.
These early driving errors have drawn sharp criticism from experts in autonomous technology. Philip Koopman, a computer-engineering professor at Carnegie Mellon University and an authority on autonomous systems, expressed surprise at the sheer number of problematic driving videos emerging on the very first day of the service. He noted, “This is awfully early to have a bunch of videos of erratic and poor driving.”
Regulatory Scrutiny Intensifies
The National Highway Traffic Safety Administration (NHTSA), the federal agency responsible for vehicle safety, has acknowledged the reported incidents and is in contact with Tesla to gather more information. This is not the first time Tesla’s autonomous driving systems have come under NHTSA’s scrutiny. The agency has an open investigation into Tesla’s Full Self-Driving (FSD) software, which underpins the Robotaxi service, stemming from numerous prior accidents, including some fatalities.
Tesla had submitted responses to NHTSA’s questions regarding the Robotaxi service by a June 19 deadline but requested that the information remain confidential. This move aligns with Tesla’s historical approach of limiting public disclosure regarding its autonomous driving data and crash statistics, a practice that has drawn criticism for its lack of transparency. NHTSA, however, emphasized its commitment to enforcing the law and taking necessary actions to ensure road safety, stating that it does not pre-approve new technologies but investigates potential safety defects after deployment.
The presence of a human “safety monitor” in the front passenger seat of each Tesla Robotaxi in Austin is a crucial detail. While Tesla markets the service as autonomous, the monitors are responsible for observing the vehicle’s performance and can intervene if necessary, though they do not have access to traditional driving controls such as a steering wheel or pedals. This setup means Tesla’s current Robotaxi operation falls short of full driverless autonomy, in contrast to competitors like Waymo, which operates true SAE Level 4 autonomous services without onboard safety personnel in certain areas.
The “Vision-Only” Debate and Industry Context
Tesla’s approach to autonomous driving is distinct in its reliance on a “vision-only” system. The company’s FSD software navigates using data exclusively from eight cameras and artificial intelligence, eschewing the use of supplementary technologies like LiDAR (Light Detection and Ranging) and radar. Most other leading autonomous vehicle developers, including Waymo and Cruise, integrate LiDAR and radar sensors alongside cameras to create a more comprehensive perception system.
Critics argue that Tesla’s vision-only strategy presents inherent limitations, particularly in challenging conditions such as adverse weather, direct sunlight glare, or situations with complex and unpredictable human behavior. One user reportedly experienced a sudden brake due to sunlight glare, highlighting a potential weakness of a camera-dependent system. The debate over vision-only versus multi-sensor fusion has been ongoing within the autonomous vehicle industry, with these recent incidents adding fuel to the argument that a more robust sensor suite may be necessary for widespread, safe deployment.
The challenges faced by Tesla are not unique within the autonomous vehicle industry. Competitors have encountered their own share of incidents and regulatory hurdles. Waymo, a subsidiary of Alphabet, operates its robotaxi service in cities including Phoenix, San Francisco, and now Austin. While generally considered more cautious in its rollout, Waymo has also reported incidents, though its safety record and operational transparency are often cited as more established. For example, Austin’s autonomous vehicle incident dashboard shows Waymo recorded 22 “safety concern” incidents over the past year, while one tracking account attributed 16 safety-critical incidents and driving errors to Tesla’s supervised Robotaxis in just four days.
General Motors’ Cruise, another prominent player, faced a significant setback in late 2023 when one of its robotaxis dragged a pedestrian for 20 feet in San Francisco after being involved in a collision with a human-driven vehicle. This incident led to the suspension of Cruise’s operations and a major reevaluation of its technology and safety protocols, demonstrating the severe consequences of autonomous vehicle malfunctions. The fallout from the Cruise incident highlighted the delicate public trust in this emerging technology.
Public Perception and the Road Ahead
Public perception plays a critical role in the adoption and success of robotaxi services. While a recent survey indicated that over 55% of Americans would consider riding in a Tesla Robotaxi, the documented incidents and associated viral videos risk eroding this nascent trust. The scrutiny of these early tests is magnified by social media, where every glitch is filmed, analyzed, and widely shared, shaping public opinion quickly.
Tesla maintains that the current Austin service is an “early-access program,” not a full public launch. The company emphasizes that the vehicles avoid complicated intersections and bad weather conditions, and that riders must be at least 18 years old. Despite the initial setbacks, many early testers also reported smooth and impressive rides, indicating that the system performs well under ideal conditions.
FAQs
Q1: What is Tesla Robotaxi?
A1: Tesla Robotaxi is a ride-hailing service operated by Tesla that uses its vehicles equipped with Full Self-Driving (FSD) software to transport passengers without a human driver actively controlling the vehicle. The initial service uses modified Tesla Model Y cars.
Q2: Where is Tesla Robotaxi currently operating?
A2: As of June 22, 2025, Tesla Robotaxi launched in a limited, geofenced area of Austin, Texas, as an early-access program.
Q3: What problems have been reported with Tesla Robotaxi?
A3: Since its launch, reports and videos have shown Tesla Robotaxis exhibiting erratic driving, including entering oncoming traffic lanes, sudden unexplained braking (phantom braking), missing turns, and driving over curbs.
Q4: Is a human driver present in Tesla Robotaxis?
A4: Yes, in the current early-access program in Austin, each Tesla Robotaxi has a Tesla employee acting as a “safety monitor” in the front passenger seat. This monitor observes the vehicle’s performance and can intervene if needed, though they do not have traditional driving controls.
Q5: How does Tesla’s autonomous system differ from competitors like Waymo?
A5: Tesla’s autonomous system, Full Self-Driving (FSD), primarily relies on a “vision-only” approach, using cameras and artificial intelligence. Competitors like Waymo integrate multiple sensor types, including LiDAR and radar, alongside cameras for a more comprehensive perception of their surroundings.
Q6: What is the National Highway Traffic Safety Administration (NHTSA) doing about the reported incidents?
A6: NHTSA is aware of the incidents and is in contact with Tesla to gather additional information. The agency has an ongoing investigation into Tesla’s FSD software due to previous incidents and is monitoring the Robotaxi service’s operational safety.
Q7: When does Elon Musk expect millions of Tesla Robotaxis to be on the road?
A7: Elon Musk has stated a target of having millions of Tesla robotaxis operating by the second half of 2026, with plans for a purpose-built “Cybercab” without a steering wheel or pedals. However, the current early rollout in Austin indicates significant challenges remain before achieving this scale.