A Tesla Model 3 veered into oncoming traffic during a livestream demonstration of its Full Self-Driving system in China, causing a head-on collision. No one was seriously injured, but the incident has renewed concerns about overreliance on the system's capabilities. The driver later released footage showing that the software initiated the erroneous lane change.
Earlier this year, Tesla launched its Level 2 driver-assistance system, known as Full Self-Driving (FSD), in China. Despite the name, it requires constant driver supervision, just as it does in the United States. Chinese regulators promptly required Tesla to rename the feature, deeming the original label misleading about its actual capabilities.
Enthusiastic Tesla owners in China have taken to platforms like Douyin, the local version of TikTok, to broadcast their experiences with FSD. These videos often aim to showcase the system operating without intervention and to compare it against rival driver-assistance technologies from domestic automakers.
Last week, a Douyin user named 切安好 livestreamed an FSD test drive in a Model 3. The vehicle abruptly crossed into the oncoming lane and collided head-on with another car. Although the stream itself drew little attention at the time, clips of the crash spread rapidly online afterward.
Fortunately, the crash resulted in no serious injuries. Skepticism initially arose over whether FSD was engaged at the time, because the driver withheld the full video, saying he intended to seek compensation directly from Tesla. Experts doubt that effort will succeed, since the company disclaims liability for crashes that occur while FSD or Autopilot is active.
Subsequently, the driver shared the recording, which confirms that FSD was active and initiated the ill-fated lane change. The incident underscores the risk of placing undue trust in Tesla's semi-autonomous features.
Observers, including Electrek commentators, urge drivers to stay attentive, noting that misuse of these features endangers everyone on the road. One widely shared response pointed to Tesla's promotional language, which claims FSD "gives you time back," a framing that can downplay the need for supervision. Tesla has also recently relaxed its driver-monitoring restrictions on phone use while FSD is engaged, adding to the debate.

Separately, Elon Musk's AI chatbot Grok erroneously described the crash as fabricated and manually driven, illustrating the broader problem of misinformation spreading online.