Chinese AI pioneer SenseTime is leveraging its computer vision roots to lead the next phase of AI, shifting towards multimodal systems and embodied intelligence in the physical world. Co-founder and chief scientist Lin Dahua said the approach is similar to Google's: start with vision capabilities as the core, then add language to build true multimodal systems.
SenseTime, a Hong Kong-listed company long regarded as one of the world's leading facial recognition providers, is seeking a new role in the generative AI era that began with ChatGPT's launch three years ago. In an interview with the Post on Wednesday, Lin explained that the company's longstanding expertise in vision-based AI puts it in a strong position to lead in embodied intelligence, robotics and AI agents operating in real-world environments, amid growing debate over the limits of large language models (LLMs).
"Our strategic approach is somewhat similar to Google’s in the United States, which primarily focuses on multimodal AI including the latest Nano Banana Pro. They also start with vision capabilities as the core, then add language abilities to create real multimodal systems," said Lin, who is also an associate professor of information engineering at the Chinese University of Hong Kong.
Lin extended the comparison to Google, which has deep capabilities across the AI stack, including its own TPU chips for training models, noting that SenseTime's decision as early as 2018 to build large-scale data centres laid a solid foundation for its ambitions. As of August, the company's total computing power stood at about 25,000 petaflops, up 8.7 per cent since the start of the year, after surging 92 per cent over the whole of 2024.
The pivot signals SenseTime's shift away from hype towards more hardware-focused investment, as the company seeks to regain its edge in multimodal, real-world AI.