At What Stage Has AI Technology Currently Developed?

  • 2025-07-28

  The pace of AI technology advancement can be described as "one day in the human world, one year in AI." Wan Weixing believes that the foundational and application-layer technologies of AI Agents have been developing rapidly since 2024. He argues that in the field of Agents, "a single day can bring entirely new changes."

  Industry insiders generally agree that 2025 will mark the "Year of the AI Agent." With capabilities like deep reasoning, autonomous planning, decision-making, and execution, AI is undergoing a paradigm shift in its development path. The trend is shifting from "I ask, AI answers" to "I ask, AI does," gradually being applied across various business scenarios.

  Smart terminal Agents are a key focus for LeapStar. According to company representatives, over the past two years, LeapStar’s self-developed Step series of large models have been integrated into flagship products of leading companies in automotive, embodied intelligence, and IoT. More than 50% of top domestic smartphone brands collaborate with LeapStar on AI Agents. On July 25, ahead of the 2025 World AI Conference, LeapStar unveiled its next-generation foundational model—Step 3—in Shanghai.

  LeapStar representatives noted that the explosion of Agents requires two key conditions: multimodal capabilities and slow-thinking abilities, both of which saw breakthrough progress in 2024. Thus, 2025 will witness a large-scale emergence of Agents, making it the "Year of the AI Agent."

  Wan Weixing also pointed out that the release of OpenAI's o1 and o3 models, along with DeepSeek's R1 model, indicates that large models are maturing in multimodal support and chain-of-thought (CoT) reasoning, paving the way for Agent deployment. Recently, OpenAI, Manus, and others have launched Agent-related applications, expanding real-world use cases. Additionally, the declining cost of tokens (the basic units of text that large models process and that API providers bill by) and the availability of core APIs for third-party integration are enhancing Agent experiences.

  "This year, we’ve seen many Agent-based products go live, and users can clearly feel the transformative interaction brought by Agentic AI (a more autonomous form of AI Agents), along with improvements in efficiency and convenience," Wan said. He believes that edge devices hold significant potential in the Agent space this year thanks to their advantages in perception and personalization, prompting industries to deploy certain modules and solutions on the edge.

  "While models are becoming more powerful—supporting multimodal inputs and chain-of-thought reasoning—there’s still a gap in solving complex problems," Wan noted. For instance, an AI Agent might make mistakes in real-world scenarios like ordering coffee, such as selecting the wrong flavor or getting stuck mid-process.


  Wan emphasized that current AI Agents are not yet fully autonomous. "In specific vertical applications, Agents can automate most tasks, but human intervention is still needed at the final stage," he explained. This intervention might involve error correction or completing the payment step in an ordering process. Specifically, Agents still need improvements in "task reliability and accuracy," "generalizability," and "ecosystem integration."
