From the perspective of industry development, Intel has summarized the evolution of AIGC and large language models in three stages: the first is the Age of AI Co-Pilots, the second the Age of AI Agents, and the third the Age of AI Functions.
“From China’s point of view, our pace of innovation is very fast. Signs of many AI Agent capabilities are already visible, and AI Agent use cases have in fact gradually emerged around us,” Zhang Yu, chief technology officer of Intel China’s Network and Edge Division and Intel senior principal AI engineer, told Caixin reporters.
As an example, he noted that telecom operators’ network security and network operations products can already use a large model to analyze network log files, allowing network administrators to act on the analysis results in a timely manner. Sachin Katti said he expects to see more AI agents emerge in the next one to two years.
Intel executives said at the conference that China’s generative AI market is expected to reach $3.3 billion this year. According to Gartner, by 2026, 80% of global enterprises will use generative AI, and 50% of global edge deployments will include AI. STL Partners projects that the global edge services market will reach $445 billion by 2030, with AI as the largest edge workload.
With the edge AI market continuing to heat up, what is actually driving companies to deploy at the edge?
“First, data security: whether it is safer to keep data in the cloud or more reasonable to keep it at the edge. Second, as the volume of data generated at the edge grows, transmission bandwidth becomes a problem. Although China’s transmission-bandwidth infrastructure is the most advanced in the world, the large amount of data generated at the edge can still cause network storms, so network management and data-transmission strategies need further optimization. Third, real-time requirements: many tasks can only meet their latency demands if handled at the edge.” Guo Wei, vice president of Intel’s marketing group and general manager of Intel China’s Network, Edge and Channel Data Center Division, offered these three reasons in an analysis for Caixin reporters.
Sachin Katti told Caixin that AI currently runs mainly in the cloud, but as edge devices generate large amounts of data locally, the cost of transferring all of it to the cloud has become quite high. “Some factories may not be able to afford the cost of training very large, industry-scale models, which can have trillions of parameters. So they tend to train medium-sized models and tailor them to their own data.”
The reporter learned that today’s large models are trained, run for inference, invoked, and deployed in the cloud, which places extremely high demands on compute-cluster scale, network stability, and energy efficiency. As large models enter the stage of competing on real-world deployment, more of this analysis and processing will need to be completed locally at the edge.