Chip war: Chinese start-up aims to break Nvidia's grip on AI with new model framework

South China Morning Post
03-16

A new artificial intelligence (AI) framework developed by teams associated with China's Tsinghua University is said to be able to reduce reliance on Nvidia chips for AI model inference, marking the latest effort by the country to enhance technological self-sufficiency.

Chitu, a high-performance inference framework for large language models (LLMs), can operate on chips made in China, challenging the dominance of Nvidia's Hopper series graphics processing units (GPUs) in supporting certain models, such as DeepSeek-R1, according to a joint statement by start-up Qingcheng.AI and a team led by computer science professor Zhai Jidong at Tsinghua University on Friday.

AI frameworks serve as the building blocks for sophisticated AI models, offering a collection of libraries and tools that let developers design, train and validate complex models efficiently.
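As a generic illustration only, and not Chitu's actual interface, the kind of workflow such a framework streamlines can be sketched with PyTorch, a widely used open-source framework: defining a model from ready-made components, training it on data, and validating the result.

    # Minimal sketch of what an AI framework provides: model definition,
    # training, and validation. PyTorch is used purely as a generic
    # example; this is not Chitu's API.
    import torch
    import torch.nn as nn

    # Define a small model from the framework's building blocks.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Synthetic data stands in for a real dataset.
    x_train, y_train = torch.randn(256, 16), torch.randint(0, 2, (256,))
    x_val, y_val = torch.randn(64, 16), torch.randint(0, 2, (64,))

    # Train: the framework handles autograd and parameter updates.
    for epoch in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        optimizer.step()

    # Validate: run inference without gradient tracking.
    with torch.no_grad():
        accuracy = (model(x_val).argmax(dim=1) == y_val).float().mean()
        print(f"validation accuracy: {accuracy:.2f}")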

The Chitu framework, which has been open-sourced since Friday, supports mainstream models, including those from DeepSeek and Meta Platforms' Llama series, according to the company.

When tested with the full-strength version of DeepSeek-R1 using Nvidia's A800 GPUs, the framework achieved a 315 per cent increase in model inference speed while reducing GPU usage by 50 per cent compared to foreign open-source frameworks, the company said.
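Taken at face value, the two figures compound. Reading "315 per cent increase" literally as 4.15 times the baseline speed, and halving the number of GPUs, implies several times more throughput per GPU, as the rough calculation below shows; the company has not published its exact baseline or measurement method, so this is an assumption-laden sketch rather than a verified benchmark.

    # Back-of-the-envelope reading of the reported figures; baseline and
    # measurement details are not public, so treat this as an assumption.
    speedup = 1.0 + 3.15   # "315 per cent increase" read literally as 4.15x
    gpu_fraction = 0.5     # "reducing GPU usage by 50 per cent"

    # Throughput per GPU scales with overall speed divided by GPU count.
    per_gpu_gain = speedup / gpu_fraction
    print(f"implied per-GPU throughput gain: {per_gpu_gain:.2f}x")  # ~8.3x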

The initiative is part of a broader effort by Chinese AI companies to lessen dependence on Nvidia, whose high-performance GPUs are subject to US export controls. Nvidia is banned by Washington from selling its advanced H100 and H800 chips from the Hopper series to China-based clients.

The rise of Hangzhou-based DeepSeek, which has developed its AI models at a fraction of the cost and computational resources used by Western peers, has also raised questions about a potential decline in demand for Nvidia GPUs.

Qingcheng.AI was founded in 2023 by Zhai and his students from Tsinghua University, with Zhai serving as chief scientist.

Backed by Beijing's municipal fund for the AI industry, the start-up has partnered with top Chinese GPU makers, including Moore Threads, Enflame and Iluvatar CoreX, CEO Tang Xiongchao said in an interview with Chinese media last year.

Other tech companies in China have also stepped up efforts to reduce reliance on foreign technology amid the momentum generated by DeepSeek.

In February, Infinigence AI - a computing infrastructure platform provider supported by talent from Tsinghua and funding from major Chinese tech firms - announced it was working to foster collaboration among the country's seven leading AI chip developers: Biren Technology, Hygon Information Technology, Moore Threads, MetaX, Enflame, Iluvatar CoreX and Huawei Technologies' Ascend.

In a recent research paper, scientists from ByteDance, the parent company of TikTok, reported a 170 per cent increase in LLM training efficiency using an optimised system. The new system has already been implemented in some of ByteDance's production environments, achieving "savings of millions of GPU hours", the company said.

Copyright © 2025 South China Morning Post Publishers Ltd. All rights reserved.
