Alibaba touts new AI model superior to DeepSeek’s and Meta’s

Bloomberg
29 Jan

Alibaba's latest AI model may score better than Meta's and DeepSeek's.

Alibaba published benchmark scores and touted what it called world-leading performance with its new artificial intelligence model release.

The upgraded Qwen 2.5 Max edition scored better than Meta Platforms' Llama and DeepSeek's V3 model in various tests, according to figures in Alibaba Cloud's announcement on WeChat. Alongside Tencent Holdings and Baidu, Alibaba has poured significant resources into its cloud services segment and is locked in a heated contest to recruit China's AI developers to its tools.

DeepSeek, a 20-month-old start-up founded in Alibaba's home city of Hangzhou, became a global sensation this week and now figures prominently as the first benchmark Alibaba appears to measure itself against. Alibaba Cloud also shared scores suggesting its AI beats OpenAI's and Anthropic's models on certain benchmarks.

Cloud service providers like Alibaba and Tencent have slashed their pricing in recent months in an effort to win over more users. DeepSeek has already contributed to that price war, alongside a half-dozen other promising AI startups in China that have secured funding at unicorn valuations.

