Huatai Securities: DeepSeek Expected to Accelerate the Decoupling of Model Training from CUDA

美港电讯
21 Feb

[Huatai Securities: DeepSeek Expected to Accelerate the Decoupling of Model Training from CUDA] Jin10 Data, February 21 -- Huatai Securities Research notes that in V3, DeepSeek used PTX, a layer lower-level than CUDA, to optimize hardware-level algorithms. PTX is the intermediate code produced by CUDA compilation and acts as a bridge between CUDA and the final machine code. NSA, by contrast, was written efficiently in Triton, the GPU programming language proposed by OpenAI. Triton's backend can target CUDA as well as other GPU stacks, including AMD's ROCm and the software stacks of domestic compute chips, such as Cambricon's Siyuan 590 chip and the HYGON ISA instruction set built into Hygon Information's Shensuan No. 1 (DCU). Although LLM training has not fully left the CUDA ecosystem in the short term, the release of DeepSeek NSA shows an initial trend toward decoupling from CUDA and lays the groundwork for adapting to more types of compute chips. Domestic compute platforms, represented by Ascend, have already been well adapted to domestic models such as DeepSeek-R1 and achieved efficient inference. Huatai Securities believes that, with overseas compute supply restricted, optimization for domestic compute is likely to see continued progress and deserves attention.
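To make the Triton point concrete, below is a minimal illustrative sketch, not DeepSeek's NSA code: a standard vector-add kernel written with the open-source triton package. The kernel is ordinary Python; on NVIDIA GPUs the Triton compiler lowers it to PTX, while alternative backends can, in principle, target other vendors' hardware without rewriting the kernel in CUDA C++.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard against out-of-range loads/stores
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Launch a 1-D grid with enough blocks to cover all elements.
    out = torch.empty_like(x)
    n_elements = out.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
```

For example, calling add(a, b) on two GPU tensors of equal shape returns their elementwise sum; the point of the sketch is that the kernel source itself contains no CUDA C++, which is what makes retargeting to non-NVIDIA backends conceivable.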

