Sina Tech, Feb. 25 evening news — Alibaba Cloud announced that its visual generation foundation model Wan 2.1 (万相) is now open source. The release uses the permissive Apache 2.0 license and covers all inference code and weights for both the 14B and 1.3B parameter versions, supporting both text-to-video and image-to-video tasks. Developers worldwide can download and try the models on GitHub, Hugging Face, and the ModelScope (魔搭) community.
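As a quick way to get started, the sketch below pulls the model weights from Hugging Face with the huggingface_hub library. The repo id Wan-AI/Wan2.1-T2V-1.3B is an assumption inferred from the naming in the announcement (1.3B, text-to-video), not something stated in the article; check the official GitHub or Hugging Face pages for the actual identifiers.

```python
# Minimal sketch: download the Wan 2.1 weights from Hugging Face.
# NOTE: the repo id is hypothetical, inferred from the announcement's
# naming; verify it against the official release pages.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Wan-AI/Wan2.1-T2V-1.3B")
print(f"Model files downloaded to: {local_dir}")
```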
According to the announcement, the 14B Wan model stands out in instruction following, complex motion generation, physical modeling, and generating video with legible text. On the authoritative VBench benchmark, Wan 2.1 tops the leaderboard with an overall score of 86.22%, ahead of Sora, Luma, Pika, and other domestic and international models. The 1.3B version not only outscores larger open-source models in testing but even approaches some closed-source models, and it runs on consumer GPUs: just 8.2 GB of VRAM is enough to generate high-quality video, making it well suited to derivative model development and academic research.
On the algorithm side, Wan builds on the mainstream DiT architecture and the Flow Matching paradigm with a linear noise trajectory, and the team developed an efficient causal 3D VAE, scalable pre-training strategies, and related components. Take the 3D VAE as an example: to efficiently support encoding and decoding of videos of arbitrary length, Wan implements a feature-cache mechanism in the causal convolution modules of the 3D VAE. This replaces direct end-to-end encoding and decoding of long videos and enables efficient processing of unboundedly long 1080P video. In addition, by moving spatial downsampling compression earlier in the pipeline, it cuts inference-time memory usage by a further 29% with no loss in performance.
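To make the feature-cache idea concrete, here is a minimal PyTorch sketch (an illustration, not the Wan source code) of a causal 3D convolution that keeps the trailing frames of each chunk as a cache. Because the convolution is causal (it only looks at past frames), caching the last kernel_t - 1 input frames lets a long video be encoded chunk by chunk with the same result as a single end-to-end pass.

```python
# Illustrative sketch of feature caching in a causal 3D convolution;
# class and attribute names are hypothetical, not from the Wan repo.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CachedCausalConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_t=3, kernel_s=3):
        super().__init__()
        self.pad_t = kernel_t - 1  # causal: pad with past frames only
        self.conv = nn.Conv3d(
            in_ch, out_ch,
            kernel_size=(kernel_t, kernel_s, kernel_s),
            padding=(0, kernel_s // 2, kernel_s // 2),  # temporal pad handled manually
        )
        self.cache = None  # last pad_t input frames from the previous chunk

    def forward(self, x):  # x: (B, C, T, H, W), one chunk of the video
        if self.cache is None:
            # First chunk: zero-pad the past on the temporal axis.
            x = F.pad(x, (0, 0, 0, 0, self.pad_t, 0))
        else:
            # Later chunks: prepend the cached frames instead of
            # re-encoding the whole video end to end.
            x = torch.cat([self.cache, x], dim=2)
        # Save the trailing frames the next chunk will need.
        self.cache = x[:, :, -self.pad_t:].detach()
        return self.conv(x)

conv = CachedCausalConv3d(3, 16)
chunk1 = torch.randn(1, 3, 8, 64, 64)  # first 8 frames
chunk2 = torch.randn(1, 3, 8, 64, 64)  # next 8 frames
out = torch.cat([conv(chunk1), conv(chunk2)], dim=2)  # streaming encode
```

Because each chunk's output depends only on that chunk plus the cached frames, memory stays bounded regardless of video length, which is what makes encoding and decoding of arbitrarily long video feasible.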
According to the Wan team's experiments, across 14 major dimensions and 26 sub-dimensions covering motion quality, visual quality, style, multiple objects, and more, Wan delivered industry-leading results and took first place in five categories. (Reporter: Wen Meng)
Editor in charge: He Junxi