Gelonghui, April 18 | Alibaba said in a statement on Thursday that it has released its latest open-source artificial intelligence model, which can automatically generate a video from just two photos supplied as the first and last frames. The model is built on the company's Wan2.1 foundation model architecture and gives short-video creators greater creative freedom. The model, named Wan2.1-FLF2V-14B, is available on Hugging Face, GitHub, and Alibaba Cloud's ModelScope community. Users can create 5-second videos for free on the Tongyi Wanxiang website.
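For readers who want to try the released weights, the sketch below shows one way to pull the checkpoint from Hugging Face using the huggingface_hub client. This is not from the article: the repository id "Wan-AI/Wan2.1-FLF2V-14B-720P" and the local directory name are assumptions, so verify the exact repo name on the official Wan-AI organization page before running it.

```python
# Minimal sketch: download the Wan2.1-FLF2V-14B checkpoint from Hugging Face.
# The repo_id below is an assumption; confirm it on the Wan-AI org page.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Wan-AI/Wan2.1-FLF2V-14B-720P",  # assumed repository id
    local_dir="./Wan2.1-FLF2V-14B",           # where to place the model files
)
print(f"Model files downloaded to: {local_path}")
```

The same checkpoint is also mirrored on Alibaba Cloud's ModelScope community, which may be faster for users in mainland China.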