- Wan: Open and Advanced Large-Scale Video Generative Models
👍 Multiple Tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation. 👍 Visual Text Generation: Wan2.1 is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
- GitHub - k4yt3x/video2x: A machine learning-based video super …
A machine learning-based video super resolution and frame interpolation framework. Est. Hack the Valley II, 2018 - k4yt3x/video2x
- Lightricks LTX-Video: Official repository for LTX-Video - GitHub
LTX-Video is the first DiT-based video generation model that can generate high-quality videos in real time. It can generate 30 FPS videos at 1216×704 resolution, faster than it takes to watch them.
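The "faster than it takes to watch them" claim above amounts to a simple throughput condition. The sketch below makes it concrete; the resolution and frame rate come from the snippet, while the example generation time is a hypothetical placeholder, not a measured benchmark.

```python
# Illustrative arithmetic for the "real-time generation" claim.
# 1216x704 at 30 FPS comes from the snippet; the 5 s / 4.2 s timing
# below is a made-up example, not an LTX-Video benchmark.

def is_real_time(duration_s: float, generation_s: float) -> bool:
    """A generator is real-time if producing a clip takes no longer
    than playing it back."""
    return generation_s <= duration_s

FPS = 30
WIDTH, HEIGHT = 1216, 704

# Pixel throughput the model must sustain to keep up with playback.
pixels_per_second = WIDTH * HEIGHT * FPS

print(pixels_per_second)       # pixels the model must emit per second
print(is_real_time(5.0, 4.2))  # hypothetical 5 s clip generated in 4.2 s
```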
- DepthAnything/Video-Depth-Anything - GitHub
This work presents Video Depth Anything, based on Depth Anything V2, which can be applied to arbitrarily long videos without compromising quality, consistency, or generalization ability. Compared with other diffusion-based models, it enjoys faster inference speed, fewer parameters, and higher …
- Create your first video in Google Vids
Optional: To make changes to your video clip, click Edit prompt. To add the video clip to your Vid: Hover over the generated video, then click Insert. This adds the video clip to your canvas. At the bottom of the Vids window, in the timeline, the video clip has its own object track. Learn more about generating video clips. Start by recording a video.
- GitHub - kijai/ComfyUI-WanVideoWrapper
ReCamMaster: WanVideo2_1_recammaster.mp4. TeaCache (with the old temporary WIP naive version, I2V): Note that with the new version the threshold values should be 10x higher.
- stepfun-ai/Step-Video-T2V - GitHub
We present Step-Video-T2V, a state-of-the-art (SoTA) text-to-video pre-trained model with 30 billion parameters and the capability to generate videos up to 204 frames. To enhance both training and inference efficiency, we propose a deep compression VAE for videos, achieving 16x16 spatial and 8x temporal compression ratios.
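The 16×16 spatial and 8× temporal compression ratios above imply roughly a 2048× (16·16·8) reduction in per-channel elements between pixel space and latent space. The sketch below illustrates that arithmetic; the input resolution is a made-up example, and plain floor division stands in for the VAE's actual rounding or padding scheme, which the snippet does not specify.

```python
# Rough sketch of the latent-size reduction implied by a VAE with
# 16x16 spatial and 8x temporal compression. The exact frame/pixel
# padding behavior of Step-Video-T2V's VAE is not given in the
# snippet; floor division is used here purely for illustration.

def latent_shape(frames: int, height: int, width: int) -> tuple:
    """Approximate latent grid size (per channel) after compression."""
    return (frames // 8, height // 16, width // 16)

# Hypothetical input: 204 frames (the model's stated maximum) at a
# made-up 544x992 resolution.
frames, height, width = 204, 544, 992
t, h, w = latent_shape(frames, height, width)

reduction = (frames * height * width) / (t * h * w)
print((t, h, w))   # latent grid dimensions
print(reduction)   # ~2048x fewer elements, modulo rounding
```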
- HunyuanVideo: A Systematic Framework For Large Video . . . - GitHub
We present HunyuanVideo, a novel open-source video foundation model that exhibits performance in video generation comparable to, if not superior to, leading closed-source models. In order to train the HunyuanVideo model, we adopt several key technologies for model learning, including data …