Describe the bug
The example.py from the README gets stuck: the log below ends after the transformer safetensors shards finish loading (100%), and nothing further is printed.
INFO 12-06 17:24:26 [__init__.py:109] ROCm platform is unavailable: No module named 'amdsmi'
WARNING 12-06 17:24:26 [logger.py:122] By default, logger.info(..) will only log from the local main process. Set logger.info(..., is_local_main_process=False) to log from all processes.
INFO 12-06 17:24:26 [__init__.py:47] CUDA is available
Starting FastVideo example...
Set attention backend to VIDEO_SPARSE_ATTN
Loading model...
INFO 12-06 17:24:27 [multiproc_executor.py:41] Use master port: 60331
INFO 12-06 17:24:29 [__init__.py:109] ROCm platform is unavailable: No module named 'amdsmi'
WARNING 12-06 17:24:29 [logger.py:122] By default, logger.info(..) will only log from the local main process. Set logger.info(..., is_local_main_process=False) to log from all processes.
INFO 12-06 17:24:29 [__init__.py:47] CUDA is available
INFO 12-06 17:24:30 [parallel_state.py:976] Initializing distributed environment with world_size=1, device=cuda:0
INFO 12-06 17:24:30 [parallel_state.py:788] Using nccl backend for CUDA platform
[W1206 17:24:30.791664765 ProcessGroupNCCL.cpp:929] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS is the default now, this environment variable is thus deprecated. (function operator())
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
INFO 12-06 17:24:30 [utils.py:517] Downloading model snapshot from HF Hub for FastVideo/FastWan2.1-T2V-1.3B-Diffusers...
Fetching 29 files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 29/29 [00:00<00:00, 21275.99it/s]
INFO 12-06 17:24:30 [utils.py:524] Downloaded model to /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476
INFO 12-06 17:24:30 [__init__.py:43] Model path: /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476
INFO 12-06 17:24:30 [utils.py:591] Diffusers version: 0.33.0.dev0
INFO 12-06 17:24:30 [__init__.py:53] Building pipeline of type: basic
INFO 12-06 17:24:30 [pipeline_registry.py:163] Loading pipelines for types: ['basic']
INFO 12-06 17:24:30 [pipeline_registry.py:219] Loaded 9 pipeline classes across 1 types
INFO 12-06 17:24:30 [profiler.py:191] Torch profiler disabled; returning no-op controller
INFO 12-06 17:24:30 [composed_pipeline_base.py:83] Loading pipeline modules...
INFO 12-06 17:24:30 [utils.py:512] Model already exists locally at /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476
INFO 12-06 17:24:30 [composed_pipeline_base.py:207] Model path: /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476
INFO 12-06 17:24:30 [utils.py:591] Diffusers version: 0.33.0.dev0
INFO 12-06 17:24:30 [composed_pipeline_base.py:267] Loading pipeline modules from config: {'_class_name': 'WanDMDPipeline', '_diffusers_version': '0.33.0.dev0', 'scheduler': ['diffusers', 'UniPCMultistepScheduler'], 'text_encoder': ['transformers', 'UMT5EncoderModel'], 'tokenizer': ['transformers', 'T5TokenizerFast'], 'transformer': ['diffusers', 'WanTransformer3DModel'], 'vae': ['diffusers', 'AutoencoderKLWan']}
INFO 12-06 17:24:30 [composed_pipeline_base.py:310] Loading required modules: ['text_encoder', 'tokenizer', 'vae', 'transformer', 'scheduler']
INFO 12-06 17:24:30 [component_loader.py:595] Loading scheduler using diffusers from /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476/scheduler
INFO 12-06 17:24:30 [composed_pipeline_base.py:344] Loaded module scheduler from /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476/scheduler
INFO 12-06 17:24:30 [component_loader.py:595] Loading text_encoder using transformers from /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476/text_encoder
INFO 12-06 17:24:30 [component_loader.py:223] HF Model config: {'architectures': ['UMT5EncoderModel'], 'classifier_dropout': 0.0, 'd_ff': 10240, 'd_kv': 64, 'd_model': 4096, 'decoder_start_token_id': 0, 'dense_act_fn': 'gelu_new', 'dropout_rate': 0.1, 'eos_token_id': 1, 'feed_forward_proj': 'gated-gelu', 'initializer_factor': 1.0, 'is_encoder_decoder': True, 'is_gated_act': True, 'layer_norm_epsilon': 1e-06, 'num_decoder_layers': 24, 'num_heads': 64, 'num_layers': 24, 'output_past': True, 'pad_token_id': 0, 'relative_attention_max_distance': 128, 'relative_attention_num_buckets': 32, 'scalable_attention': True, 'tie_word_embeddings': False, 'use_cache': True, 'vocab_size': 256384}
Loading safetensors checkpoint shards: 0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 20% Completed | 1/5 [00:00<00:01, 2.03it/s]
Loading safetensors checkpoint shards: 40% Completed | 2/5 [00:01<00:02, 1.40it/s]
Loading safetensors checkpoint shards: 60% Completed | 3/5 [00:02<00:01, 1.16it/s]
Loading safetensors checkpoint shards: 80% Completed | 4/5 [00:03<00:00, 1.09it/s]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:04<00:00, 1.02it/s]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:04<00:00, 1.12it/s]
INFO 12-06 17:24:35 [component_loader.py:270] Loading weights took 4.57 seconds
INFO 12-06 17:25:22 [composed_pipeline_base.py:344] Loaded module text_encoder from /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476/text_encoder
INFO 12-06 17:25:22 [component_loader.py:595] Loading tokenizer using transformers from /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476/tokenizer
INFO 12-06 17:25:22 [component_loader.py:374] Loading tokenizer from /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476/tokenizer
INFO 12-06 17:25:23 [component_loader.py:383] Loaded tokenizer: T5TokenizerFast
INFO 12-06 17:25:23 [composed_pipeline_base.py:344] Loaded module tokenizer from /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476/tokenizer
INFO 12-06 17:25:23 [component_loader.py:595] Loading transformer using diffusers from /home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476/transformer
INFO 12-06 17:25:23 [component_loader.py:439] transformer cls_name: WanTransformer3DModel
INFO 12-06 17:25:23 [component_loader.py:474] Loading model from 1 safetensors files: ['/home/loopstring/.cache/huggingface/hub/models--FastVideo--FastWan2.1-T2V-1.3B-Diffusers/snapshots/75640eb8d44c1d5f9dd4c7824ecfb39bf8e4d476/transformer/diffusion_pytorch_model.safetensors']
INFO 12-06 17:25:23 [component_loader.py:481] Loading model from WanTransformer3DModel, default_dtype: torch.bfloat16
INFO 12-06 17:25:23 [fsdp_load.py:95] Loading model with default_dtype: torch.bfloat16
INFO 12-06 17:25:23 [cuda.py:124] Trying FASTVIDEO_ATTENTION_BACKEND=VIDEO_SPARSE_ATTN
INFO 12-06 17:25:23 [cuda.py:126] Selected backend: AttentionBackendEnum.VIDEO_SPARSE_ATTN
INFO 12-06 17:25:23 [cuda.py:176] Using Video Sparse Attention backend.
INFO 12-06 17:25:23 [cuda.py:124] Trying FASTVIDEO_ATTENTION_BACKEND=VIDEO_SPARSE_ATTN
INFO 12-06 17:25:23 [cuda.py:126] Selected backend: None
INFO 12-06 17:25:23 [cuda.py:240] Cannot use FlashAttention-2 backend because the flash_attn package is not found. Make sure that flash_attn was built and installed (on by default).
INFO 12-06 17:25:23 [cuda.py:247] Using Torch SDPA backend.
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 4.99it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 4.95it/s]
Reproduction
Run the example.py from the README; a rough sketch of what it does is included below for reference.
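For context, this is roughly what I understand the README example to do (the prompt and print statements are placeholders, not the exact contents of example.py; `VideoGenerator.from_pretrained` and `generate_video` are taken from the FastVideo docs):

```python
# Rough sketch of the README example (placeholder prompt; the real example.py may differ).
import os

# Matches "Trying FASTVIDEO_ATTENTION_BACKEND=VIDEO_SPARSE_ATTN" in the log above.
os.environ["FASTVIDEO_ATTENTION_BACKEND"] = "VIDEO_SPARSE_ATTN"

from fastvideo import VideoGenerator


def main():
    print("Loading model...")
    # The run hangs during this loading step: the log above never reports the
    # transformer module as loaded, and the vae is never reached.
    generator = VideoGenerator.from_pretrained(
        "FastVideo/FastWan2.1-T2V-1.3B-Diffusers",
        num_gpus=1,  # single GPU, matches world_size=1 in the log
    )
    # Placeholder prompt; no generation output is ever produced.
    generator.generate_video("A curious raccoon exploring a sunlit garden")


if __name__ == "__main__":
    main()
```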
Environment
WSL, CUDA 13.0, RTX 5090
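If it helps, the CUDA/GPU details above can be cross-checked with standard PyTorch calls (nothing FastVideo-specific; the commented values are what I expect on this machine):

```python
# Cross-check of the environment details above using standard PyTorch APIs.
import torch

print(torch.__version__)              # installed PyTorch build
print(torch.version.cuda)             # CUDA version the wheel targets (13.0 here)
print(torch.cuda.is_available())      # matches "CUDA is available" in the log
print(torch.cuda.get_device_name(0))  # expected: NVIDIA GeForce RTX 5090
```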