Update dependency diffusers to v0.36.0 #89
This PR contains the following updates:
| Package | Change |
| --- | --- |
| diffusers (huggingface/diffusers) | `==0.35.2` → `==0.36.0` |

Release Notes

huggingface/diffusers (diffusers)
v0.36.0: Diffusers 0.36.0: Pipelines galore, new caching method, training scripts, and more 🎄
The release features a number of new image and video pipelines, a new caching method, a new training script, new `kernels`-powered attention backends, and more. It is quite packed with a lot of new stuff, so make sure you read the release notes fully 🚀

New image pipelines

New video pipelines

New `kernels`-powered attention backends

The `kernels` library saves you a lot of time by providing pre-built kernel interfaces for various environments and accelerators. This release features three new `kernels`-powered attention backends (including `varlen` variants). This means that if any of these backends is supported by your development environment, you can skip the manual process of building the corresponding kernels and select the backend directly.

For more details, check out the documentation.
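The code snippet the notes refer to did not survive extraction. As a rough illustration of the backend-selection idea only (a toy sketch, not diffusers' or kernels' actual code; every name below is hypothetical), a library can keep a registry mapping backend names to interchangeable attention implementations and dispatch by name, registering pre-built accelerator kernels where the environment supports them:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def reference_attention(q, k, v):
    """Naive scaled dot-product attention over lists of row vectors."""
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        weights = softmax(scores)
        out.append([sum(w * vj[c] for w, vj in zip(weights, v))
                    for c in range(len(v[0]))])
    return out

# Registry of available "backends". A real library would register
# pre-built accelerator kernels here instead of pure-Python functions,
# keyed by names the user can pass as a string.
ATTENTION_BACKENDS = {"native": reference_attention}

def dispatch_attention(q, k, v, backend="native"):
    """Look up the requested backend and run it; fail loudly if absent."""
    try:
        impl = ATTENTION_BACKENDS[backend]
    except KeyError:
        raise ValueError(f"unknown attention backend: {backend!r}")
    return impl(q, k, v)
```

In diffusers itself, backend selection is exposed on the model (see the attention-backends documentation for the exact method and backend names); the sketch above only shows the general dispatch pattern.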
TaylorSeer cache
TaylorSeer is now supported in Diffusers, delivering up to 3x speedups with little to no quality compromise. Thanks to @toilaluan for contributing this in #12648. Check out the documentation here.
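At its core, TaylorSeer reuses cached features by extrapolating them forward in time instead of recomputing them at every denoising step. A minimal sketch of that forecasting idea (illustrative only — the function name is mine, and diffusers' actual implementation caches transformer feature tensors, not scalars):

```python
def forecast_next(history, order=2):
    """Extrapolate the next value from equally spaced past samples using
    backward finite differences (a discrete Taylor expansion):

        f(t+1) ≈ f(t) + ∇f(t) + ∇²f(t) + ...

    Exact for polynomials of degree <= order.
    """
    diffs = list(history)
    estimate = history[-1]
    for _ in range(order):
        # Each pass turns the sequence into its successive differences.
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
        if not diffs:
            break  # not enough history for this order
        estimate += diffs[-1]
    return estimate
```

For example, with the quadratic samples `[0.0, 1.0, 4.0, 9.0]` (f(t) = t²), `forecast_next(..., order=2)` predicts `16.0` exactly. On steps where the cache is trusted, such a forecast stands in for the expensive computation, and a full recompute still runs periodically to refresh the cached history.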
New training script
Our Flux.2 integration features a LoRA fine-tuning script that you can check out here. We provide a number of optimizations to help make it run on consumer GPUs.
Misc
AttentionMixin: Making certain compatible models subclass from the `AttentionMixin` class helped us get rid of 2K LoC. Going forward, users can expect more such refactorings that will help make the library leaner and simpler. Check out #12463 for more details.

All commits

- `VAETesterMixin` to consolidate tests for slicing and tiling by @sayakpaul in #12374
- `AutoencoderMixin` to abstract common methods by @sayakpaul in #12473
- `upper()` by @sayakpaul in #12479
- `lodestones/Chroma1-HD` by @josephrocca in #12508
- `local_dir` by @DN6 in #12381
- `testing_utils.py` by @DN6 in #12621
- `test_save_load_float16` by @kaixuanliu in #12500
- `SanaImageToVideoPipeline` support by @lawrence-cj in #12634
- `AutoencoderKLWan`'s `dim_mult` default value back to list by @dg845 in #12640
- `kernels` by @sayakpaul in #12439
- `record_stream` in group offloading is not working properly by @KimbingNg in #12721
- `AttentionMixin` for compatible classes by @sayakpaul in #12463
- `upcast_vae` in SDXL based pipelines by @DN6 in #12619
- `from_single_file` by @hlky in #12756

Significant community contributions
The following contributors have made significant changes to the library over the last release:
- `AutoencoderKLWan`'s `dim_mult` default value back to list (#12640)
- `local_dir` (#12381)
- `testing_utils.py` (#12621)
- `upcast_vae` in SDXL based pipelines (#12619)
- `SanaImageToVideoPipeline` support (#12634)

Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.