DPO (Direct Preference Optimization) LoRA for XL and 1.5 - OpenRail++
Updated 2/14/2025
About this model
This model provides DPO (Direct Preference Optimization) LoRAs for Stable Diffusion XL and 1.5, fine-tuned on human-chosen images to improve image quality and prompt adherence. The LoRAs, licensed under OpenRail++, can be applied on top of other fine-tuned Stable Diffusion models to improve how accurately they reflect a given prompt.
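As a rough sketch of the integration described above, the snippet below loads a DPO LoRA into a `diffusers` pipeline using the standard `load_lora_weights` / `fuse_lora` calls. The file path and scale value are placeholders, not values specified by this model card; substitute the actual downloaded LoRA file and tune the strength to taste.

```python
def apply_dpo_lora(pipe, lora_source, scale=0.8):
    """Load DPO LoRA weights into a diffusers pipeline and fuse them.

    `pipe` is any diffusers pipeline that supports LoRA loading (e.g.
    StableDiffusionXLPipeline); `lora_source` is a local .safetensors path
    or a Hugging Face repo id. Both arguments here are placeholders --
    point them at the actual LoRA file for this model.
    """
    pipe.load_lora_weights(lora_source)   # standard diffusers LoRA-loading call
    pipe.fuse_lora(lora_scale=scale)      # bake the LoRA into the weights at this strength
    return pipe

# Example usage (requires a GPU and the SDXL base weights; paths are placeholders):
#   import torch
#   from diffusers import StableDiffusionXLPipeline
#   pipe = StableDiffusionXLPipeline.from_pretrained(
#       "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
#   ).to("cuda")
#   pipe = apply_dpo_lora(pipe, "dpo-lora-sdxl.safetensors")
#   image = pipe("a photo of an astronaut riding a horse").images[0]
```

Fusing the LoRA (rather than keeping it as a separate adapter) fixes its contribution into the base weights, which is convenient when stacking it with another fine-tuned checkpoint.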
Version: SDXL - V1.0