DPO (Direct Preference Optimization) LoRA for XL and 1.5 - OpenRail++

Updated 2/14/2025

Type: LoRA
Stats: 237 · 1
Base Model: SD
Created: 12/24/2023

About this model

This model provides DPO (Direct Preference Optimization) LoRAs for Stable Diffusion XL and 1.5, fine-tuned on human-preferred images to improve image quality and prompt adherence. The LoRAs, licensed under OpenRail++, can be combined with other fine-tuned Stable Diffusion models to improve how faithfully they follow a given prompt.
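
Combining a LoRA like this with an existing Stable Diffusion pipeline can be sketched with Hugging Face `diffusers`, which this page does not prescribe; the weight file name and LoRA strength below are placeholders, not values from the model page.

```python
# Sketch: applying a DPO LoRA to an SDXL pipeline via diffusers.
# "dpo_sdxl_lora.safetensors" is a hypothetical file name for the
# downloaded LoRA weights; adjust the path and scale to taste.

def apply_dpo_lora(pipe, lora_path, scale=1.0):
    """Load a LoRA into an existing diffusers pipeline and fuse it at the given strength."""
    pipe.load_lora_weights(lora_path)   # attach the LoRA adapters to the UNet / text encoders
    pipe.fuse_lora(lora_scale=scale)    # optionally bake the LoRA in at this weight
    return pipe

if __name__ == "__main__":
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe = apply_dpo_lora(pipe, "dpo_sdxl_lora.safetensors", scale=0.8)
    image = pipe("a red bicycle leaning against a blue wall").images[0]
    image.save("out.png")
```

The same pattern works for SD 1.5 by swapping in `StableDiffusionPipeline` and the 1.5 version of the LoRA; a fused LoRA can later be removed with `pipe.unfuse_lora()`.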

Version: SD 1.5 - V1.0
