평범한 필기장
Paper: https://arxiv.org/abs/2410.02355
AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models
Large language models (LLMs) often exhibit hallucinations due to incorrect or outdated knowledge. Hence, model editing methods have emerged to enable targeted knowledge updates. To achieve this, a prevailing paradigm is the locating-then-editing approach…
Abstract: Model Editing…
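The null-space constraint in the title can be illustrated with plain linear algebra. This is a minimal sketch of the general idea (not AlphaEdit's actual editing algorithm): project a raw weight update onto the null space of a matrix `K` of "preserved" key vectors, so the edited weight leaves outputs on those keys unchanged. All matrix names and sizes here are hypothetical.

```python
import numpy as np

# Illustrative sketch only: enforce (W + delta_proj) @ K == W @ K by
# projecting the raw update `delta` onto the null space of K^T.
rng = np.random.default_rng(0)
d, k = 8, 3
W = rng.standard_normal((4, d))       # weight matrix being edited (hypothetical shape)
K = rng.standard_normal((d, k))       # columns: keys whose outputs must be preserved
delta = rng.standard_normal((4, d))   # raw (unconstrained) edit to W

# Projector onto the orthogonal complement of col(K): P = I - K (K^T K)^{-1} K^T.
P = np.eye(d) - K @ np.linalg.solve(K.T @ K, K.T)
delta_proj = delta @ P

# P @ K = 0, so the projected update cannot change W's action on the preserved keys.
```

With this projector, any edit applied through `P` is invisible on the preserved keys while still free to act on directions outside their span.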
Paper: https://arxiv.org/abs/2509.17786
Accurate and Efficient Low-Rank Model Merging in Core Space
In this paper, we address the challenges associated with merging low-rank adaptations of large neural networks. With the rise of parameter-efficient adaptation techniques, such as Low-Rank Adaptation (LoRA), model fine-tuning has become more accessible. Wh…
Abstract: This paper merges LoRAs in a common alig…
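For context on what "merging low-rank adaptations" means, here is a generic baseline sketch (not the paper's Core Space method): a LoRA adapter parameterizes an update as delta_W = B @ A, and a naive merge averages the full-rank deltas. All shapes and names are hypothetical.

```python
import numpy as np

# Baseline LoRA merge sketch (generic, not the Core Space method).
# LoRA update: delta_W = B @ A, with B: (d_out, r) and A: (r, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r = 16, 12, 4
W0 = rng.standard_normal((d_out, d_in))            # shared base weight
A1, B1 = rng.standard_normal((r, d_in)), rng.standard_normal((d_out, r))
A2, B2 = rng.standard_normal((r, d_in)), rng.standard_normal((d_out, r))

# Weight-space merge: average the reconstructed full-rank deltas.
merged = W0 + 0.5 * (B1 @ A1) + 0.5 * (B2 @ A2)

# Averaging the factors directly is NOT equivalent: the product of the
# averaged factors introduces cross terms (B1 @ A2 and B2 @ A1).
factor_avg = W0 + ((B1 + B2) / 2) @ ((A1 + A2) / 2)
```

The mismatch between the two merges is one reason naive factor averaging is a poor baseline, which motivates merging in an aligned space as the title suggests.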
Paper: https://arxiv.org/abs/2502.18461
K-LoRA: Unlocking Training-Free Fusion of Any Subject and Style LoRAs
Recent studies have explored combining different LoRAs to jointly generate learned style and content. However, existing methods either fail to effectively preserve both the original subject and style simultaneously or require additional training. In this p…
Abstract: Combining different LoRAs…
Paper: https://arxiv.org/abs/2505.23758
LoRAShop: Training-Free Multi-Concept Image Generation and Editing with Rectified Flow Transformers
We introduce LoRAShop, the first framework for multi-concept image editing with LoRA models. LoRAShop builds on a key observation about the feature interaction patterns inside Flux-style diffusion transformers: concept-specific transformer features activat…
Paper: https://arxiv.org/abs/2412.08629
FlowEdit: Inversion-Free Text-Based Editing Using Pre-Trained Flow Models
Editing real images using a pre-trained text-to-image (T2I) diffusion/flow model often involves inverting the image into its corresponding noise map. However, inversion by itself is typically insufficient for obtaining satisfactory results, and therefore m…
Abstract: Existing T2I mod…
Paper: https://arxiv.org/abs/2503.12356
Localized Concept Erasure for Text-to-Image Diffusion Models Using Training-Free Gated Low-Rank Adaptation
Fine-tuning based concept erasing has demonstrated promising results in preventing generation of harmful contents from text-to-image diffusion models by removing target concepts while preserving remaining concepts. To maintain the generation capabilit…
Paper: https://arxiv.org/abs/2504.12782
Set You Straight: Auto-Steering Denoising Trajectories to Sidestep Unwanted Concepts
Ensuring the ethical deployment of text-to-image models requires effective techniques to prevent the generation of harmful or inappropriate content. While concept erasure methods offer a promising solution, existing finetuning-based approaches suffer from…
Abstract: …
Paper: https://arxiv.org/abs/2503.19783
Fine-Grained Erasure in Text-to-Image Diffusion-based Foundation Models
Existing unlearning algorithms in text-to-image generative models often fail to preserve the knowledge of semantically related concepts when removing specific target concepts: a challenge known as adjacency. To address this, we propose FADE (Fine grained A…
Abstract: Existing concept e…
Paper: https://arxiv.org/abs/2507.12283
FADE: Adversarial Concept Erasure in Flow Models
Diffusion models have demonstrated remarkable image generation capabilities, but also pose risks in privacy and fairness by memorizing sensitive concepts or perpetuating biases. We propose a novel concept erasure method for text-to-image diffusion…
Abstract: This paper proposes a trajectory-aware finetuning s…
Paper: https://arxiv.org/abs/2410.05664
Holistic Unlearning Benchmark: A Multi-Faceted Evaluation for Text-to-Image Diffusion Model Unlearning
As text-to-image diffusion models gain widespread commercial applications, there are increasing concerns about unethical or harmful use, including the unauthorized generation of copyrighted or sensitive content. Concept unlearning has emerged as a promisi…