List: diffusion models (18)
평범한 필기장
Paper : https://arxiv.org/abs/2412.08629 (FlowEdit: Inversion-Free Text-Based Editing Using Pre-Trained Flow Models)
Abstract: Existing T2I mod..
Paper : https://arxiv.org/abs/2504.12782 (Set You Straight: Auto-Steering Denoising Trajectories to Sidestep Unwanted Concepts)
Abstract: ..
Paper : https://arxiv.org/abs/2503.19783 (Fine-Grained Erasure in Text-to-Image Diffusion-based Foundation Models)
Abstract: Existing concept e..
Paper : https://arxiv.org/abs/2305.10120 (Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models)
Abst..
Paper : https://arxiv.org/abs/2410.17594 (How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization?)
Abstract: ..
Paper : https://arxiv.org/abs/2505.11131 (One Image is Worth a Thousand Words: A Usability Preservable Text-Image Collaborative Erasing Framework)
Paper : https://arxiv.org/abs/2410.12557 (One Step Diffusion via Shortcut Models)
Abstract: This paper proposes the shortcut model, where a single network..
Paper : https://arxiv.org/abs/2411.00113 (A Geometric Framework for Understanding Memorization in Generative Models)
Abstract: This paper studies memori..
Paper : https://arxiv.org/abs/2406.03537 (A Geometric View of Data Complexity: Efficient Local Intrinsic Dimension Estimation with Diffusion Models)
Paper : https://arxiv.org/abs/2404.04650 (InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization)
Abstract: All ra..