평범한 필기장

Paper : https://arxiv.org/abs/2411.00113
A Geometric Framework for Understanding Memorization in Generative Models
As deep generative models have progressed, recent work has shown them to be capable of memorizing and reproducing training datapoints when deployed. These findings call into question the usability of generative models, especially in light of the legal and ..
Abstract: This paper .. memori..

Paper : https://arxiv.org/abs/2406.03537
A Geometric View of Data Complexity: Efficient Local Intrinsic Dimension Estimation with Diffusion Models
High-dimensional data commonly lies on low-dimensional submanifolds, and estimating the local intrinsic dimension (LID) of a datum -- i.e. the dimension of the submanifold it belongs to -- is a longstanding problem. LID can be understood as the number ..

Paper : https://arxiv.org/abs/2412.07517
FireFlow: Fast Inversion of Rectified Flow for Image Semantic Editing
Though Rectified Flows (ReFlows) with distillation offers a promising way for fast sampling, its fast inversion transforms images back to structured noise for recovery and following editing remains unsolved. This paper introduces FireFlow, a simple yet eff..
Abstract: Rectified Flow..
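The excerpt above describes inversion as integrating an image back to structured noise along the learned flow. As a minimal sketch only, here is the generic backwards-Euler inversion loop for a rectified-flow ODE; FireFlow's actual solver is more elaborate, and the function names and the toy velocity field are illustrative, not from the paper's code:

```python
import numpy as np

def euler_invert(x1, velocity, n_steps=50):
    """Invert a flow ODE dx/dt = v(x, t) by integrating backwards
    from t = 1 (image) to t = 0 (structured noise) with plain
    Euler steps. This only illustrates the generic inversion loop."""
    dt = 1.0 / n_steps
    x = x1.copy()
    for i in range(n_steps, 0, -1):
        t = i * dt
        # step backwards in time along the velocity field
        x = x - dt * velocity(x, t)
    return x

# toy constant velocity field, for which Euler integration is exact:
# inverting from t=1 to t=0 subtracts a total drift of 1.0
v = lambda x, t: np.ones_like(x)
x1 = np.array([2.0, 3.0])
x0 = euler_invert(x1, v, n_steps=10)  # -> array([1., 2.])
```

With a neural velocity field the same loop recovers an approximate latent, which downstream editing methods then perturb and re-integrate forwards.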

Paper : https://arxiv.org/abs/2209.03003
Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow
We present rectified flow, a surprisingly simple approach to learning (neural) ordinary differential equation (ODE) models to transport between two empirically observed distributions π_0 and π_1, hence providing a unified solution to generative modeling ..
Abstract: This paper ..
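The core training target described in the excerpt can be sketched in a few lines: points are placed on the straight line between samples of π_0 and π_1, and the ODE model v(x_t, t) is regressed toward the constant displacement x_1 - x_0. This is a minimal NumPy sketch under that reading; the names and batch setup are illustrative, not the paper's code:

```python
import numpy as np

def rectified_flow_pair(x0, x1, t):
    """Interpolant x_t = (1 - t) x0 + t x1 on the straight line from
    x0 (a sample of pi_0) to x1 (a sample of pi_1), and the constant
    velocity target x1 - x0 that v(x_t, t) is trained to regress."""
    xt = (1.0 - t)[:, None] * x0 + t[:, None] * x1
    v_target = x1 - x0
    return xt, v_target

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 2))        # samples from pi_0 (e.g. noise)
x1 = rng.standard_normal((8, 2)) + 4.0  # samples from pi_1 (e.g. data)
t = rng.uniform(size=8)                 # random times in [0, 1]
xt, v_target = rectified_flow_pair(x0, x1, t)
# training would minimize ||v(xt, t) - v_target||^2 over such batches
```

Because the target velocity is constant along each line, the learned ODE trajectories become nearly straight, which is what allows few-step (even one-step) sampling after reflow.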

Paper : https://arxiv.org/abs/2404.04650
InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization
Recent strides in the development of diffusion models, exemplified by advancements such as Stable Diffusion, have underscored their remarkable prowess in generating visually compelling images. However, the imperative of achieving a seamless alignment betw..
Abstract: All ra..

Paper : https://arxiv.org/abs/2411.16738
Classifier-Free Guidance inside the Attraction Basin May Cause Memorization
Diffusion models are prone to exactly reproduce images from the training data. This exact reproduction of the training data is concerning as it can lead to copyright infringement and/or leakage of privacy-sensitive information. In this paper, we present a..
Abstract: Problem addressed: D..

Paper : https://arxiv.org/abs/2407.21720
Detecting, Explaining, and Mitigating Memorization in Diffusion Models
Recent breakthroughs in diffusion models have exhibited exceptional image-generation capabilities. However, studies show that some outputs are merely replications of training data. Such replications present potential legal challenges for model owners, espe..
Abstract: Problem: a few o.. of generative models

Project Page : https://rf-inversion.github.io/
RF-Inversion: Semantic Image Inversion and ..
Litu Rout, Yujia Chen, Nataniel Ruiz, Constantine Caramanis, Sanjay Shakkottai, Wen-Sheng Chu (The University of Texas at Austin; Google). ICLR 202..

Paper : https://arxiv.org/abs/2309.06380
InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation
Diffusion models have revolutionized text-to-image generation with its exceptional quality and creativity. However, its multi-step sampling process is known to be slow, often requiring tens of inference steps to obtain satisfactory results. Previous attem..

Paper : https://arxiv.org/abs/2210.02747
Flow Matching for Generative Modeling
We introduce a new paradigm for generative modeling built on Continuous Normalizing Flows (CNFs), allowing us to train CNFs at unprecedented scale. Specifically, we present the notion of Flow Matching (FM), a simulation-free approach for training CNFs base..
(See Prof. Minhyuk Sung's lecture materials: https://www.youtube.com/watch?v=B4F..)
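The "simulation-free" idea in the excerpt is that one regresses a velocity field onto a closed-form conditional target instead of simulating the CNF. The sketch below implements the optimal-transport Gaussian conditional path from the Flow Matching paper in NumPy; the sigma_min value and all names are illustrative assumptions, not the paper's code:

```python
import numpy as np

SIGMA_MIN = 1e-4  # small floor on the path's noise scale (illustrative value)

def ot_conditional_path(x0, x1, t):
    """Optimal-transport Gaussian conditional path:
    x_t = (1 - (1 - sigma_min) t) x0 + t x1, with conditional
    velocity target u_t(x_t | x1) = x1 - (1 - sigma_min) x0."""
    a = 1.0 - (1.0 - SIGMA_MIN) * t
    xt = a[:, None] * x0 + t[:, None] * x1
    ut = x1 - (1.0 - SIGMA_MIN) * x0
    return xt, ut

rng = np.random.default_rng(1)
x0 = rng.standard_normal((8, 2))  # noise samples
x1 = rng.standard_normal((8, 2))  # data samples
t = rng.uniform(size=8)           # random times in [0, 1]
xt, ut = ot_conditional_path(x0, x1, t)
# the CFM loss regresses a network v(xt, t) onto ut -- no ODE simulation
# is needed during training, which is what enables scaling up CNFs
```

Note how this generalizes the rectified-flow interpolant: as sigma_min goes to 0 the path collapses to the straight line between x0 and x1.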