Posts in category: video generation (3)

평범한 필기장
Paper: https://arxiv.org/abs/2408.06070
ControlNeXt: Powerful and Efficient Control for Image and Video Generation
"Diffusion models have demonstrated remarkable and robust abilities in both image and video generation. To achieve greater control over generated results, researchers introduce additional architectures, such as ControlNet, Adapters and ReferenceNet, to inte…"
Github: https://g..
Paper: https://arxiv.org/abs/2403.07420
DragAnything: Motion Control for Anything using Entity Representation
"We introduce DragAnything, which utilizes a entity representation to achieve motion control for any object in controllable video generation. Comparison to existing motion control methods, DragAnything offers several advantages. Firstly, trajectory-based is…"
Project Page: https://..
Paper: https://arxiv.org/abs/2311.17338
MagDiff: Multi-Alignment Diffusion for High-Fidelity Video Generation and Editing
"The diffusion model is widely leveraged for either video generation or video editing. As each field has its task-specific problems, it is difficult to merely develop a single diffusion for completing both tasks simultaneously. Video diffusion sorely relyin…"
1. Introduc..