# Category: video generation (3 posts)

평범한 필기장
Paper: https://arxiv.org/abs/2408.06070
**ControlNeXt: Powerful and Efficient Control for Image and Video Generation**
"Diffusion models have demonstrated remarkable and robust abilities in both image and video generation. To achieve greater control over generated results, researchers introduce additional architectures, such as ControlNet, Adapters and ReferenceNet, to inte…"
GitHub: https://g..
Paper: https://arxiv.org/abs/2403.07420
**DragAnything: Motion Control for Anything using Entity Representation**
"We introduce DragAnything, which utilizes an entity representation to achieve motion control for any object in controllable video generation. Compared to existing motion control methods, DragAnything offers several advantages. Firstly, trajectory-based is…"
Project Page: https://..
Paper: https://arxiv.org/abs/2311.17338
**MagDiff: Multi-Alignment Diffusion for High-Fidelity Video Generation and Editing**
"The diffusion model is widely leveraged for either video generation or video editing. As each field has its task-specific problems, it is difficult to merely develop a single diffusion for completing both tasks simultaneously. Video diffusion sorely relyin…"
1. Introduc..