Archive: 2024/06 (6 posts)
평범한 필기장
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/vo0fs/btsIfGRUOxe/AaZ5PjIYRTfIlfr9Dq8zAk/img.png)
https://arxiv.org/abs/2209.14988
DreamFusion: Text-to-3D using 2D Diffusion
"Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs. Adapting this approach to 3D synthesis would require large-scale datasets of labeled 3D data and efficient architectures for denoi..." (arxiv.org)
1. Introduction: Diffusion models have been applied across various other modalities, ...
It has been about three months since I completed Naver Boostcamp... This is not quite the picture I imagined before starting, haha. The plan I made after taking a year off from school was to grow tremendously through Boostcamp and then do an internship before returning, but as always, life does not go according to plan. Perhaps it is because I did not work as hard as I resolved to when I started. Still, even if the growth was not quite that much, I certainly did grow in my own way over those 5-6 months, gained a lot of experience, and met many people, so it is true that it was an unforgettable experience. There are a few people around me who want to do Naver Boostcamp AI Tech, and I recommend it x100. Anyway, I meant to write a completion review post after finishing, but thanks to my laziness...
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/dqQ2O1/btsH4IoGJcr/tgvbRQ3Ygy4CkeXLlpXiB1/img.png)
https://arxiv.org/abs/2303.12789
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
"We propose a method for editing NeRF scenes with text-instructions. Given a NeRF of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images whi..." (arxiv.org)
1. Introduction: 3D reconstruction techniques such as NeRF...
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/bO8vTP/btsHVyN1Lu3/VNEmR0r8kkQ1evJvz3x7Y0/img.png)
https://arxiv.org/abs/2303.11989
Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models
"We present Text2Room, a method for generating room-scale textured 3D meshes from a given text prompt as input. To this end, we leverage pre-trained 2D text-to-image models to synthesize a sequence of images from different poses. In order to lift these outp..." (arxiv.org)
Summary. Task addressed: 2D Text-to-Image m...
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/b1Iw9N/btsHRNkWQdN/PVHkNMJvuldk7U1x0uOWxk/img.png)
https://arxiv.org/abs/2308.04079
3D Gaussian Splatting for Real-Time Radiance Field Rendering
"Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster metho..." (arxiv.org)
1. Introduction: NeRF-based methods, to achieve high quali...
![](http://i1.daumcdn.net/thumb/C150x150/?fname=https://blog.kakaocdn.net/dn/cwq4CD/btsHNCKvNVJ/ys07tvkpW3mAFRfvTGiD61/img.png)
https://arxiv.org/abs/2003.08934
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
"We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-con..." (arxiv.org)
Before starting my lab internship, references on the research topic...