[NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
PyTorch implementation of Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors
[CVPR 2023] RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors
PyTorch code for "Real-World Blind Super-Resolution via Feature Matching with Implicit High-Resolution Priors", ACM MM 2022 (Oral)
Official implementation of Multi-Stage Multi-Codebook (MSMC) TTS
Feed-forward VQGAN-CLIP model that eliminates the need to optimize the VQGAN latent space for each input prompt
SimVQ: Addressing Representation Collapse in Vector Quantized Models with One Linear Layer
Implements VQGAN+CLIP for image and video generation and style transfer from text and image prompts, with an emphasis on ease of use, documentation, and smooth video creation.
Zero-Shot Text-to-Image Generation VQGAN+CLIP Dockerized
OCR-VQGAN, a discrete image encoder (tokenizer and detokenizer) for figure images in the Paper2Fig100k dataset. Implements an OCR perceptual loss for clear text-within-image generation. Forked from VQGAN in CompVis/taming-transformers
Implementation of Binary Latent Diffusion
NTIRE 2022 - Image Inpainting Challenge
VQ-VAE/GAN implementation in pytorch-lightning
Streamlit tutorial (e.g., stock price dashboard, cartoon-stylegan, vqgan-clip, stylemixing, styleclip, sefa)
Fast and controllable text-to-image model.
[ICLR 2024] DAEFR: Dual Associated Encoder for Face Restoration
Traditional DeepDream with VQGAN+CLIP and optical flow. Ready to use in Google Colab.
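Several of the entries above (the codebook-lookup face restoration work, the high-quality codebook priors for dehazing, SimVQ, and the VQGAN variants) share the same core operation: quantizing continuous encoder features by nearest-neighbor lookup in a learned codebook. The snippet below is a minimal, generic PyTorch sketch of that lookup step, not code from any of the listed projects; the function name `codebook_lookup` and the toy shapes are illustrative assumptions.

```python
import torch

def codebook_lookup(features: torch.Tensor, codebook: torch.Tensor):
    """Quantize features by nearest-neighbor lookup in a learned codebook.

    features: (N, D) continuous encoder outputs
    codebook: (K, D) learned code vectors
    Returns the quantized vectors (N, D) and their code indices (N,).
    """
    # Pairwise Euclidean distances between each feature and each code vector.
    distances = torch.cdist(features, codebook)   # (N, K)
    indices = distances.argmin(dim=1)             # nearest code per feature
    quantized = codebook[indices]                 # (N, D) table lookup
    return quantized, indices

# Toy usage: 4 feature vectors, a codebook of 8 entries, dimension 16.
feats = torch.randn(4, 16)
codes = torch.randn(8, 16)
quantized, idx = codebook_lookup(feats, codes)
print(quantized.shape, idx.tolist())
```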