Text-Based Image Editing

This project explores the intersection of Natural Language Processing (NLP) and Computer Vision through advanced text-based image editing techniques. By leveraging three powerful models—Grounding DINO, Segment Anything Model (SAM), and Stable Diffusion—we aim to create an intuitive system that allows users to modify images based on textual input.

  • Grounding DINO: Facilitates object detection and localization within images based on text prompts, allowing for precise object targeting.

  • Segment Anything Model (SAM): Provides robust segmentation capabilities to isolate specific objects or regions within an image for editing.

  • Stable Diffusion: Handles image generation and enhancement, enabling creative transformations based on text instructions.
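An edit runs these three stages in sequence: Grounding DINO turns the text query into a bounding box, SAM turns that box into a pixel-accurate mask, and Stable Diffusion inpaints the masked region according to the replacement prompt. The sketch below shows one way to chain them using the Hugging Face transformers and diffusers libraries; the checkpoints, thresholds, file names, and prompts are illustrative assumptions, not necessarily what this repository's code uses.

```python
# A minimal sketch of the detect -> segment -> inpaint pipeline, assuming the
# Hugging Face `transformers` and `diffusers` implementations of the three
# models; the repository's own code may wire them together differently.
import torch
from PIL import Image
from transformers import (
    AutoModelForZeroShotObjectDetection,
    AutoProcessor,
    SamModel,
    SamProcessor,
)
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

image = Image.open("input.jpg").convert("RGB")   # hypothetical input file
target = "a lamb."       # object to find (Grounding DINO expects lower-case, "."-terminated phrases)
replacement = "a wolf"   # what to paint in its place

# 1. Grounding DINO: locate the object named in the text prompt.
dino_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
dino_model = AutoModelForZeroShotObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny"
).to(device)
dino_inputs = dino_processor(images=image, text=target, return_tensors="pt").to(device)
with torch.no_grad():
    dino_outputs = dino_model(**dino_inputs)
detections = dino_processor.post_process_grounded_object_detection(
    dino_outputs,
    dino_inputs.input_ids,
    box_threshold=0.4,
    text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)
box = detections[0]["boxes"][0].tolist()  # [x0, y0, x1, y1] of the first match above the thresholds

# 2. SAM: refine the bounding box into a segmentation mask.
sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam_model = SamModel.from_pretrained("facebook/sam-vit-base").to(device)
sam_inputs = sam_processor(image, input_boxes=[[box]], return_tensors="pt").to(device)
with torch.no_grad():
    sam_outputs = sam_model(**sam_inputs)
masks = sam_processor.image_processor.post_process_masks(
    sam_outputs.pred_masks.cpu(),
    sam_inputs["original_sizes"].cpu(),
    sam_inputs["reshaped_input_sizes"].cpu(),
)
# Take one of SAM's mask proposals for the detected box and convert it to a PIL image.
mask = Image.fromarray(masks[0][0][0].numpy().astype("uint8") * 255)

# 3. Stable Diffusion inpainting: repaint the masked region from the edit prompt.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
size = (512, 512)  # SD inpainting checkpoints are trained at 512x512
edited = inpaint(
    prompt=replacement,
    image=image.resize(size),
    mask_image=mask.resize(size),
).images[0]
edited.save("output.jpg")
```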

Prerequisites

Before you begin, ensure you have the following installed on your system:

  • Docker

  • NVIDIA Container Toolkit and a CUDA-capable NVIDIA GPU with up-to-date drivers (needed to run containers with --gpus all)

Getting Started

1. Build the Docker Image

docker build -t image_editing .

2. Run the Docker Container with GPU Support

docker run --gpus all -p 7860:7860 image_editing
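Once the container is up, the editing interface should be reachable at http://localhost:7860 in your browser (7860 is the port published above and the default port for Gradio apps; the exact UI depends on what the image launches).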

Demo

  • Replacing a lamb with a wolf.
