Prompt Studio's primary reason for existence is to let you develop the prompts needed for document data extraction with maximum efficiency. It is a purpose-built environment that makes this not just easy for you, but a lot of fun! The document sample, its variants, the prompts you're developing, outputs from different LLMs, the schema you're building, the extraction's cost details, and various tools that let you measure the effectiveness of your prompts are all just a click away. Prompt Studio is designed for effective, high-speed development and iteration of prompts for document data extraction. Welcome to IDP 2.0!
Automate critical business processes that involve complex documents with a human in the loop. Go beyond RPA with the power of Large Language Models.
- Step 1: Add documents to no-code Prompt Studio and do prompt engineering to extract the required fields
- Step 2: Configure the Prompt Studio project as an API deployment, or configure an input source and output destination for an ETL Pipeline
- Step 3: Deploy workflows as unstructured data APIs or unstructured data ETL Pipelines!
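Once a workflow is deployed as an API, you call it over HTTP with your document attached. The sketch below is illustrative only: the endpoint path, organization and API names, and auth header shape are assumptions, so copy the exact URL and API key from your deployment's details page in Unstract.

```python
# Hedged sketch of calling a deployed Unstract API.
# ORG_ID, API_NAME, and the URL shape are hypothetical placeholders.
BASE_URL = "http://frontend.unstract.localhost"
ORG_ID = "my_org"                # hypothetical
API_NAME = "invoice_extractor"   # hypothetical
API_KEY = "replace-with-your-deployment-key"

url = f"{BASE_URL}/deployment/api/{ORG_ID}/{API_NAME}/"
headers = {"Authorization": f"Bearer {API_KEY}"}

# Send the document as multipart form data, e.g. with the `requests` library:
#   requests.post(url, headers=headers,
#                 files={"files": open("invoice.pdf", "rb")})
print(url)
```

The structured extraction result comes back as JSON in the response body.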
- 8GB RAM (recommended)
- Linux or MacOS (Intel or M-series)
- Docker
- Docker Compose (if you need to install it separately)
- Git
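Before running the platform, it can save a failed start to confirm the required CLIs are on your PATH. A minimal sketch (Docker Compose v2 ships as a Docker plugin, so only the `docker` and `git` binaries are checked here):

```python
import shutil

def prerequisites_present() -> dict:
    """Report which required command-line tools are found on PATH."""
    return {tool: shutil.which(tool) is not None for tool in ("docker", "git")}

status = prerequisites_present()
print(status)  # e.g. {'docker': True, 'git': True}
```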
Next, either download a release or clone this repo and do the following:
```bash
./run-platform.sh
```
Now visit http://frontend.unstract.localhost in your browser.

Use the username and password `unstract` to log in.
That's all there is to it!
See the user guide for more details on managing the platform.
Another really quick way to experience Unstract is by signing up for our hosted version.
Unstract comes well documented. You can get introduced to the basics of Unstract and learn how to connect various systems like LLMs, vector databases, embedding models, and text extractors to it. The easiest way to get your feet wet is to go through our Quick Start Guide, where you actually get to do some prompt engineering in Prompt Studio and launch an API to structure varied credit card statements!
| Vector Database | Status |
|---|---|
| Qdrant | ✅ Working |
| Weaviate | ✅ Working |
| Pinecone | ✅ Working |
| PostgreSQL | ✅ Working |
| Milvus | ✅ Working |
| LLM Provider | Status |
|---|---|
| OpenAI | ✅ Working |
| Azure OpenAI | ✅ Working |
| Google PaLM | ✅ Working |
| Ollama | ✅ Working |
| Text Extractor | Status |
|---|---|
| Unstract LLMWhisperer | ✅ Working |
| Unstructured.io Community | ✅ Working |
| Unstructured.io Enterprise | ✅ Working |
| LlamaIndex Parse | ✅ Working |
| Database / Data Warehouse | Status |
|---|---|
| Snowflake | ✅ Working |
| Amazon Redshift | ✅ Working |
| Google BigQuery | ✅ Working |
| PostgreSQL | ✅ Working |
| MySQL | ✅ Working |
| MariaDB | ✅ Working |
| Microsoft SQL Server | ✅ Working |
Contributions are welcome! Please see CONTRIBUTING.md for details on how to get started.
- On Slack, join great conversations around LLMs, their ecosystem and leveraging them to automate the previously unautomatable!
- Follow us on X/Twitter
- Follow us on LinkedIn
Do copy the value of the `ENCRYPTION_KEY` config in either the `backend/.env` or `platform-service/.env` file to a secure location. Adapter credentials are encrypted by the platform using this key. Its loss or change will make all existing adapters inaccessible!
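One way to back the key up is to pull its line out of the `.env` file programmatically. A minimal sketch, using sample file contents for illustration; in practice, read `backend/.env` or `platform-service/.env`:

```python
def find_encryption_key(env_text: str):
    """Return the ENCRYPTION_KEY value from .env-style text, or None."""
    for line in env_text.splitlines():
        if line.startswith("ENCRYPTION_KEY="):
            return line.split("=", 1)[1].strip().strip('"')
    return None

# Illustrative sample; replace with pathlib.Path("backend/.env").read_text()
sample = 'DEBUG=False\nENCRYPTION_KEY="s3cr3t-key"\n'
key = find_encryption_key(sample)
print(key)  # s3cr3t-key
```

Store the recovered value in a secrets manager or another location that survives the loss of the deployment host.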
In full disclosure, Unstract integrates Posthog to track usage analytics. As you can verify by inspecting the relevant code, we collect the minimum possible metrics. Posthog can be disabled, if desired, by setting `REACT_APP_ENABLE_POSTHOG` to `false` in the frontend's `.env` file.
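For example, with Posthog disabled, the frontend's `.env` would contain a line like:

```
# frontend/.env
REACT_APP_ENABLE_POSTHOG=false
```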