ChatGPT-Plus is an application utilizing the official ChatGPT API.
Demo / Report Issues / Development / Deploy with Vercel
Do you like this project? Please give it a Star ⭐️
or share it with your friends to help improve it!
- Introduction
- Features
- Principle
- Online Development
- Installation and Operation
- Package Deployment
- Additional Information
- FAQ
- Contribute
- Thanks
- Sponsorship
- License
The ChatGPT-Plus client is an application built on OpenAI's official ChatGPT API.
- 📦A complete ChatGPT client.
- 🚀Built with Next.js & Nest.js, fast to get started.
- 📱Responsive design, supports mobile access.
- 🌈Supports multiple themes, light/dark modes.
- 🌍Internationalization support. Chinese and English are supported.
- 📦Supports custom prompts; recommended prompts can be browsed online.
- 🎨Uses CSS-in-JS technology, supports theme customization.
- 📦Supports Docker & Vercel deployment.
Two methods are provided for accessing the ChatGPT API. To use this module in Node.js, choose one of them:
| Method | Free? | Robust? | Quality? |
| --- | --- | --- | --- |
| `ChatGPTAPI` | ❌ No | ✅ Yes | ✅️ Real ChatGPT model |
| `ChatGPTUnofficialProxyAPI` | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT |
- `ChatGPTAPI` - Uses the `gpt-3.5-turbo-0301` model with the official OpenAI ChatGPT API (official and powerful, but not free). You can override the model, completion parameters, and system messages to fully customize your assistant.
- `ChatGPTUnofficialProxyAPI` - Uses an unofficial proxy server to access ChatGPT's backend API by bypassing Cloudflare (lightweight compared to `ChatGPTAPI`, but it relies on third-party servers and is rate-limited).
These two methods have very similar APIs, so switching between them should be straightforward.
Note: We strongly recommend using `ChatGPTAPI` because it uses the officially supported OpenAI API. We may stop supporting `ChatGPTUnofficialProxyAPI` in future releases.
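Since the two methods expose very similar APIs, switching between them mostly comes down to which credential is configured. The helper below is an illustrative sketch, not project code: the `pickApiMode` function, the `Credentials` shape, and the mapping to the `OPENAI_API_KEY` / `OPENAI_ACCESS_TOKEN` variables are assumptions for demonstration.

```typescript
// Which client from the chatgpt-api library to instantiate.
type ApiMode = 'ChatGPTAPI' | 'ChatGPTUnofficialProxyAPI';

interface Credentials {
  apiKey?: string;      // from OPENAI_API_KEY (official API)
  accessToken?: string; // from OPENAI_ACCESS_TOKEN (unofficial proxy)
}

// Prefer the official API when an API key is present, as recommended
// above; fall back to the proxy when only an access token is set.
function pickApiMode(creds: Credentials): ApiMode {
  if (creds.apiKey) return 'ChatGPTAPI';
  if (creds.accessToken) return 'ChatGPTUnofficialProxyAPI';
  throw new Error('Set OPENAI_API_KEY or OPENAI_ACCESS_TOKEN in service/.env');
}

console.log(pickApiMode({ apiKey: 'sk-...' }));        // ChatGPTAPI
console.log(pickApiMode({ accessToken: 'eyJ...' }));   // ChatGPTUnofficialProxyAPI
```

Keeping the selection in one place like this makes it easy to drop the proxy path entirely if its support is removed.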
Under the hood, requests are made through the functionality provided by the chatgpt-api module.
You can use Gitpod for online development:
Alternatively, clone the project for local development and follow the steps below:
# clone the project
git clone https://github.com/zhpd/chatgpt-plus.git
If you do not have a git environment, you can download the zip package directly, unzip it, and enter the project directory.
This project is developed with Node.js and requires a Node.js 14.0+ environment. Make sure you're using `node >= 18` so that `fetch` is available (or `node >= 14` if you install a fetch polyfill).
The project uses the API officially provided by OpenAI and requires an API key or access token.

- OpenAI official registration address: https://platform.openai.com/ (access may require a VPN in some regions)
- Obtain an `ApiKey` or `AccessToken` through other methods: Click Here to View

After a successful application, fill in the `APIKey` and `AccessToken` in the `chatgpt-plus/service/.env` file.
It is recommended to use the VSCode editor for development: install the `ESLint` and `Prettier` plugins, and enable `Format On Save` in the settings.
Configure the port and API request address in the `.env` file in the root directory. You can copy the `.env.example` file in the root directory and modify it directly (rename the file to `.env`).
| Environment Variable | Default Value | Description |
| --- | --- | --- |
| `PORT` | `3000` | The port number |
| `NEXT_PUBLIC_API_URL` | `http://localhost:3002` | The API endpoint URL |
Configuration File
Modify the existing `.env.example` in the root directory directly and change the file name to `.env`.
# port
PORT=3000
# api url
NEXT_PUBLIC_API_URL=http://localhost:3002
# enter the project directory
cd chatgpt-plus
# install dependency
npm install
# develop
npm run dev
After running successfully, you can access it at `http://localhost:3000`.
Configure the port and API key / access token in the `.env` file under the `service` folder.
| Environment Variable | Default Value | Description |
| --- | --- | --- |
| `PORT` | `3002` | The port number |
| `OPENAI_API_KEY` | - | OpenAI API key |
| `OPENAI_ACCESS_TOKEN` | - | OpenAI access token |
| `API_REVERSE_PROXY` | `https://api.pawan.krd/backend-api/conversation` | Reverse proxy URL |
| `TIMEOUT_MS` | `60000` | Timeout in milliseconds |
Configuration File
Modify the existing `.env.example` in the `service` directory directly and change the file name to `.env`.
# service/.env
# OpenAI API Key - https://platform.openai.com/overview
OPENAI_API_KEY=
# change this to an `accessToken` extracted from the ChatGPT site's `https://chat.openai.com/api/auth/session` response.
OPENAI_ACCESS_TOKEN=
# Reverse Proxy default 'https://bypass.churchless.tech/api/conversation'
API_REVERSE_PROXY=
# timeout
TIMEOUT_MS=100000
# enter the project directory
cd chatgpt-plus
# enter the service directory
cd service
# install dependency
npm install
# develop
npm run dev
Once it starts successfully, the backend service is up and running.
A Docker environment is required for deployment with Docker. Use the configuration files in the `docker-compose` folder to pull and run the images.
Deploy with a single click using Vercel.
- Code packaging
  - Enter the root folder of the project
  - Modify `API_URL` in the `.env` file in the root directory to the public network address of your backend API
  - Run `npm install` to install the dependencies
  - Run `npm run build` to package the code
- Running and deployment
  - Copy the files in the `dist` folder to the `Front-end Service` directory on your server
  - Enter the `dist` folder
  - Run `npm run start` to start the service
- Code packaging
  - Enter the `service` folder
  - Run `npm install` to install the dependencies
  - Run `npm run build` to package the code
- Running and deployment
  - Copy the files in the `service/dist` folder to the `Backend Service` directory on your server
  - Enter the `service/dist` folder
  - Run `npm run start` to start the service
Note: If you do not want to package the code, you can copy the `service` folder to the server directly and run `npm install` and `npm run start` there to start the service.
Configuration File
You can enable it by setting the `OPENAI_API_KEY` key in the env file for the backend service:

# OpenAI API Key
OPENAI_API_KEY=
This project uses the OpenAI API provided by the official website, so you first need to apply for an OpenAI account.
- OpenAI Official Account Registration Address: https://platform.openai.com/
- After successful registration, obtain the API key through https://platform.openai.com/account/api-keys.
Configuration File
You can enable it by setting the `OPENAI_ACCESS_TOKEN` key in the env file for the backend service:

# change this to an `accessToken` extracted from the ChatGPT site's session response
OPENAI_ACCESS_TOKEN=
You need to get an OpenAI access token from the ChatGPT web application. You can use either of the following methods, which take an `email` and `password` and return an access token:

- Node.js Libraries
- Python Libraries
Note that these libraries only work with accounts authenticated via email and password (for example, they do not support accounts authenticated through Microsoft or Google).
In addition, you can obtain an `accessToken` manually by logging in to the ChatGPT web application and opening https://chat.openai.com/api/auth/session, which returns a JSON object containing your `accessToken` string.
The access token expires after several days.
Note: Using a reverse proxy exposes your access token to a third party. This should have no adverse effects, but please consider the risks before using this method.
Configuration File
You can override the reverse proxy by setting the `API_REVERSE_PROXY` key in the env file for the backend service:

# Reverse Proxy
API_REVERSE_PROXY=
Known reverse proxies run by community members include:
| Reverse Proxy URL | Author | Throttle rate | Last Checked |
| --- | --- | --- | --- |
| `https://bypass.churchless.tech/api/conversation` | @acheong08 | 5 req / 10 seconds (by IP) | 3/24/2023 |
| `https://api.pawan.krd/backend-api/conversation` | @PawanOsman | 50 req / 15 seconds (~3 r/s) | 3/23/2023 |
Note: Details of how these reverse proxies work are not disclosed, to prevent OpenAI from disabling access.
Q: If I only use the front-end page, where can I change the request interface?
A: Modify the `API_URL` field in the `.env` file in the root directory.
Q: Why is there no typewriter effect on the front-end?
A: One possible cause is an Nginx reverse proxy with buffering enabled: Nginx will try to buffer a certain amount of data from the backend before sending it to the browser. Try adding `proxy_buffering off;` after the reverse proxy parameters, then reload Nginx. Other web servers have similar configuration options.
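As a sketch, assuming a typical Nginx site config that proxies requests to the backend service on port 3002 (the path and upstream address are illustrative, not taken from this project), the relevant block might look like:

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:3002;   # illustrative backend address
    proxy_http_version 1.1;

    # Disable response buffering so streamed tokens reach the browser
    # immediately, restoring the typewriter effect.
    proxy_buffering off;
    proxy_cache off;
}
```

After editing, validate and reload with `nginx -t && nginx -s reload`.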
Thanks to all the contributors who have contributed to this project!
- Many thanks to the supporters and all other contributors to the project! 💪
- Special thanks to the original reference projects created by @transitive-bullshit chatgpt-api and @Chanzhaoyu chatgpt-web for providing ideas.👍
- Many thanks to OpenAI for creating ChatGPT 🔥
If you find this project helpful, please give it a Star ⭐️ or share it with your friends. Your support is my greatest motivation!
MIT © zhpd