Improvement: Increasing Token Usage Efficiency (In Progress) #678
Comments
Check out our latest X thread outlining some of the upcoming dev priorities related to error fixing and token efficiency, specifically breaking error loops. All of these are landing over this week and next; ETAs for each are included in the thread!
📣 Major Update on Automated Bug Fixing: We have shipped a big improvement to the automatic debugging "Fix" feature in Bolt. Due to this improvement, you should now see a higher percentage of automated fix attempts complete successfully! Note: There will always be bugs that require manual intervention until AI models improve significantly from here, but this should be a noticeable improvement today!
Hello! How can I connect two separate WebContainers running frontend and backend services on StackBlitz? As a follow-up to my previous question: if I run the frontend and backend in separate WebContainers, how can I configure CORS and proxy settings to allow communication between them?
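Not an official answer, but a minimal sketch of the kind of configuration being asked about, assuming an Express backend using the cors middleware; the frontend origin and port below are hypothetical placeholders, not values StackBlitz guarantees.

```ts
// backend/server.ts - a sketch, not Bolt/StackBlitz-specific code.
import express from "express";
import cors from "cors";

const app = express();

// Allow cross-origin requests from the frontend container only.
// Replace the placeholder with the URL your frontend WebContainer is served from.
app.use(cors({ origin: "https://your-frontend.example.com" }));

app.get("/api/health", (_req, res) => {
  res.json({ status: "ok" });
});

app.listen(3000, () => console.log("Backend listening on port 3000"));
```

Alternatively, if the frontend runs on Vite, its built-in server.proxy option in vite.config.ts can forward /api requests to the backend's URL, which avoids CORS preflight issues entirely.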
Is there any update on this? I'm blocked on a project; I've already deleted a bunch of files to keep only what's most critical for the MVP, but I can't reduce the prompt or my app's feature set any further and still continue using Bolt. That said, I'm super invested in what you are doing, and I wanted to take the time to say thank you for creating this incredible leap in development speed!
I created a collapsible menu plus a few troubleshooting fixes, and it cost me 500,000 tokens for that simple feature! You should review how Bolt consumes tokens.
Background
Large language models (LLMs) process text as tokens: frequent character sequences within text and code. Under the hood, Bolt.new is powered mostly by Anthropic's Sonnet 3.5 AI model, so using Bolt consumes tokens that we must purchase from Anthropic.
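As a concrete illustration (our own sketch, not how Bolt or Anthropic actually count), a common rule of thumb is roughly four characters per token for English text and code; real tokenizers split text into learned subword units and will give different counts.

```ts
// Rough heuristic only: ~4 characters per token is a common rule of thumb.
// Anthropic's actual tokenizer uses learned subword units and will differ.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const prompt = "Add a responsive navbar to the homepage.";
console.log(estimateTokens(prompt)); // ~10 tokens for this 40-character prompt
```

This is why long chats and large codebases add up quickly: every message and every file the model reads contributes its token count to the total.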
Our goal is for Bolt to use as few tokens as possible to accomplish each task, and here's why: 1) AI model tokens are one of our largest expenses, so using fewer saves us money; 2) users can get more done with Bolt and become fans and advocates; and 3) ultimately, we can attract more users and continue investing in improving the platform!
When users interact with Bolt, tokens are consumed in 3 primary ways: chat messages between the user and the LLM, the LLM writing code, and the LLM reading the existing code to capture any changes made by the user.
There are numerous product changes that we are working on to increase token usage efficiency, and in the meantime there are many tips and tricks you can implement in your workflow to be more token-efficient:
Upcoming Improvements
Optimizing token usage is a high priority for our team, and we are actively exploring several R&D initiatives aimed at improving token efficiency automatically behind the scenes. In the meantime, we have been shipping features that improve the user experience in the near term, including controlling which files the AI is able to modify via locking and targeting (shipped) and improving the automated debugging feature (shipped). These improvements, paired with the tips below, should help you manage your tokens more efficiently. Subscribe to this issue to be notified when new features land.
While we work on these improvements, here are some strategies you can use to maximize token usage efficiency today:
Avoid Repeated Automated Error "Fix" Attempts
Repeatedly clicking the automatic "fix" button can lead to unnecessary token consumption. After each attempt, review the result and refine your next request if needed. Some programming challenges cannot be solved automatically by the AI, so if automated fixes keep failing, it's a good idea to do some research and intervene manually.
Leverage the Rollback Functionality
Use the rollback feature to revert your project to a previous state without consuming tokens. This is essentially an undo button that can take you back to any prior state of your project, which can save time and tokens if something goes wrong. Keep in mind that there is no "redo" function, so be sure you want to revert before using this feature: it is final, and all changes made after the rollback point will be permanently removed.
Crawl, Walk, Run
Make sure the basics of your app are scaffolded before describing the details of more advanced functionality.
Use Specific and Focused Prompts
When prompting the AI, be clear and specific. Direct the model to focus on certain files or functions rather than the entire codebase, which can improve token usage efficiency. This approach is not a magic fix, but anecdotally we've seen evidence that it helps. An illustrative example is shown below, and many more prompting strategies that other users have reported as helpful can be found in the comment thread below:
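For illustration only (the file and function names here are hypothetical, not taken from the thread), a focused prompt might look like this:

```
In src/components/Navbar.tsx, fix the mobile menu toggle in handleMenuClick.
Do not modify any other files, and do not regenerate unrelated components.
```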
Understand Project Size Impact
As your project grows, more tokens are required to keep the AI in sync with your code. Larger projects (and longer chat conversations) demand more resources for the AI to stay aware of the context, so it's important to be mindful of how project size impacts token usage.
Advanced Strategy: Reset the AI Context Window
If the AI seems stuck or unresponsive to commands, consider refreshing the Bolt.new chat page in your browser. This resets the LLM's context window, clears out prior chat messages, and reloads your code in a fresh chat session. Since this clears the chat, you will need to remind the AI of any context not already captured in the code, but it can help the AI regain focus when the context window is full.
We appreciate your patience during this beta period and look forward to updating this thread as we ship new functionality and improvements to increase token usage efficiency!