Does increasing code context generate better tests? #179

Open
EmbeddedDevops1 opened this issue Oct 8, 2024 · 0 comments
Labels
experiment: Experimentation needed
good first issue: Good for newcomers
prompt engineering: Involves rewording or restructuring the prompt handed to the LLM

Comments

@EmbeddedDevops1
Collaborator

As it currently stands, we include the source file under test and its existing unit tests as context. If we increased the context (for example, by also including the file's imports or the bodies of imported functions), would that generate better tests?

Note: This can be tested very easily using the --included-files option.
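For whoever picks this up, here is a minimal sketch of how the comparison could be run. The flags other than --included-files follow the usage shown in the project README; the file paths, test command, and coverage target below are placeholders for illustration, not taken from this issue.

    # Baseline: only the source file and its existing tests are given as context
    cover-agent \
      --source-file-path "app/parser.py" \
      --test-file-path "tests/test_parser.py" \
      --code-coverage-report-path "coverage.xml" \
      --test-command "pytest --cov=. --cov-report=xml" \
      --desired-coverage 80

    # Experiment: identical run, but with extra context passed via --included-files
    cover-agent \
      --source-file-path "app/parser.py" \
      --test-file-path "tests/test_parser.py" \
      --code-coverage-report-path "coverage.xml" \
      --test-command "pytest --cov=. --cov-report=xml" \
      --desired-coverage 80 \
      --included-files "app/utils.py" "app/models.py"

Comparing the coverage reached and the quality of the generated tests between the two runs should give a first signal on whether the extra context helps.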

EmbeddedDevops1 added the good first issue, prompt engineering, and experiment labels on Oct 8, 2024
Projects
Status: No status
Development

No branches or pull requests

1 participant