Is self-reflection necessary to generate good tests? #180
Labels
experiment
Experimentation needed
prompt engineering
Involves rewording or restructuring the prompt handed to the LLM
Should there be an intermediate step where the LLM reflects on the code and first writes the tests in natural language? Once it has written the natural-language test, it can then translate that into a real test.
It seems like o1 from OpenAI is doing some sort of reflection/introspection before giving a response. Maybe something similar is needed for generating unit tests, or perhaps it is overkill?
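For concreteness, here is a minimal sketch of what that two-step flow could look like. The OpenAI chat client, model name, `call_llm` helper, and prompt wording below are all placeholder assumptions for illustration, not an existing part of this repo:

```python
# Minimal sketch of the proposed two-step flow (reflect in natural language,
# then translate into a real test). The client and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def call_llm(prompt: str) -> str:
    # Thin wrapper around a single chat completion; swap in whatever model
    # client the project actually uses.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def generate_test(source_code: str) -> str:
    # Step 1: reflection -- describe the needed tests in natural language.
    plan = call_llm(
        "Read the following code and describe, in plain English, the unit "
        "tests it needs (inputs, expected outputs, edge cases):\n\n" + source_code
    )
    # Step 2: translation -- turn the natural-language plan into a runnable test.
    return call_llm(
        "Translate this test description into a runnable pytest test for the "
        "code below. Return only code.\n\nDescription:\n" + plan
        + "\n\nCode:\n" + source_code
    )
```

An experiment could compare tests generated with and without the reflection step to see whether the extra call is worth it.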