[Enhancement] Enhance ML inference processor for input processing #3193
Comments
The highest-level requirement is that a user should never be forced/constrained to create a custom model interface to support a specific search pipeline or use case. The user should be able to use the natural interface for a model (e.g. Bedrock Claude V3's natural interface is the messaging or converse API) for all possible pipelines and use cases. This allows the user to share the model across any use case or pipeline. If you don't do this, there is a scalability issue from an operational and usability (and likely performance) standpoint, because for N unique pipelines the user has to manage N models with unique interfaces. A proper design should only require a single model with a common (natural) interface to support N unique pipelines.

The current design does not satisfy these requirements. It's evident in RAG use cases. This is due to problems with the coupling of functionality and the order of operations of how fields are mapped and processed into an LLM prompt. A simple design that decouples preprocessing and post-processing logic from the core processor functionality should suffice. The processing flow can simply be:

i. (optional) preprocess: perform a data transform on the input data
ii. map the transformed output to the model and execute inference
iii. (optional) post-process: perform a data transform on the inference output data

This simple design will guarantee the requirements are satisfied.
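A rough sketch of what such a decoupled flow could look like as processor config. This is purely illustrative: the `preprocess` and `postprocess` stages and their field names are hypothetical and do not exist in the current ml_inference processor; only the `input_map` step reflects today's behavior.

```json
{
  "ml_inference": {
    "model_id": "<claude_v3_model_id>",
    "preprocess": {
      "prompt": "You are an ${_source.persona}, tell me about ${_source.query}."
    },
    "input_map": [
      { "inputs": "prompt" }
    ],
    "postprocess": {
      "llm_answer": "$.output.message.content[0].text"
    }
  }
}
```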
Thanks @dylan-tong-aws, I had an offline discussion with Tyler yesterday. He has another proposal which can also address the concern that "a user should never be forced/constrained to create a custom model interface to support a specific search pipeline or use case": Tyler prefers something like this in the input map part,
rather than having such a thing in the input map
and configuring the prompt in model_config.
Correct me if I'm wrong, @ohltyler
Regarding option 3
This implies a flexible and consistent model interface, which implies flexible and consistent keys in the input map / output map configurations, regardless of the use case. This is because the keys are the model interface inputs/outputs (see the ML processor documentation). LLM inputs typically include a freeform text input as part of the API (see the Anthropic messages API's content field). For prompt building, the prompt would be passed via this freeform text input. Therefore, the ideal model interface includes a text input (note the current preset connector/model is set up this way as well, with an inputs field), the key in the input map should be this freeform text input, and the value should be the freeform input itself. So, for prompt-building use cases, the user should be able to build out this freeform prompt as a value in this input map.

@ylwu-amzn I agree with Option 3 as you laid it out as one option. Options 1/2 also seem reasonable. I am indifferent on a solution, and suggest going with the one that makes the most sense from an implementation perspective, given the limitations of the ML processors, etc. If it is not reasonable to expand the ML processor functionality, then it is simply not reasonable, and one of the other solutions should be pursued.
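As a sketch of this idea (illustrative only; this shows the proposed behavior, not what the processor supports today, and the substitution syntax is an assumption): the freeform `inputs` key of the model interface is the input map key, and the assembled prompt is its value.

```json
{
  "input_map": [
    {
      "inputs": "You are an ${persona}, tell me about ${query}. Ensure that you generate a short answer, less than 10 words."
    }
  ]
}
```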
@ohltyler check out the model_input field in the ml inference processor configs.
We can load the prompt field in model_input and skip input mapping if you want; that serves the same purpose.
@mingshl, @ylwu-amzn, I'm fine with whatever approach or syntax that @ohltyler proposes. Tyler represents an end user, so if he approves of the usability, I'm good with it. I only expect the solution to address the aforementioned requirement.
Updated the issue description with sample use cases to explain the workflow of using the ml inference search response processor, and added the two proposals.
What's the problem?
Currently, when using ml inference processors, users choose a model and then set the input_map, loading the input map's keys following the model interface. This design works for simple model interfaces when the documents have the exact format matching the model interface.
However, if an input parameter of the model interface can't be set to the document field name (even after applying a JSON path), users need to change the model interface to match the document field. Constructing a prompt is one such complex use case: a prompt usually mixes static instructions with content from the documents, and the model is not reusable if different model interfaces have to be configured for different prompt engineering use cases.
Sample Use Case - Bedrock Claude V3:
0. Index
Using a music document for the following use case. Every document has two fields, "persona" and "query":
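For illustration, a document in this index might look like the following (values are made up):

```json
{
  "persona": "music critic",
  "query": "Who wrote the song Hotel California?"
}
```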
1. Model:
Using the Bedrock Claude V3 model as an example. In the model API, the messages field is usually required, and the system (system prompt) field is optional.
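For reference, a minimal Claude V3 Messages API request body on Bedrock looks roughly like this (values are illustrative):

```json
{
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": 1000,
  "system": "You are a music critic.",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "Who wrote the song Hotel California?" }
      ]
    }
  ]
}
```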
2. Blueprint:
In OpenSearch, the current blueprint is missing the system field; the ideal blueprint would include it:
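A sketch of what the relevant part of such a blueprint could look like with the system field added (abbreviated; the name, credential, and parameters sections are omitted, and the model id and parameter names are illustrative):

```json
{
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "https://bedrock-runtime.${parameters.region}.amazonaws.com/model/anthropic.claude-3-sonnet-20240229-v1:0/invoke",
      "headers": { "content-type": "application/json" },
      "request_body": "{\"anthropic_version\":\"bedrock-2023-05-31\",\"max_tokens\":${parameters.max_tokens},\"system\":\"${parameters.system}\",\"messages\":[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"${parameters.inputs}\"}]}]}"
    }
  ]
}
```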
Sample predict with a prompt requesting an answer within 60 words:
## returning
Sample predict with a prompt requesting an answer within 10 words:
## returning
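The shape of such a predict call, for reference (the model id is a placeholder and the parameter names depend on the connector; the 10-word variant only changes the system instruction):

```
POST /_plugins/_ml/models/<model_id>/_predict
{
  "parameters": {
    "system": "You are a music critic. Ensure that you generate a short answer, less than 60 words.",
    "inputs": "Who wrote the song Hotel California?"
  }
}
```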
3. Model Interface for Bedrock:
We have a predefined model interface for Bedrock that requires a parameters.inputs field:
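Roughly, the relevant part of such an interface is a JSON schema that marks parameters.inputs as required, while system (and any role-like field) is not required. The pseudo-schema below is only for illustration; the actual interface definition in ML Commons stores these schemas as strings:

```json
{
  "input": {
    "type": "object",
    "properties": {
      "parameters": {
        "type": "object",
        "properties": {
          "inputs": { "type": "string" },
          "system": { "type": "string" }
        },
        "required": ["inputs"]
      }
    }
  }
}
```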
4. Prompt
Now, when configuring the prompt, we put it in the model_config field:
{"model_config":{"system": "You are an ${parameters.role}, tell me about ${parameters.inputs}, Ensure tha you can generate a short answer less than 10 words."}}
But when we load the model interface, it requires the inputs field, while the role field and the system prompt field are both optional model input fields.
5. Input_map
With the current design, we can try to load the model interface keys as the input_map keys, but we also need to map the role field, which is not in the model interface. Here is the full config of the search pipeline:
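A sketch of roughly what that pipeline config looks like (the model id, field names, and JSON paths are placeholders; the role mapping illustrates the problem, since role is not part of the model interface):

```
PUT /_search/pipeline/my_ml_inference_pipeline
{
  "response_processors": [
    {
      "ml_inference": {
        "model_id": "<claude_v3_model_id>",
        "input_map": [
          {
            "inputs": "query",
            "role": "persona"
          }
        ],
        "output_map": [
          {
            "llm_response": "output.message.content[0].text"
          }
        ],
        "model_config": {
          "system": "You are an ${parameters.role}, tell me about ${parameters.inputs}, Ensure that you can generate a short answer less than 10 words."
        }
      }
    }
  ]
}
```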
Pain point:
The pain point is formatting document fields to match the model interface.
If we want the model input fields to match the document fields, users need to modify the model interface.
Proposal:
Proposal 1: Allow String Substitutions in input_map field
Remove the prompt setting from model_config in the ml inference processor, and configure the prompt (system) field in the input_map, similar to below:
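A sketch of the proposed input_map with string substitution (the substitution syntax and field names are illustrative, not an existing feature):

```json
{
  "input_map": [
    {
      "inputs": "query",
      "system": "You are an ${persona}, tell me about ${query}. Ensure that you generate a short answer, less than 10 words."
    }
  ]
}
```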
To benchmark, this is the equivalent predict API command:
Pro:
Cons:
Optional fields not showing in the model interface:
The model interface requires an inputs field, and the system field is optional, so we can't preload the system field as an input map key.
Regex pattern limitation:
When we apply string substitution in the input_map, the question cannot happen to contain the same pattern. For example, suppose the question is: "can you explain this code to me: `String templateString = "The ${animal} jumped over the ${target}."; animal = "quick brown fox"; target = "lazy dog"`", but the document has a field `target="giant lion"`. In this case, the string in the question is interpreted by the input map as
"The quick brown fox jumped over the giant lion",
but in the question the string should mean:
"The quick brown fox jumped over the lazy dog".
The ${} pattern in the string will always be substituted whenever there happens to be a document field with the same name.
Proposal 2: Transform the model input format by applying a setting in the model_input field
In the ml inference processor, there is a model_input config parameter that can help format the model inputs. There is a rerank example in the documentation which uses model_input. model_input can bypass the request body format in the connector setting. In this example, the model_input field can be constructed like this, with the input_map keys matching the placeholders used in model_input:
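A sketch of what Proposal 2 could look like (the placeholder syntax inside model_input is illustrative, and the field names are assumptions):

```json
{
  "model_input": "{ \"system\": \"You are an ${parameters.role}, tell me about ${parameters.inputs}. Ensure that you generate a short answer, less than 10 words.\", \"inputs\": \"${parameters.inputs}\" }",
  "input_map": [
    {
      "inputs": "query",
      "role": "persona"
    }
  ]
}
```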
Pros:
The model_input field is already used to format local model inputs; it was added so local models can be formatted with fewer pre-processing functions.
Cons:
TBD