Should I add field level data fetchers if my document type fields all come from a single Mongo document? #757
Hi everyone, I'm trying to figure out the best practice for my particular use case. I have a Document type with three fields; the `fields`, `sys`, and `data` fields all live on the same Mongo document, so I currently have a single data fetcher for `Document` that returns `fields`, `sys`, and `data` with one db call. I use an async data fetcher with a mapped data loader to retrieve the `anotherField` documents. Here is my actual question: the `data` field requires some additional mapping/processing. Should that be done in the Document data fetcher I already have, or should I add a separate data fetcher for the `data` field that handles the mapping? My ultimate goal is to reduce processing time and make my API much faster. Thanks in advance, I appreciate any guidance on this. Below is an example of what my schema looks like for the Document type.
These are the two queries I would like to support, shown below.
Replies: 1 comment
I would do the extra processing in the document data fetcher. That's the unit of data you're loading from your database anyway, so there's no point in trying to split it up into multiple data fetchers. If the processing is CPU-expensive, you could consider making the data fetcher async and running the post-processing on a worker pool using completable futures. However, that's an optimization I would only use if you actually get gains out of parallelizing it; otherwise it just complicates things for no reason.
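The async shape suggested above can be sketched with plain `java.util.concurrent`; this is a minimal illustration only, with the graphql-java wiring omitted and a hypothetical `postProcess` step standing in for the real `data`-field mapping:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncPostProcessing {
    // Dedicated worker pool so CPU-heavy mapping does not run on the fetcher thread.
    static final ExecutorService WORKERS = Executors.newFixedThreadPool(4);

    // Hypothetical CPU-expensive mapping of the raw Mongo "data" field.
    static Map<String, Object> postProcess(Map<String, Object> rawData) {
        return Map.of("processed", rawData.get("value"));
    }

    // Shape of an async data fetcher body: take the loaded document,
    // run the post-processing on the worker pool, return a future.
    static CompletableFuture<Map<String, Object>> fetchData(Map<String, Object> rawDoc) {
        return CompletableFuture.supplyAsync(() -> postProcess(rawDoc), WORKERS);
    }

    public static void main(String[] args) throws Exception {
        Map<String, Object> doc = Map.of("value", 42);
        System.out.println(fetchData(doc).get()); // prints {processed=42}
        WORKERS.shutdown();
    }
}
```

In graphql-java, returning the `CompletableFuture` from the data fetcher is enough for the engine to treat it as async; the point of the explicit executor is only to keep the expensive mapping off the default pool. As the reply notes, measure before adding this, since for cheap mappings it just adds complexity.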