Gradient descent based fitting for non-gaussian distributions #10
I'd like to start implementing this soon. However, I think it's worthwhile discussing how we want to do this. At first thought, the following issues come to mind:
Implementing a Bayesian calculation for P(Model|Data) would allow for a general maximum likelihood calculation with a known confidence interval. I'd suggest defaulting to a flat prior, which may be overridden by the user.
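To make that concrete, here is a minimal sketch of what this could look like for a single binomial parameter, assuming a grid posterior with a flat default prior that the user can override. The function names (`binomial_posterior`, `credible_interval`) are illustrative, not existing API in this repo:

```python
import numpy as np
from scipy.stats import binom

def binomial_posterior(k, n, p_grid=None, prior=None):
    """Grid posterior P(p | k successes out of n trials).

    Defaults to a flat prior over p; a user can override it by
    passing their own `prior` array evaluated on the same grid.
    """
    if p_grid is None:
        p_grid = np.linspace(0.0, 1.0, 1001)
    if prior is None:
        prior = np.ones_like(p_grid)  # flat prior by default

    # P(Model | Data) proportional to P(Data | Model) * P(Model)
    posterior = binom.pmf(k, n, p_grid) * prior
    posterior /= np.trapz(posterior, p_grid)  # normalise the density
    return p_grid, posterior

def credible_interval(p_grid, posterior, level=0.68):
    """Central credible interval from the grid posterior."""
    cdf = np.cumsum(posterior)
    cdf /= cdf[-1]
    lo = p_grid[np.searchsorted(cdf, (1 - level) / 2)]
    hi = p_grid[np.searchsorted(cdf, (1 + level) / 2)]
    return lo, hi

# Example: 7 successes in 10 shots
p_grid, post = binomial_posterior(k=7, n=10)
print(credible_interval(p_grid, post))
```

The credible interval here plays the role of the "known confidence interval" mentioned above, without any Gaussian approximation.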
After discussing with @hartytp, I think there are really 3 issues:
This post seems to suggest the right thing.
Regarding these, as mentioned before, the only case where the distribution really ends up mattering is for acquisition-time-limited problems like state tomography. For that, established codes for MLE/Bayesian estimation already exist. For other online calibration problems, it's typically easier to just acquire a bit more data; and if one is data-rate-bound, the way to go is adaptive/online Bayesian methods (choosing e.g. Ramsey delay times based on prior data), where 1/N scaling in the error can often be achieved.
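As a toy illustration of the adaptive idea (this is not code from any of the established packages, and the true detuning, grid, and delay heuristic are all made up for the example): maintain a grid posterior over a Ramsey detuning and choose each delay as roughly the inverse of the current frequency uncertainty, so that later shots probe ever finer fringes:

```python
import numpy as np

rng = np.random.default_rng(0)
f_true = 1.23e3  # Hz; unknown detuning to be estimated (assumed for the demo)

# Grid posterior over the detuning, initially flat
f_grid = np.linspace(0.0, 2e3, 4001)
posterior = np.ones_like(f_grid)
posterior /= posterior.sum()

def p_up(f, t):
    """Ramsey fringe: probability of measuring |1> after delay t."""
    return 0.5 * (1 + np.cos(2 * np.pi * f * t))

for shot in range(100):
    # Heuristic adaptive choice: delay ~ inverse of current uncertainty
    mean_f = np.sum(posterior * f_grid)
    sigma_f = np.sqrt(max(np.sum(posterior * (f_grid - mean_f) ** 2), 1e-12))
    t = 1 / (2 * np.pi * max(sigma_f, 1.0))  # cap the delay as sigma shrinks

    # Simulate a single binary measurement outcome
    outcome = rng.random() < p_up(f_true, t)

    # Bayesian update with the single-shot (binomial) likelihood
    like = p_up(f_grid, t) if outcome else 1 - p_up(f_grid, t)
    posterior *= like
    posterior /= posterior.sum()

f_est = np.sum(posterior * f_grid)
print(f"estimate: {f_est:.1f} Hz (true {f_true:.1f} Hz)")
```

Because the probed fringe narrows as the posterior does, the error can shrink roughly as 1/N in the number of shots rather than the 1/sqrt(N) of repeated fixed-delay measurements.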
Edited to be intelligible
It would be useful to have a parameter inference function that does not assume Gaussian errors.
Common examples are the binomial and Poisson distributions.
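For concreteness, a minimal sketch of what such a function might look like for the Poisson case: minimize the negative log-likelihood with `scipy.optimize.minimize` (L-BFGS-B uses finite-difference gradients, in the spirit of the gradient-descent framing of this issue). The names `poisson_nll` and `decay`, and the synthetic data, are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def poisson_nll(params, t, counts, model):
    """Negative log-likelihood for Poisson-distributed counts."""
    mu = model(t, *params)
    if np.any(mu <= 0):
        return np.inf  # the Poisson rate must be positive
    return -np.sum(counts * np.log(mu) - mu - gammaln(counts + 1))

def decay(t, a, tau):
    """Example model: exponentially decaying count rate."""
    return a * np.exp(-t / tau)

# Synthetic Poisson data drawn from the model
rng = np.random.default_rng(1)
t = np.linspace(0, 5, 50)
counts = rng.poisson(decay(t, 100.0, 1.5))

res = minimize(poisson_nll, x0=[80.0, 1.0], args=(t, counts, decay),
               method="L-BFGS-B", bounds=[(1e-6, None), (1e-6, None)])
print(res.x)  # fitted (a, tau), with no Gaussian-error assumption
```

The binomial case would be analogous, swapping in the binomial log-likelihood for `poisson_nll`.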