
Fixed error with log likelihood #858

Open · wants to merge 1 commit into base: master
Conversation

@lazypanda1 (Contributor)
When I use the log_likelihood criticism method on the Bayesian regression example, it fails with the error shown below.

Program:

....

  # CRITICISM

  # Plot posterior samples.
  sns.jointplot(qb.params.eval()[FLAGS.nburn:FLAGS.T:FLAGS.stride],
                qw.params.eval()[FLAGS.nburn:FLAGS.T:FLAGS.stride])
  plt.show()

  # Posterior predictive checks.
  y_post = ed.copy(y, {w: qw, b: qb})

 ....

  print("Log likelihood on test data:")
  print(ed.evaluate('log_lik', data={X: X_test, y_post: y_test}))

  print("Displaying prior predictive samples.")
  n_prior_samples = 10

  ....

if __name__ == "__main__":
  tf.app.run()

Error:

 File "bayesian_regression.py", line 70, in main
    print(ed.evaluate('log_lik', data={X: X_test, y_post: y_test}))
  File "/usr/local/lib/python2.7/site-packages/edward/criticisms/evaluate.py", line 219, in evaluate
    evaluations += [log_likelihood(y_true, n_samples, output_key, feed_dict, sess)]
  File "/usr/local/lib/python2.7/site-packages/edward/criticisms/evaluate.py", line 478, in log_likelihood
    tensor = tf.reduce_mean(output_key.log_prob(y_true))
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/distributions/distribution.py", line 718, in log_prob
    return self._call_log_prob(value, name)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/distributions/distribution.py", line 700, in _call_log_prob
    return self._log_prob(value, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/distributions/normal.py", line 189, in _log_prob
    return self._log_unnormalized_prob(x) - self._log_normalization()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/distributions/normal.py", line 207, in _log_unnormalized_prob
    return -0.5 * math_ops.square(self._z(x))
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/distributions/normal.py", line 232, in _z
    return (x - self.loc) / self.scale
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 925, in binary_op_wrapper
    y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 946, in convert_to_tensor
    as_ref=False)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1036, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 879, in _TensorTensorConversionFunction
    (dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype float64 for Tensor with dtype float32: 'Tensor("copied/Normal_2/loc:0", shape=(40,), dtype=float32)'

It seems the log_likelihood module was missing a cast to float32. I have added the cast in this change and also refactored the code slightly.
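To illustrate the mismatch (a minimal NumPy sketch, not the actual evaluate.py diff; the variable names here are hypothetical stand-ins for the tensors in the traceback):

```python
import numpy as np

# The model's Normal distribution carries float32 parameters,
# while NumPy arrays default to float64 on most platforms.
loc = np.zeros(40, dtype=np.float32)   # stands in for 'copied/Normal_2/loc:0'
y_true = np.linspace(0.0, 1.0, 40)     # NumPy default dtype is float64
assert y_true.dtype == np.float64

# Mixing the two is what TensorFlow rejects with the ValueError above.
# The cast this change adds, conceptually: align y_true with the model dtype.
y_true_cast = y_true.astype(loc.dtype)
assert y_true_cast.dtype == np.float32
```

In TensorFlow itself the equivalent cast would be `tf.cast(y_true, output_key.dtype)` before calling `log_prob`.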

@dustinvtran (Member)
Thanks for sharing. Is this something that should be fixed in the code base? It seems like the inputs to evaluate should simply have a compatible dtype.

@lazypanda1 (Contributor, Author)

I also thought so, but the dtype is not used anywhere in the code except for the placeholder. I tried making it float64, but the program crashed with that as well.
Also, this does not happen when I try mse or the other metrics, since they already cast the values in evaluate.py. Hence, I thought log_lik might need the same cast.
What do you think?

@dustinvtran (Member)

Can you provide a minimal reproducible example? What happens if you downcast the NumPy array to float32?
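The user-side workaround being suggested would look roughly like this (a sketch assuming X_test and y_test are NumPy arrays as in the example script; the random data here is a placeholder, not the example's build_toy_dataset output):

```python
import numpy as np

# Placeholder test data; np.random.randn returns float64 by default.
X_test = np.random.randn(40, 1)
y_test = np.random.randn(40)

# Downcast before passing the arrays to ed.evaluate, so they match
# the model's float32 tensors and no cast is needed inside Edward.
X_test = X_test.astype(np.float32)
y_test = y_test.astype(np.float32)
assert X_test.dtype == np.float32 and y_test.dtype == np.float32
```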
