Hi @menyifang
Thanks for the great work. I would like to know how the contextual score is evaluated. I ran contextual_similarity.py in the "cx" folder and pointed the paths to the results produced by test.py, and I made sure that the images returned by DualDataset are the source (ground-truth) images and the generated images. However, the reported contextual score does not appear to be an average over the data: line 180, `cx += float(loss_layer(ref_single, fake_single))`, accumulates the contextual score over each batch, but it is never divided by the number of iterations (8570/48) afterwards.
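For reference, this is roughly the averaging I expected; it is only a sketch, and `dataloader` / `loss_layer` here stand in for the corresponding objects in contextual_similarity.py, whose exact names I may be misremembering:

```python
# Hypothetical sketch of the averaging I had in mind; names follow my reading
# of contextual_similarity.py and may not match the actual code exactly.
cx = 0.0
num_iters = 0
for ref, fake in dataloader:          # ~8570 image pairs, batch size 48
    for ref_single, fake_single in zip(ref, fake):
        # line 180 accumulates the per-image contextual loss into cx
        cx += float(loss_layer(ref_single, fake_single))
    num_iters += 1

# divide by the number of iterations (8570 / 48) instead of reporting the raw sum
cx /= num_iters
print('contextual score:', cx)
```

Is this the intended behaviour, or should the score indeed be normalised somewhere that I am missing?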