If I change `normalization_method` in `config.yaml` to `min_max`, the resulting percentages should fall in the range 0-100%, but some results exceed 100%.
What causes the normalized score to exceed 100%?
I would also like to know what the resulting percentages represent.
I found out that the percentage is calculated from the normalized anomaly score from "#751", but how exactly is the percentage calculated?
Please tell me the percentage when the anomaly score is 0.75 and when it is 0.25 (assuming a threshold of 0.50).
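For context on how such a percentage can be derived, anomalib's min-max normalization (at least in the 0.x releases) maps scores with a formula along the lines of `(score - threshold) / (max - min) + 0.5`, where `min` and `max` are the extreme anomaly scores observed during validation and the threshold lands at 0.5 (50%). A test score above the validation-time maximum then normalizes above 1.0, i.e. over 100%. This is a minimal sketch under those assumptions, not the library's exact implementation:

```python
def min_max_normalize(score: float, threshold: float,
                      min_val: float, max_val: float) -> float:
    """Sketch of anomalib-style min-max normalization (assumed formula).

    min_val/max_val are the extreme anomaly scores seen during
    validation; the adaptive threshold is mapped to 0.5 (i.e. 50%).
    """
    return (score - threshold) / (max_val - min_val) + 0.5

# Assuming validation scores spanned [0.0, 1.0] and a threshold of 0.50:
print(min_max_normalize(0.75, 0.50, 0.0, 1.0))  # 0.75 -> 75%
print(min_max_normalize(0.25, 0.50, 0.0, 1.0))  # 0.25 -> 25%

# A test score above the validation max normalizes above 1.0 (over 100%):
print(min_max_normalize(1.30, 0.50, 0.0, 1.0))  # 1.30 -> 130%
```

Under these (assumed) values the mapping happens to be the identity, so 0.75 becomes 75% and 0.25 becomes 25%; with a different validation min/max or threshold the percentages would differ.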
The 'Config.yaml' settings for the custom dataset are as follows.
Config.yaml

```yaml
dataset:
  name: tube
  format: folder
  path: ./datasets/tube5
  normal_dir: good
  abnormal_dir: contamination
  normal_test_dir: null
  task: classification
  mask: # optional
  extensions: null
  split_ratio: 0.2
  train_batch_size: 32
  test_batch_size: 32
  eval_batch_size: 32
  inference_batch_size: 32
  num_workers: 8
  image_size: 256 # dimensions to which images are resized (mandatory)
  center_crop: null # dimensions to which images are center-cropped after resizing (optional)
  normalization: imagenet # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    val: null
  create_validation_set: false
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  tiling:
    apply: false
    tile_size: 64
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: ganomaly
  latent_vec_size: 100
  n_features: 64
  extra_layers: 0
  add_final_conv: true
  early_stopping:
    patience: 3
    metric: image_AUROC
    mode: max
  lr: 0.0002
  beta1: 0.5
  beta2: 0.999
  wadv: 1
  wcon: 50
  wenc: 1
  normalization_method: min_max

metrics:
  image:
    - F1Score
    - AUROC
  threshold:
    method: adaptive # options: [adaptive, manual]
    manual_image: null

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: simple # options: ["full", "simple"]

project:
  seed: 42
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: null

# PL Trainer Args. Don't add extra parameter here.
trainer:
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  accumulate_grad_batches: 1
  amp_backend: native
  auto_lr_find: false
  auto_scale_batch_size: false
  auto_select_gpus: false
  benchmark: false
  check_val_every_n_epoch: 2
  default_root_dir: null
  detect_anomaly: false
  deterministic: false
  devices: 1
  enable_checkpointing: true
  enable_model_summary: true
  enable_progress_bar: true
  fast_dev_run: false
  gpus: null # Set automatically
  gradient_clip_val: 0
  ipus: null
  limit_predict_batches: 1.0
  limit_test_batches: 1.0
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  log_every_n_steps: 50
  log_gpu_memory: null
  max_epochs: 30
  max_steps: -1
  min_epochs: 30
  min_steps: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
  num_nodes: 1
  num_processes: null
  num_sanity_val_steps: 0
  overfit_batches: 0.0
  plugins: null
  precision: 32
  profiler: null
  reload_dataloaders_every_n_epochs: 0
  replace_sampler_ddp: true
  strategy: null
  sync_batchnorm: false
  tpu_cores: null
  track_grad_norm: -1
  val_check_interval: 1.0
```