I have been facing a problem when running CTseg using the Docker image: the 'temp' files are created, but the process stops quickly and the segmentation files are not generated.
For example, when I enter:
docker run --rm -it -v dir_host:/data ubuntu:ctseg eval "spm_CTseg('/data/CT.nii','ct_result',true,true,true,true,1,2,0.0005)"
I realized that the process stops when it reaches 15.5 GB of memory. Do you know if there is a way to limit or parallelize this process within the Dockerfile, so that it does not stop when it hits the full RAM capacity?
I suspect you are giving CTseg a large image, which means that RAM usage will be high. Unfortunately, there are no tricks available for decreasing memory use at the level of calling the algorithm (i.e., your docker run command). If you cannot increase the RAM, you could have a look at the utility function:
It allows you to downsample an image without breaking the affine matrix in the NIfTI header. For example, you could try setting the voxel size to 1 mm isotropic.
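The referenced utility function lives in the CTseg repository and is not reproduced in this thread. As a rough illustration of the same idea (not the CTseg utility itself), here is a minimal sketch using Python and nibabel; the filenames are placeholders, and `resample_to_output` recomputes the affine so the header stays consistent with the new voxel size.

```python
# Minimal sketch (not the CTseg utility): downsample a CT volume to
# 1 mm isotropic voxels while keeping the NIfTI affine consistent.
# Filenames are placeholders.
import nibabel as nib
from nibabel.processing import resample_to_output

img = nib.load('CT.nii')                                        # original high-resolution CT
low = resample_to_output(img, voxel_sizes=(1.0, 1.0, 1.0))      # resample to 1 mm isotropic
nib.save(low, 'CT_1mm.nii')                                     # pass this file to spm_CTseg instead
```

Running CTseg on the downsampled volume should bring the peak memory well below what the full-resolution image requires.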