Error in running CT-SEG (Docker Image) #17

Open
arturjpaulo opened this issue Jun 20, 2022 · 1 comment

@arturjpaulo

I have been facing a problem when running CT-SEG using the Docker image: the 'temp' files are created, but the process stops quickly and the segmentation files are not generated.

For example, when I enter:

docker run --rm -it -v dir_host:/data ubuntu:ctseg eval "spm_CTseg('/data/CT.nii','ct_result',true,true,true,true,1,2,0.0005)"

I noticed that the process stops when it reaches 15.5 GB of memory. Do you know if there is a way to limit or parallelize this process within the Dockerfile, so that it does not stop when it exhausts the available RAM?
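(For reference, Docker itself can cap a container's RAM with the `--memory` flag on `docker run`; a minimal sketch is below, with an arbitrary 12 GB limit. Note that a cap only protects the host: a process that exceeds it is killed rather than completed.)

```bash
# Sketch only: the same command as above, with an arbitrary 12 GB memory cap.
# If CTseg needs more memory than the cap, the container process is OOM-killed.
docker run --rm -it --memory=12g -v dir_host:/data ubuntu:ctseg \
    eval "spm_CTseg('/data/CT.nii','ct_result',true,true,true,true,1,2,0.0005)"
```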

[Attachments: MicrosoftTeams-image (1), MicrosoftTeams-image]

@brudfors (Contributor)

Hello @arturjpaulo

I suspect you are giving CTseg a large image, which means that RAM usage will be high. Unfortunately, there are no tricks available for decreasing memory use at the level of calling the algorithm (i.e., in your docker run command). If you cannot increase the RAM, you could have a look at the utility function:

function pth_out = modify_img_vx(pth_in,vx,odir,deg,bc)

It allows you to downsample an image without breaking the affine matrix in the NIfTI header. For example, you could try setting the voxel size to 1 mm isotropic.
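A hedged sketch of how that might be called from MATLAB before re-running the segmentation; the argument semantics are assumptions read off the signature above, and the `deg`/`bc` values are guesses, not something this thread confirms:

```matlab
% Minimal sketch, not tested against CTseg: assumes vx = target voxel
% size, odir = output directory, deg = interpolation degree, and
% bc = boundary condition. Downsample the CT to 1 mm isotropic.
pth_out = modify_img_vx('/data/CT.nii', [1 1 1], '/data', 1, 0);

% Then run the segmentation on the downsampled image instead of the original.
spm_CTseg(pth_out, 'ct_result', true, true, true, true, 1, 2, 0.0005);
```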
