I'm using InSilicoSeq to produce Illumina MiSeq reads for a draft genome (~4000 contigs) and this results in the generation of thousands of tmp files, which is causing issues on my cloud VM. This is not so much of a problem when using genomes consisting of a small number of contigs, but when using fasta files containing many contigs, it keeps crashing my system. It would be great if during the read generation process the temporary results could be written to a much smaller number of tmp files.
Thank you for reporting this. I'll keep this issue in mind and try to improve how temp files are handled in a future release.

Meanwhile, if it's a big issue, reducing the number of CPUs iss uses will reduce the number of temp files created.
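As a sketch of that workaround, something like the following should keep the temp-file count down by limiting worker processes (the genome filename and read count here are placeholders; adjust to your data):

```shell
# Generate MiSeq reads with a small number of CPUs to limit
# how many temporary files InSilicoSeq creates in parallel.
iss generate \
    --genomes draft_genome.fasta \  # placeholder: your ~4000-contig assembly
    --model miseq \
    --n_reads 1M \
    --cpus 2 \                      # fewer CPUs -> fewer concurrent tmp files
    --output miseq_reads
```

Running on a filesystem with a higher inode/open-file budget (or pointing `TMPDIR` at a larger scratch volume) may also help until the handling is improved.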