Commit
enable storage optimization by default (#921)
Summary:
Pull Request resolved: #921

Let's enable storage optimizations by default in TorchTNT, in preparation for the upcoming DCP guidance post.

We don't need to change anything in Mitra, as its default knob option sets this to False:
https://www.internalfb.com/code/fbsource/[99acb2db7d2b]/fbcode/content_understanding/framework/training/types.py?lines=40-47

Reviewed By: saumishr

Differential Revision: D64205385

fbshipit-source-id: 076f42ecbf04a5dd36bd8be5946991978944ee03
JKSenthil authored and facebook-github-bot committed Oct 11, 2024
1 parent 1beb1f0 commit b9e7c1f
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions torchtnt/framework/callbacks/checkpointer_types.py
@@ -25,9 +25,9 @@ class KnobOptions:
     # use a more conservative number of concurrent IO operations per rank in Checkpointing
     # the default value of 16 is too bandwidth hungry for most users
     max_per_rank_io_concurrency: Optional[int] = None
-    # This is a no-op and for future use. This would enable storage efficiency optimizations:
+    # This would enable storage efficiency optimizations (model store):
     # e.g. Compression, Batching, Quantization etc.
-    enable_storage_optimization: bool = False
+    enable_storage_optimization: bool = True


@dataclass
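The effect of the flipped default, and how a caller can opt back out, can be sketched with a minimal local mirror of the `KnobOptions` fields touched by this diff (illustrative only; the real class lives in `torchtnt/framework/callbacks/checkpointer_types.py`):

```python
from dataclasses import dataclass
from typing import Optional

# Minimal stand-in for the KnobOptions dataclass shown in the diff above;
# only the two fields from the hunk are reproduced here.
@dataclass
class KnobOptions:
    # conservative number of concurrent IO operations per rank in checkpointing
    max_per_rank_io_concurrency: Optional[int] = None
    # enables storage efficiency optimizations (model store),
    # e.g. compression, batching, quantization
    enable_storage_optimization: bool = True

# New default after this commit: optimization is on.
defaults = KnobOptions()
print(defaults.enable_storage_optimization)  # True

# Callers who want the previous behavior can opt out explicitly.
opted_out = KnobOptions(enable_storage_optimization=False)
print(opted_out.enable_storage_optimization)  # False
```

Because the knob is a plain dataclass field, downstream code (like the Mitra config noted above) that already passes `enable_storage_optimization=False` explicitly is unaffected by the new default.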
