Would be nice if every job didn't have to run on, e.g., a 50 GB pod when most jobs probably require way less.
e.g. a job that makes an HTTP API call should run on a 1 GB node, not a 50 GB node.
If it were possible to schedule multiple jobs at the same time on the same pod (am I using this word correctly?), this would be less of a problem, but still not ideal.
A pod is a collection of containers that run together, i.e. they are scheduled on the same machine, share a port namespace (so they can communicate via localhost), and share a memory namespace, allowing for other IPC mechanisms. Each of the jobs submitted by ketrew becomes a pod (that's my understanding, at least).
It is possible to schedule multiple pods on the same machine at once. For that to happen, the cumulative resource requests of the pods must be less than the machine's available resources, i.e. the total requested CPU + RAM must be less than the physical CPU + RAM.
Going forward, we should size our pods based on what they actually need rather than always asking for an entire box. Requesting a whole box per job was a good way to get started quickly and saved us the time of thinking through every job's resource requirements.
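As a sketch of what right-sizing might look like (the pod name, container name, and image below are placeholders, not anything from this repo), a lightweight job could declare a small resource request so the scheduler can pack several such pods onto one node:

```yaml
# Hypothetical pod spec for a small job, e.g. one that just makes an HTTP API call.
# With a 1Gi request instead of ~50Gi, many of these can share a single machine.
apiVersion: v1
kind: Pod
metadata:
  name: small-http-job        # placeholder name
spec:
  restartPolicy: Never        # run-to-completion, job-style
  containers:
  - name: worker
    image: example/worker:latest   # placeholder image
    resources:
      requests:               # what the scheduler reserves on the node
        memory: "1Gi"
        cpu: "250m"
      limits:                 # hard cap so one job can't starve its neighbors
        memory: "1Gi"
```

The scheduler only counts `requests` when deciding whether a pod fits on a node, so keeping requests honest is what actually enables the bin-packing described above.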