Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
Right now, we are choosing arbitrary instance sizes that we know to work well. With this approach, we have to repeat the effort per provider, and the services' resource requirements aren't visible anywhere. I suggest we switch to a dynamically calculated system.
e.g. instead of hard-coding m1.small, request minRam( (-Xmx heap size + 25% overhead) * number of JVMs + OS overhead )
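A minimal sketch of the proposed calculation (illustrative only; the class and method names are hypothetical and not part of the Whirr API, and the 25% JVM overhead factor is the figure suggested above):

```java
public class MinRamCalculator {
    /**
     * Minimum instance RAM in MB:
     * (-Xmx heap + 25% per-JVM overhead) * number of JVMs, plus OS overhead.
     */
    static int minRamMb(int heapMbPerJvm, int jvmCount, int osOverheadMb) {
        double perJvm = heapMbPerJvm * 1.25; // heap plus 25% overhead
        return (int) Math.ceil(perJvm * jvmCount + osOverheadMb);
    }

    public static void main(String[] args) {
        // e.g. two JVMs with 1024 MB heaps plus 512 MB for the OS
        System.out.println(minRamMb(1024, 2, 512)); // 3072
    }
}
```

The cloud-provider layer could then pick the smallest instance type whose RAM meets this minimum, instead of each service naming a provider-specific size like m1.small.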
Attachments
Issue Links
- is related to
  - WHIRR-282 Set number of Hadoop slots based on hardware (Resolved)