Closed
Labels
:Core/Infra/Core (Core issues without another label), :ml (Machine learning), Team:Core/Infra (Meta label for core/infra team), Team:ML (Meta label for the ML team)
Description
Currently, if ml.use_auto_machine_memory_percent is set to true, the amount of available memory on an ML node is calculated as
NODE_MEMORY - JVM_HEAP_SIZE - 200MB OFF-HEAP MEMORY
where JVM_HEAP_SIZE is configured at Elasticsearch startup and the off-heap memory is estimated as a fixed 200MB.
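For illustration, a minimal sketch of this calculation; the class and method names below are hypothetical and do not correspond to the actual Elasticsearch code:

```java
// Hypothetical sketch of the current available-memory calculation when
// ml.use_auto_machine_memory_percent is true. Names are illustrative only.
public final class MlNativeMemoryEstimate {

    // Fixed off-heap allowance currently assumed for the JVM process.
    private static final long OFF_HEAP_OVERHEAD_BYTES = 200L * 1024 * 1024;

    // Memory assumed to be available for ML native processes on the node.
    static long availableForMl(long nodeMemoryBytes, long jvmHeapSizeBytes) {
        return nodeMemoryBytes - jvmHeapSizeBytes - OFF_HEAP_OVERHEAD_BYTES;
    }

    public static void main(String[] args) {
        long nodeMemory = 16L * 1024 * 1024 * 1024; // 16 GB node
        long jvmHeap = 4L * 1024 * 1024 * 1024;     // 4 GB heap (-Xmx4g)
        System.out.println(availableForMl(nodeMemory, jvmHeap) / (1024 * 1024)
                + " MB assumed available for ML"); // 12088 MB in this example
    }
}
```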
Some empirical evidence suggests that the off-heap memory can be significantly larger than 200MB, which can lead to the Java process being killed by the kernel OOM killer.
We need to re-evaluate how the ML code determines the available memory and decide whether that calculation needs to be adjusted.