
Conversation

@robhoes (Member) commented Apr 8, 2025

Adds a new datamodel field in the pool class:

    bool pool.ha_reboot_vm_on_internal_shutdown

The field is read/write, with the setter restricted to the pool-operator
role. The default of this new field is true to reflect the current
behaviour.
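
As a rough illustration (not the actual diff of this PR), such a field would typically be declared in the datamodel DSL along these lines; the labelled arguments and the pool-operator role constant are assumptions based on how other pool fields are declared:

    (* Illustrative sketch only; argument names, defaults and the role
       constant are assumed, not copied from this PR's diff. *)
    field ~qualifier:RW ~ty:Bool
      ~default_value:(Some (VBool true))
      ~writer_roles:_R_POOL_OP
      "ha_reboot_vm_on_internal_shutdown"
      "If false, HA does not restart a protected VM that was shut down \
       from inside the guest"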

This field controls what happens when HA is enabled and a protected VM
is shut down internally (e.g. by pressing the shutdown button in
Windows):

  • true: the VM is automatically restarted.
  • false: the VM is not restarted and left halted – consistent with
    the behaviour of shutting the VM down through the API.

CLI:

    xe pool-param-set uuid=... ha-reboot-vm-on-internal-shutdown=false
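
The same flag could presumably also be flipped through the generated API bindings. A hypothetical OCaml client call, assuming the usual Client.Pool.set_<field> naming of the generated bindings:

    (* Hypothetical client-side call: the setter name follows xapi's
       generated Client.Pool.set_<field> convention and is assumed here. *)
    let disable_reboot_on_internal_shutdown ~rpc ~session_id =
      let pool = List.hd (Client.Pool.get_all ~rpc ~session_id) in
      Client.Pool.set_ha_reboot_vm_on_internal_shutdown ~rpc ~session_id
        ~self:pool ~value:false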

Whether an HA-protected VM is automatically (re)started depends on the
field VM.ha_always_run, which is managed by xapi. This field is set to
true when a protected VM is started, and to false when it is shut
down through the API, which prevents the HA monitor thread from
restarting it again. Setting the new pool field to false triggers the
same behaviour when such a VM is shut down from the inside: ha_always_run
is cleared while handling the shutdown event in the xenopsd-events thread.
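
Concretely, the handling in the xenopsd-events thread boils down to something like the following. This is a simplified sketch; the helper and accessor names are assumptions based on xapi's generated Db modules, not the actual patch:

    (* Simplified sketch of the shutdown-event handling: if the pool says
       not to reboot on internal shutdowns, clear ha_always_run before the
       power_state is updated, so the HA monitor thread does not race and
       restart the VM. Function and variable names are illustrative only. *)
    let on_internal_shutdown ~__context ~vm =
      let pool = Helpers.get_pool ~__context in
      if
        Db.VM.get_ha_always_run ~__context ~self:vm
        && not
             (Db.Pool.get_ha_reboot_vm_on_internal_shutdown ~__context
                ~self:pool)
      then
        Db.VM.set_ha_always_run ~__context ~self:vm ~value:false ;
      (* ... then update power_state and allowed_operations as before ... *)
      ()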

"Will update VM.allowed_operations because power_state has \
changed." ;
should_update_allowed_operations := true ;
(* Update ha_always_run before the power_state (if needed), to avoid racing
Contributor commented:

So the HA monitor thread checks the field ha_always_run to determine whether it needs to restart a protected VM. Is it possible to add this logic into Helpers.vm_should_always_run (although that one does not have a context) or xapi_ha_vm_failover.all_protected_vms?
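
For context, the check this comment refers to amounts to something like the following. This is a simplified sketch with assumed record field names, not the actual xapi_ha_vm_failover code:

    (* Simplified sketch: the HA planner only treats a VM as protected when
       ha_always_run is set and it has a restart priority; clearing
       ha_always_run therefore stops the monitor from restarting it.
       Record field names are assumed from xapi's generated API module. *)
    let all_protected_vms ~__context =
      Db.VM.get_all_records ~__context
      |> List.filter (fun (_vm, vm_rec) ->
             vm_rec.API.vM_ha_always_run
             && vm_rec.API.vM_ha_restart_priority = "restart")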

Contributor commented:

Should the documentation of the field in the IDL reference ha_always_run as a dependency?

Member Author (robhoes) commented:

@lindig VM.ha_always_run is really an internal implementation detail now, and API clients shouldn't look at it anymore. Initially, this field was the way to control protected VMs through the API, but it has been deprecated for a long time.

Member Author (robhoes) commented:

@Vincent-lau I think it will be hard to encode this in some sort of new state without catching the shutdown event here.

robhoes added 2 commits April 8, 2025 14:47
robhoes added this pull request to the merge queue Apr 10, 2025
Merged via the queue into xapi-project:master with commit 1d77bfc Apr 10, 2025
17 checks passed