Add option to control VM power states #327

Closed
Matty9191 opened this issue Dec 18, 2017 · 6 comments

Matty9191 commented Dec 18, 2017

Howdy,

I finally got a chance to play with the vsphere provider. The provider rocks, and I want to thank you for all the hard work you did on it! We use cobbler to provision systems, and I bumped into an issue this morning. Cobbler uses the MAC address to connect a given system with a profile during the PXE boot process, and I would like to be able to use the MAC address assigned to the VM at creation time in my cobbler resource. The MAC would be referenced similar to this:

mac_address = "${vsphere_virtual_machine.kub1_vm.network_interface.0.mac_address}"

Due to the way the graphs are created, the cobbler profile is created after the VM is provisioned, and that causes PXE boots to fail. Would it be possible to add VM power control options, so the power state can be set when the VM is provisioned and toggled afterwards? That would allow me to create the VM in a powered-down state, grab the MAC, create the cobbler profile with the MAC, power the machine up, and have the entire provisioning process automated. I spent the past couple of days reading through the documentation and couldn't find a way to do this. If there is a solution to this, I sure would appreciate a pointer to the pertinent docs. If not, maybe you will consider adding power control support in the future?

Thanks again for your efforts!

  • Ryan

Terraform Version

$ /usr/local/bin/terraform -v
Terraform v0.11.1

  • provider.cobbler v1.0.0
  • provider.vsphere v1.1.1

vSphere Provider Version

  • provider.vsphere v1.1.1

Affected Resource(s)

vsphere_virtual_machine

Terraform Configuration Files

provider "vsphere" {
  user           = "${var.vsphere_user}"
  password       = "${var.vsphere_password}"
  vsphere_server = "${var.vsphere_server}"
  allow_unverified_ssl = true
}

provider "cobbler" {
  username = "${var.cobbler_username}"
  password = "${var.cobbler_password}"
  url      = "${var.cobbler_url}"
}

data "vsphere_datacenter" "dc" {
  name = "DC"
}

data "vsphere_datastore" "datastore" {
  name          = "san01"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
  name          = "public"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_resource_pool" "pool" {
  name          = "ProdCluster/Resources"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "cobbler_system" "kub1" {
  name         = "kub1.prefetch.net"
  hostname     = "kub1.prefetch.net"
  profile      = "fedora27-x86_64"
  name_servers = ["192.168.1.2"]

  interface {
    name        = "eth0"
    mac_address = "${vsphere_virtual_machine.kub1_vm.network_interface.0.mac_address}"
    static      = true
    ip_address  = "192.168.1.50"
    netmask     = "255.255.255.0"
  }
}

resource "vsphere_virtual_machine" "kub1" {
  name             = "kub1"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"

  num_cpus = 1
  memory   = 4096
  guest_id = "rhel7_64Guest"

  network_interface {
    network_id = "${data.vsphere_network.network.id}"
  }

  disk {
    name = "kub1.vmdk"
    size = 36
  }
}
vancluever (Contributor) commented

Hey @Matty9191, sorry, but this is something that I don't think will ever be implemented. I'm not going to go so far as to say that it doesn't have precedent anywhere else in TF-land, but the general philosophy is that an instance has only two states: on or gone. We actually enforced this for a brief period, from 0.3.0 until 1.0.0, when we took an approach similar to the aws_instance resource in the AWS provider, where the VM is always powered back on as needed.

Controlling power state would have implications for provisioners, downstream resources, and the resource workflow itself, and I'd rather not open that can of worms. What I would suggest instead is something that might need to be done at least in part anyway: use provisioners. Your first provisioner can shut down the VM with a CLI tool like govc, and a second provisioner in the cobbler_system resource can bring it back up.

If you are planning on powering this on some other way that doesn't involve provisioners, you can also statically manage the MAC address using the mac_address parameter in the network_interface sub-resource, but I'd almost say the provisioner route is the cleaner way. There's a rough sketch of the provisioner approach below.
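
To make that concrete, here's a rough, untested sketch. It assumes govc is installed locally and picks up its connection settings from the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables, and the vm/ inventory path is a guess at your folder layout, so adjust as needed:

resource "vsphere_virtual_machine" "kub1_vm" {
  # ... existing configuration ...

  # Power the VM off right after creation so it can't PXE boot before
  # the cobbler profile exists. "self" avoids a dependency cycle.
  provisioner "local-exec" {
    command = "govc vm.power -off=true vm/${self.name}"
  }
}

resource "cobbler_system" "kub1" {
  # ... existing configuration, including the MAC address interpolation ...

  # Once the profile is registered, power the VM back on to PXE boot.
  provisioner "local-exec" {
    command = "govc vm.power -on=true vm/${vsphere_virtual_machine.kub1_vm.name}"
  }
}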

Hope this helps!

Matty9191 (Author) commented

Hey Chris,

Thanks for the feedback. I'm doing something very similar to what you proposed above, and I appreciate the well-written NO. ;) Thanks again for all the hard work you've done!

  • Ryan

Matty9191 (Author) commented

Hey @vancluever,

I was experimenting with your suggestion above, and I still think there is an issue. If I create a VM resource with the following local provisioner:

  provisioner "local-exec" {
    command = "/usr/local/bin/govc vm.power -off=true vm/${vsphere_virtual_machine.kub1_vm.name}"
  }

This never runs, because the VM creation process fails when the machine can't PXE boot:

Error: Error applying plan:

1 error(s) occurred:

* vsphere_virtual_machine.kub1_vm: 1 error(s) occurred:

* vsphere_virtual_machine.kub1_vm: timeout waiting for a routeable interface

I was reading through the apply documentation, and I don't see a clean way to do this, since the vsphere_virtual_machine resource requires the VM to boot before moving forward. If you get a spare minute, could you shed a bit more light on how you foresee this process working?

vancluever (Contributor) commented

Hey @Matty9191, the network waiter currently waits on a routeable IP address, so the timeout could point to an issue with customization, or to your VM not having a default gateway. If you can't get past it, you can always tweak wait_for_guest_net_timeout. You can read more about the waiters in the vsphere_virtual_machine resource documentation.
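
As a minimal sketch (assuming a value of less than 1 disables the waiter entirely; double-check that behavior against the resource docs for your provider version):

resource "vsphere_virtual_machine" "kub1_vm" {
  # ... existing configuration ...

  # A value below 1 should disable the guest network waiter, so the
  # apply won't block waiting for a routeable IP while the VM sits
  # powered off waiting for its cobbler profile.
  wait_for_guest_net_timeout = 0
}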

Thanks!

Matty9191 (Author) commented

Thanks again @vancluever! wait_for_guest_net_timeout hit the spot! Everything is working now, and I don't foresee any side effects from skipping the routeable IP check given the way we plan to provision VMs. My plan is to use a provisioner to register the VM with our CMDB, provision the VM, PXE boot it with cobbler, then let ansible finish the final configuration. As the very last step in the process, I will update the CMDB with the fact that everything completed without error; then our application deployment tool can take over. Killer job on terraform and the cobbler and vsphere providers! I just provisioned 100 VMs as part of my POC and everything went off without a hitch. Terraform is ridiculously powerful! Nice work!

vancluever (Contributor) commented

@Matty9191 thank you for the kind words, and I'm glad everything is working for you now! It's nice to hear feedback at this kind of scale 👍
