Enter Mayu & Yochu – Our Provisioning Tools For CoreOS On Bare Metal
Despite the ongoing growth of the public cloud market we see more and more companies operating their own bare metal. These are not only big tech companies, but even startups that require significant resources, which would get prohibitively expensive on current cloud providers. However, managing and provisioning your own bare metal is not easy and with the move towards “disposable” infrastructure (or treating your servers as cattle instead of pets), provisioning machines from scratch is not just a one-off task.
At Giant Swarm we run microservice infrastructures on various setups, including our own bare metal. To make our lives easier we have developed several tools that help us automate provisioning and setting up infrastructures. Recently, GitHub published an interesting blog post about how they automated the provisioning and management of their bare metal cloud. This resonated with us a lot, as it reminded us of our own tooling. And as we strive to be transparent and share our experience with the community, we are today announcing the open source release of two of our main provisioning tools, which can help you bootstrap your own bare metal machines.
Introducing Mayu and Yochu
Our goal was to have an automated way to bootstrap bare metal nodes with a pre-configured CoreOS and then customize them with different versions of fleet, etcd, and Docker. This is split up into two steps and with that two tools: Mayu, which bootstraps CoreOS nodes, and Yochu, which customizes CoreOS installations with different versions of the above-mentioned tools.
Bootstrap CoreOS Clusters with Mayu
Mayu is a simple tool that bootstraps network-booting bare metal nodes with pre-configured CoreOS installations. You can also use it to bootstrap virtual hosts, e.g. for testing in your local environment. To do its job it acts in three server roles: as a DHCP server, as a PXE server, and as a TFTP server. Additionally, Mayu offers a TLS endpoint to manage the hosts, e.g. for collecting information about the hosts and the state they are in. It can run as a binary on any Linux machine with DNSmasq installed or as a self-contained Docker container on Docker-enabled hosts.
The final goal of a Mayu-enabled deployment is a set of machines participating in a fleet cluster. To be able to assign different roles to different nodes, you can define profiles. Each profile has a name, a quantity (the number of cluster nodes that should have this profile assigned), and a list of tags (the elements of which are directly mapped to fleet metadata tags). Once all the profiles’ quantities are matched, Mayu assigns the profile “default” to the remaining nodes. The default profile might be used, for example, to assign new nodes to a “testing pool” first and then add them to a production cluster only after testing.
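To make this concrete, a profile definition along these lines could look like the following sketch. Note that the key names here are assumptions derived from the description above, not Mayu's exact configuration schema — check the project's README for the real format.

```yaml
# Hypothetical sketch of Mayu profile definitions.
# Key names are illustrative, not Mayu's exact schema.
profiles:
  - name: core            # role name for these nodes
    quantity: 3           # number of nodes that get this profile
    tags:
      - role-core=true    # mapped directly to fleet metadata
  - name: testing
    quantity: 2
    tags:
      - role-testing=true
# Any further nodes fall back to the "default" profile.
```

Once three nodes have been assigned `core` and two have been assigned `testing`, every additional node would receive the default profile.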
Further, you can customize the Cloud-Configs that the nodes get set up with by adding templates that get directly injected into the final Cloud-Configs. We use this for example to add SSH keys and to set up the nodes with the right network configuration.
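As an illustration, a template fragment injected into the final Cloud-Config might add SSH keys like this. This is a generic CoreOS Cloud-Config sketch (the key and value are placeholders), not Mayu's actual template syntax:

```yaml
#cloud-config

# Generic CoreOS Cloud-Config fragment; the key below is a placeholder.
ssh_authorized_keys:
  - "ssh-rsa AAAA... ops@example.com"
```

Mayu merges such templates into the Cloud-Config each node boots with, so every provisioned machine comes up with the same keys and network settings.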
When bootstrapping new clusters, Mayu saves the cluster state in the form of JSON files in a cluster directory. By default, Mayu treats this directory as a git repository and commits every change, so that you have an audit log of every state change of each machine, identified by its serial number.
Here is an example of a commit history:
2015-10-08 19:14:36 +0200 => d89aff66c71b: updated state to running
2015-10-08 19:13:28 +0200 => d89aff66c71b: updated state to installed
2015-10-08 19:10:54 +0200 => d89aff66c71b: updated host state to installing
2015-10-08 19:10:54 +0200 => d89aff66c71b: updated host connected nic
2015-10-08 19:10:54 +0200 => d89aff66c71b: updated host macAddress
2015-10-08 19:10:53 +0200 => d89aff66c71b: updated host profile and metadata
2015-10-08 19:10:53 +0200 => d89aff66c71b: updated host InternalAddr
2015-10-08 19:10:53 +0200 => d89aff66c71b: updated with predefined settings
2015-10-08 19:10:53 +0200 => d89aff66c71b: host created
2015-10-08 19:09:19 +0200 => generated etcd discovery url
2015-10-08 19:09:19 +0200 => initial commit
Mayu comes with its own mayuctl client, which you can use to manage your clusters. It lets you list a catalogue of all your machines including their IPs, serial numbers, the profiles they were set up with, their CoreOS versions, their current state, as well as their last boot times. You can go deeper and request the status of a single machine, which reveals some additional information about that machine.
The example below shows the cluster node details provided by mayuctl for a machine:

$ mayuctl status de75712c-20d6-4fda-89cc-205564159a3d
Serial:              de75712c-20d6-4fda-89cc-205564159a3d
IP:                  10.0.3.31
IPMI:                <nil>
Provider ID:
Macs:                00:16:3e:6b:ad:2e
Cabinet:             0
Machine on Cabinet:  0
Hostname:            0000e36d4651065a
MachineID:           0000e36d4651065aa5e6f53350b81a98
ConnectedNIC:        ens3
Profile:             core
State:               "running"
Metadata:            role-core=true
CoreOS:              681.2.0
Mayu:                0.7.1
Yochu:               0.18.0
Docker:              1.6.2
Etcd:                v2.2.1-gs-1
Fleet:               v0.11.3-gs-2
LastBoot:            2016-01-27 01:11:47.581283933 +0100 CET
Enabled:             true
With mayuctl you can further set some of this information, e.g. the Provider ID, to document your bare metal setup in more detail. You can also mark machines for reinstallation, for example to provision them with a new CoreOS version.
Customize your CoreOS Installation with Yochu
In the output above you can already see the integration between Mayu and Yochu: Mayu shows you not only which version of Yochu was deployed, but also the Docker, etcd, and fleet versions. As mentioned in the introduction, Yochu is our tool for provisioning fresh CoreOS machines with custom versions of these tools. “Why would I need custom versions of those?”, you might ask. In our experience there are several use cases that pop up in production environments.

The most common might be provisioning a custom Docker version. Here there are actually two scenarios: First, we might want to update our CoreOS version, but not the Docker version we’re using, as we have already tested that one in our infrastructure. This way an upgrade process can be more granular and controlled. Second, we might want to stick to our CoreOS version (e.g. stable), but need a newer Docker version because of some added functionality or bugfix, and we don’t want to wait until it’s available in the stable channel. A more special use case is, for example, that we maintain a customized version of fleet (which we are working together with CoreOS on to get merged upstream) and want to use that instead of the one that ships with CoreOS by default. Yochu helps us deliver these custom binaries to our hosts transparently and without touching the actual CoreOS installation below.
Yochu is set up as a single unit file that runs on every boot of the CoreOS machine. On boot it fetches the designated versions of Docker, fleet, and etcd and makes CoreOS use those instead of the default ones. Yochu can run in different environments (even AWS) and is not bound to Mayu. However, when run in tandem with Mayu it can fetch those tools directly from Mayu and thus does not require any additional storage or connectivity.
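Conceptually, such a “run on every boot, before the services start” unit could be sketched like this. The unit name, paths, and service names below are illustrative assumptions, not Yochu's actual unit file:

```ini
# Hypothetical sketch of a Yochu-style oneshot unit.
# Names and paths are illustrative, not taken from Yochu itself.
[Unit]
Description=Fetch custom docker/etcd/fleet binaries
# Run before the services whose binaries get swapped in.
Before=docker.service etcd2.service fleet.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/opt/bin/yochu

[Install]
WantedBy=multi-user.target
```

The key design point is the ordering: because the unit runs as a oneshot before Docker, etcd, and fleet start, those services pick up the fetched binaries without the underlying CoreOS image being modified.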
Bootstrap Some Metal, Everyone
We know that by now Mayu is not the only PXE-enabled bootstrapping tool out there. You might have read about the recently released coreos-baremetal, which is quite similar in functionality to the bootstrapping of bare metal machines we do with Mayu. And indeed both help you bootstrap CoreOS clusters through PXE boot. However, in working with our own bare metal we added functionality that helps us also keep track of and manage our machines with the same tool. You can see that especially in the git integration as well as the deep information that you can get with mayuctl. Further, the mentioned integration with Yochu extends the functionality towards even more customization of your CoreOS installation.
If that sounds good to you, check out Mayu and Yochu on GitHub. Play around with them and set up your own clusters. We have lots of ideas where we could go with these tools. If you have feedback or ideas, we are happy to hear from you. Feel free to contribute in the form of issues and PRs. You can also join us on our mailing list (giantswarm) or on IRC (#giantswarm on freenode.org) to have a little chat.
If you are looking for a tool like Mayu, but geared towards AWS, you just need to be a bit more patient. Our internal tool for that is about to be open sourced. Also, if you’re looking for a simple way to test out Mayu or other PXE-enabled provisioning tools, we’ll be releasing our local VM automation tool soon, too.