The Consul/Vault clusters

1. Goals

We are going to create our Consul and Vault clusters:

To secure communications from the very first steps, the Ansible playbook will bootstrap a PKI using Ansible modules: the CAs and the certificates will be generated as plain files on a designated host (here, the bastion host).
Of course, we will remove them afterwards.

We are going to create a lot of users in Vault: the idea is to partition responsibilities as finely as we can.

In a real-life use case, you would share each password only with the people or projects that need it.

This implies a workflow where both the operators (you) and the projects must work together, because the latter need the former to add new hosts to the Consul cluster.

2. Check that your configuration is in place

Previously, you edited some files and entered your own configuration (your domain, your AWS regions, etc.).
You can check that everything is in place:

[bastion] ~/
$ cat ansible_playbooks/infrasecrets/inventories/demo/group_vars/all/project.yml
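
Purely as an illustration, the file could look something like this (all keys and values here are hypothetical; yours will reflect the configuration you entered earlier):

```yaml
# Hypothetical example only; keys and values depend on your earlier edits.
my_domain: example.com          # your domain
my_aws_region1: eu-west-3       # your first AWS region
my_aws_region2: eu-west-1       # your second AWS region
```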

3. Prepare your environment

Activate the virtualenv (created by the previous Ansible playbook) and export the API Keys and Tokens you generated before.

[bastion] ~/
$ source ansible_virtualenv/bin/activate
$ export AWS_ACCESS_KEY_ID=CHANGE_WITH_KEY_ID
$ export AWS_SECRET_ACCESS_KEY=CHANGE_WITH_SECRET_KEY
$ export CLOUDFLARE_EMAIL=CHANGE_WITH_EMAIL
$ export CLOUDFLARE_TOKEN=CHANGE_WITH_TOKEN
$ export BOTO_USE_ENDPOINT_HEURISTICS=True
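
As a quick sanity check (a minimal sketch, not part of the playbooks), you can make sure none of these variables is missing before going further:

```shell
# Warn about any required environment variable that is not set.
missing=""
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY \
         CLOUDFLARE_EMAIL CLOUDFLARE_TOKEN BOTO_USE_ENDPOINT_HEURISTICS; do
  printenv "$v" > /dev/null || missing="$missing $v"
done
if [ -z "$missing" ]; then echo "environment OK"; else echo "missing:$missing"; fi
```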

The last export helps the Python library boto work with some AWS regions.

Don’t forget to reactivate the virtualenv and re-export the variables if you leave your session.

4. Terraform

We are first going to create the core network (aka infra): it is used by all the projects (except bastion).

[bastion] (ansible_virtualenv) ~/
$ cd ~/ansible_playbooks/infrasecrets/terraform/demo/core-network
$ terraform init
$ terraform apply -var-file '../vars_network.tf'
$ cd ..

We are then going to create the network used by the infrasecrets project (same VPC, but different subnets).

[bastion] (ansible_virtualenv) ~/ansible_playbooks/infrasecrets/terraform/demo
$ cd infrasecrets/network
$ terraform init
$ terraform apply -var-file '../../vars_network.tf'
$ cd ../..

Finally we are going to create the Consul and Vault hosts:

[bastion] (ansible_virtualenv) ~/ansible_playbooks/infrasecrets/terraform/demo
$ cd infrasecrets/system
$ terraform init
$ terraform apply -var-file '../../vars_network.tf'
$ cd ../..
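
The three Terraform runs above all follow the same pattern. As a recap, this dry-run sketch only prints the commands in dependency order (remove the echo to actually execute them; paths match the steps above):

```shell
# Dry run: print each stack's commands in dependency order.
base="$HOME/ansible_playbooks/infrasecrets/terraform/demo"
for stack in "core-network:../vars_network.tf" \
             "infrasecrets/network:../../vars_network.tf" \
             "infrasecrets/system:../../vars_network.tf"; do
  dir="${stack%%:*}"; vars="${stack#*:}"
  echo "cd $base/$dir && terraform init && terraform apply -var-file '$vars'"
done
```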

You can see the result in the AWS Console:

  • the instances: AWS Infrasecrets instances in Region1 and in Region2
  • the network: AWS Infrasecrets network in Region1 and in Region2

5. Ansible

It’s time now to bootstrap our clusters with Ansible.

Terraform created the ~/ansible_playbooks/infrasecrets/inventories/demo/hosts_infrasecrets.lst and ~/ansible_playbooks/infrasecrets/inventories/demo/extra_vars_terraform.yml files with everything Ansible needs:

  • the private IP addresses of the Consul and Vault hosts
  • the name of the chosen primary datacenter for the Consul cluster: the name of region1 (for me it’s eu-west-3)
  • the list of the initial AWS regions: this will be used by the PKI to create a list of allowed domain names
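
For illustration only, the generated extra-vars file might resemble the following (hypothetical key names and values; the real content is written by Terraform):

```yaml
# Hypothetical sketch of extra_vars_terraform.yml; do not copy verbatim.
my_primary_datacenter: eu-west-3   # name of region1
my_initial_aws_regions:            # used by the PKI for allowed domain names
  - eu-west-3
  - eu-west-1
```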

To execute Ansible, you will need to replace the following Ansible extra-vars parameters:

  • my_aws_access_key_id: this is the Access Key ID of the AWS ops account with read-only access (PUT THE R/O ACCOUNT ID)
  • my_aws_secret_access_key: this is the Secret Access Key of the AWS ops account with read-only access (PUT THE R/O ACCOUNT KEY)

  • my_vault_admin_password: this is the password of the admin user that will be created in the 2 Vault clusters (all privileges)

  • my_vault_ops_password: this is the password of the ops user that will be created in the 2 Vault clusters (less privileges)

  • my_vault_a_create_approle_password: this is the password of the a-create-approle user that will be created in the 2 Vault clusters (only used to create AppRoles)

  • my_vault_a_deploy_role_password: this is the password of the a-deploy-role user that will be created in the 2 Vault clusters (only used to deploy AppRoles Role IDs)

  • my_vault_a_deploy_secret_password: this is the password of the a-deploy-secret user that will be created in the 2 Vault clusters (only used to deploy AppRoles Secret IDs and Secrets)

  • my_vault_a_deploy_echo_role_password: this is the password of the a-deploy-echo-role user that will be created in the 2 Vault clusters (only used to deploy “echo project” AppRoles Role IDs)

  • my_vault_a_deploy_echo_secret_password: this is the password of the a-deploy-echo-secret user that will be created in the 2 Vault clusters (only used to deploy “echo project” AppRoles Secret IDs and Secrets)

Write the passwords down somewhere: we will need them later.


The last two passwords are not part of the “infrasecrets” project and are not needed to deploy the Consul and Vault clusters.
They could therefore be created later; in a real use case, they would be shared with the people from the “echo” project so that they can deploy parts of their project (as we will see).

But we are creating them now, as this reduces the number of commands to execute.
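
Before launching the playbook, you will need seven passwords. One way to generate them (a sketch, assuming openssl is available on the bastion):

```shell
# Generate a random password for each Vault user created by the playbook.
users="admin ops a-create-approle a-deploy-role a-deploy-secret a-deploy-echo-role a-deploy-echo-secret"
passwords=$(for u in $users; do printf '%s: %s\n' "$u" "$(openssl rand -base64 18)"; done)
echo "$passwords"
```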

[bastion] (ansible_virtualenv) ~/ansible_playbooks/infrasecrets/terraform/demo/infrasecrets/system
$ cd ~/ansible_playbooks/infrasecrets

[bastion] (ansible_virtualenv) ~/ansible_playbooks/infrasecrets
$ ansible-playbook BASTION_init_cluster.yml -i inventories/demo/hosts_infrasecrets.lst -D --force-handlers \
-e @inventories/demo/extra_vars_terraform.yml \
-e "my_aws_secret_access_key=CHANGE_WITH_SECRET_KEY" \
-e "my_aws_access_key_id=CHANGE_WITH_ACCESS_KEY" \
-e "my_vault_admin_password=CHANGE_WITH_ADMIN_PASSWORD" \
-e "my_vault_ops_password=CHANGE_WITH_OPS_PASSWORD" \
-e "my_vault_a_create_approle_password=CHANGE_WITH_CREATE_APPROLE_PASSWORD" \
-e "my_vault_a_deploy_role_password=CHANGE_WITH_DEPLOY_ROLE_PASSWORD" \
-e "my_vault_a_deploy_secret_password=CHANGE_WITH_DEPLOY_SECRET_PASSWORD" \
-e "my_vault_a_deploy_echo_role_password=CHANGE_WITH_DEPLOY_ECHO_ROLE_PASSWORD" \
-e "my_vault_a_deploy_echo_secret_password=CHANGE_WITH_DEPLOY_ECHO_SECRET_PASSWORD"

This can take at least 20 minutes, so it’s time for a second coffee break!

6. Save the summary

At the end of the Ansible run, a summary will list the generated keys:

KEEP THEM SAFE!

7. Add the bastion host to the Consul cluster

We want the bastion host to be part of the Consul cluster: it will access all Consul and Vault resources (secrets, AppRoles, etc.), so we will be able to use it to deploy all the other hosts.

To execute Ansible, you will need to replace the following Ansible extra-vars parameters:

  • my_vault_ops_password: this is the password of the ops user

[bastion] (ansible_virtualenv) ~/ansible_playbooks/infrasecrets
$ ansible-playbook BASTION_install.yml \
-i inventories/demo/ec2.py -i inventories/demo/hosts_ec2.lst \
-D --force-handlers \
-e @inventories/demo/extra_vars_terraform.yml \
-e "my_vault_ops_password=CHANGE_WITH_OPS_PASSWORD" \
-l bastion

We are using an Ansible dynamic inventory script to discover the hosts we have just created.

Go here to see how Ansible will use the bastion to create and provision hosts.

8. Check the clusters

To check that everything is working, you can type:

[bastion] (ansible_virtualenv) ~/
$ consul members

It should return the list of all known Consul agents in the region1 datacenter.

9. Access the UIs

You can also access the different UIs. From your local machine, type:

(ansible_virtualenv) ~/demo_big_infra
$ ssh -L 18500:127.0.0.1:18500 -L 18200:127.0.0.1:18200 -L 28200:127.0.0.1:28200 persecutor@bastion-eu-west-3a-0.terror.ninja

Then you can use your browser and access all UIs.

Here they are:

  • Consul Region1
  • Consul Region2
  • Vault-Secrets
  • Vault-PKI

10. Next page!

We are done! Let’s create the AMIs for our echo service infrastructure.