Routing and activation

We have now deployed our hosts and our network (Cloudflare and AWS load-balancers are also up).
But the service is not yet available.

We still need to tell Consul how to route requests between the services.

1. Consul Intentions

First, we need to allow the communication between services using Connect Intentions.

To execute Ansible, you will need to replace the following Ansible extra-vars parameters:

  • my_vault_secrets_admin_password: this is the password of the ops user
[bastion] (ansible_virtualenv) ~/ansible_playbooks/echo
$ cd ~/ansible_playbooks/infrasecrets

[bastion] (ansible_virtualenv) ~/ansible_playbooks/infrasecrets
$ ansible-playbook BASTION_configs_consul.yml \
-i inventories/demo/hosts_infrasecrets.lst \
-D --force-handlers \
-e @inventories/demo/extra_vars_terraform_echo_socat_blue_one.yml \
-e "my_vault_secrets_admin_username=ops" \
-e "my_vault_secrets_admin_password=CHANGE_WITH_OPS_PASSWORD" \
-e "my_vault_secrets_admin_consul_role_name=vault-policy-intention" \
-t Project::infrasecrets::consul::login \
-t Project::infrasecrets::consul::intentions \
-l consul_server

Managing Consul intentions requires special privileges that cannot be narrowed down the way regular ACLs can.
That’s why they need to be created with a privileged account.

Terraform created the inventories/demo/extra_vars_terraform_echo_socat_blue.yml files (in both echo and infrasecrets directories) with everything Ansible needs:

  • the intention
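Under the hood, the intentions tag of the playbook boils down to allowing Connect traffic from the consumer side of the mesh to the socat service. With the consul CLI, the equivalent would look roughly like this (the service names here are hypothetical — the real ones come from the Terraform-generated extra-vars file):

```shell
# Hypothetical equivalent of what the playbook creates: allow the
# consumer side of the Connect mesh to reach the socat service.
# Requires a token with intentions write privileges — hence the
# privileged ops account used above.
consul intention create -allow echo-ingress socat
```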

2. Consul Prepared Queries

We also need Prepared Queries: a complex routing query that finds the socat services running the version of our choice and manages fail-over between datacenters.

To execute Ansible, you will need to replace the following Ansible extra-vars parameters:

  • my_vault_secrets_admin_password: this is the password of the a-deploy-echo-secret user
[bastion] (ansible_virtualenv) ~/ansible_playbooks/infrasecrets
$ ansible-playbook BASTION_configs_consul.yml \
-i inventories/demo/hosts_infrasecrets.lst \
-D --force-handlers \
-e @inventories/demo/extra_vars_terraform_echo_socat_blue_one.yml \
-e "my_vault_secrets_admin_username=a-deploy-echo-secret" \
-e "my_vault_secrets_admin_password=CHANGE_WITH_DEPLOY_ECHO_SECRET_PASSWORD" \
-e "my_vault_secrets_admin_consul_role_name=vault-policy-echo-prepared-query" \
-t Project::infrasecrets::consul::login \
-t Project::infrasecrets::consul::prepared_queries \
-l consul_server

Terraform created the inventories/demo/extra_vars_terraform_echo_socat_blue.yml files (in both echo and infrasecrets directories) with everything Ansible needs:

  • the prepared queries: with the current socat service version
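A prepared query is registered against the Consul HTTP API. As a rough sketch of what the playbook ends up creating (the query name, tag, and datacenter names are hypothetical — the real values come from the Terraform-generated extra-vars file), it could look like:

```shell
# Hypothetical sketch of the prepared query: match socat instances
# carrying the current version tag, and fail over to the other
# datacenter when no healthy instance remains locally.
curl -X POST --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
  http://127.0.0.1:8500/v1/query \
  --data '{
    "Name": "echo-socat",
    "Service": {
      "Service": "socat",
      "Tags": ["v1.0.0"],
      "Failover": { "Datacenters": ["region2"] }
    }
  }'
```

Bumping the version later then only means re-registering the query with a new tag, which is exactly what the extra-vars file drives.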

3. Light my fire

Now it’s time to check everything is working as expected!

Terraform is configured to set region1 as the default region in the Cloudflare DNS load-balancer.
Thus, under normal running conditions, the socat services in region1 will answer.

The echo service will answer:

  • on port 8181 in clear text
  • on port 8143 over TLS
  • on port 8188 for the Cloudflare monitoring health-check

The socat services will:

  • echo any message
  • add a “header” with the internal IP address of the host and the “version” of the service (initially 1.0.0)
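To make the expected behavior concrete, here is a minimal Python sketch (illustration only, not the real socat unit) of what each socat service does: send a header line `<internal-ip>+v<version>` on connect, then echo everything back. The IP and version values are placeholders.

```python
# Minimal sketch of the socat services' behavior: send a header line
# "<internal-ip>+v<version>" on connect, then echo every byte received
# back to the client until it hangs up.
import socket
import threading

VERSION = "1.0.0"  # initial version, as deployed above

def handle(conn: socket.socket, host_ip: str) -> None:
    # The "header": internal IP of the host plus the service version.
    conn.sendall(f"{host_ip}+v{VERSION}\n".encode())
    while data := conn.recv(1024):  # echo any message
        conn.sendall(data)
    conn.close()

def serve(port: int = 8181, host_ip: str = "ip-10-3-1-7") -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn, host_ip),
                         daemon=True).start()
```

Connecting with `nc` to such a server would print the header line first, then echo every line you type — which is exactly what the transcripts below show.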

On your workstation, just launch:

[workstation] ~/
$ nc echo.terror.ninja 8181
ip-10-3-1-7+v1.0.0
test1
test1
^C

The first connection might be slow, so wait a few seconds.

To test with SSL:

[workstation] ~/
$ openssl s_client -connect echo.terror.ninja:8143
CONNECTED(00000003)
depth=2 C = US, O = Amazon, CN = Amazon Root CA 1
verify return:1
depth=1 C = US, O = Amazon, OU = Server CA 1B, CN = Amazon
verify return:1
depth=0 CN = echo.terror.ninja
verify return:1
---
Certificate chain
 0 s:CN = echo.terror.ninja
   i:C = US, O = Amazon, OU = Server CA 1B, CN = Amazon
 1 s:C = US, O = Amazon, OU = Server CA 1B, CN = Amazon
   i:C = US, O = Amazon, CN = Amazon Root CA 1
 2 s:C = US, O = Amazon, CN = Amazon Root CA 1
   i:C = US, ST = Arizona, L = Scottsdale, O = "Starfield Technologies, Inc.", CN = Starfield Services Root Certificate Authority - G2
 3 s:C = US, ST = Arizona, L = Scottsdale, O = "Starfield Technologies, Inc.", CN = Starfield Services Root Certificate Authority - G2
   i:C = US, O = "Starfield Technologies, Inc.", OU = Starfield Class 2 Certification Authority
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFfz...
-----END CERTIFICATE-----
subject=CN = echo.terror.ninja

issuer=C = US, O = Amazon, OU = Server CA 1B, CN = Amazon

---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 5472 bytes and written 455 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    ...
    Verify return code: 0 (ok)
    Extended master secret: no
---
ip-10-3-1-27+v1.0.0
testssl1
testssl1
^C

Retry a couple of times and you should see the two different IP addresses of the two socat hosts:

[workstation] ~/
$ nc echo.terror.ninja 8181
ip-10-3-1-27+v1.0.0
test2
test2
^C

$ nc echo.terror.ninja 8181
ip-10-3-1-7+v1.0.0
test3
test3
^C

4. Next page!

The initial configuration is working: let’s break it and test the fail-over.