appserver

Description

This example uses private load balancing to serve an application from a small number of pay-as-you-go virtual machines running nginx.

The load balancer registers itself in a private DNS zone that is created as part of the example.

Each web server simply returns its own name, showing which server handled a request. The balancer's front end listens on port 80, and its back end delivers all traffic to the nginx web servers on port 8080.

This example also creates a jump host that can be used to issue a curl request against the private balancer's front end (this is how we validate the behavior in Tuono's continuous integration environment).
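
That CI check can be sketched as a small shell helper. The ssh invocation, jump host address, and key path in the comment below are placeholders, and `fake_fetch` is a hypothetical local stand-in; only the balancer FQDN (`server.ci.eng.tuono.dev`) comes from the blueprint's dns section.

```shell
# Count how many distinct back ends answer N requests. "$@" is the command
# that fetches one response; against the real deployment it would be
# something like:
#   ssh -i <key> adminuser@<jumphost-ip> curl -s http://server.ci.eng.tuono.dev/
# (the jump host address and key path are placeholders)
distinct_responders() {
    n=$1; shift
    for _ in $(seq "$n"); do "$@"; done | sort -u | wc -l
}

# Local stand-in for the real fetch, alternating between two server names
# the way a round-robin balancer with two back ends would:
i=0
fake_fetch() { i=$((i + 1)); echo "appserver-$((i % 2 + 1))"; }

distinct_responders 4 fake_fetch   # prints 2: both servers responded
```

With the default of two app servers, seeing two distinct names confirms the balancer is spreading traffic across the back ends.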

This example currently supports only AWS; Azure support will be added once load balancing is available there.

Concepts

The following concepts are present in this example:

  • DNS

  • HTTP

  • Linux

  • Load Balancing

  • Nginx

  • Userdata

  • Variables

  • Virtual Machine

Venues

This example is regularly tested against:

aws

Release Notes

2.0

  • Added a private DNS zone and configured the balancer to register a record in it. On AWS this is an A record ALIAS to the balancer, for both public and private hosted zones.

1.0

  • Initial release.

Blueprint

---
variables:
  admin_username:
    description: >-
      The administrative username for SSH access to the linux virtual machines.
      Note that the web servers are not publicly accessible however they still
      require an administrative user configuration.
    type: string
    default: adminuser
  admin_public_key:
    description: >-
      The administrative public key for SSH access to the linux virtual machines.
      Note that the web servers are not publicly accessible however they still
      require an administrative user configuration.
    type: string
  availability_zones:
    description: >-
      Indicates how many availability zones to spread the virtual machines across.
    type: integer
    default: 2
    min: 1
  app_servers:
    description: >-
      The number of web servers to create.
    type: integer
    default: 2
    min: 1

location:
  region:
    my-region:
      aws: eu-west-3
      azure: northeurope
  folder:
    tuono-appserver:
      region: my-region

networking:
  network:
    appserver:
      dns: testzone
      range: 10.0.0.0/16
      scope: public
  dns:
    testzone:
      fqdn: ci.eng.tuono.dev
  subnet:
    appzone-(( count )):
      # application servers live here
      count: (( availability_zones ))
      range: 10.0.10(( count )).0/24
      network: appserver
      firewall: backend-access
      zone: (( count ))
    jumpzone:
      # jumphost lives here
      range: 10.0.1.0/24
      network: appserver
      firewall: jumpzone-access
      scope: public
  firewall:
    backend-access:
      rules:
        - services: internal-http-dev
          from: networking.network.appserver
          to: self
        - services: internal-http-dev
          from: self
          to: networking.network.appserver
    jumpzone-access:
      rules:
        - from: any
          to: self
          protocols: jumpaccess
  protocol:
    jumpaccess:
      ports:
        - port: 22
          proto: tcp
  service:
    dmz-http:
      # local traffic (from the jump host)
      port: 80
      protocol: http
    internal-http-dev:
      # traffic for the app servers internally
      port: 8080
      protocol: http
  balancer:
    appserver-balancer:
      network: appserver
      scope: private
      purpose: testing
      routes:
        - from: dmz-http
          to: internal-http-dev
      dns:
        # server.ci.eng.tuono.dev
        domain: testzone
        hostname: server

compute:
  image:
    bitnami:
      publisher: bitnami
      product: nginxstack
      sku: 1-9
      venue:
        aws:
          image_id: ami-06491c3a2d933c1ca

  vm:
    appserver:
      count: (( app_servers ))
      cores: 1
      memory: 1 GB
      image: bitnami
      nics:
        default:
          ips:
            - private:
                type: dynamic
          firewall: backend-access
          provides: internal-http-dev
          subnet: appzone-(( 1 + ((count - 1) % availability_zones) ))
      # until ch3883 is resolved on azure you also need to specify the zone here
      zone: (( 1 + ((count - 1) % availability_zones) ))
      configure:
        admin:
          # required to initiate a deploy on Azure, but ignored on AWS
          # when userdata is present
          username: (( admin_username ))
          public_key: (( admin_public_key ))
        userdata:
          # why not use cloud-init?
          # answer: the azure bitnami nginx marketplace image has disabled cloud-init
          #         so we have to use a shell script to make a portable example
          type: shell
          content: |
            #!/bin/sh

            ### debugging mode so we can see what is happening in the logs
            set -x

            ### set up administrative user (idempotent)
            userid=$(id -u (( admin_username )))
            if [ -z "$userid" ]; then
                set -e
                adduser --gecos "" --disabled-password (( admin_username ))
                cd ~(( admin_username ))
                mkdir .ssh
                chmod 700 .ssh
                echo "(( admin_public_key ))" > .ssh/authorized_keys
                chmod 600 .ssh/authorized_keys
                chown -R (( admin_username )):(( admin_username )) .ssh
                usermod -aG sudo (( admin_username ))
                echo "(( admin_username ))   ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
                set +e
            fi

            ### set up nginx by replacing the home page
            echo 'appserver-(( count ))' > /opt/bitnami/nginx/html/index.html

            ### make it run on port 8080
            sed -i 's/listen  80;/listen  8080;/' /opt/bitnami/nginx/conf/nginx.conf

            ### restart nginx due to the port change
            /opt/bitnami/ctlscript.sh restart nginx

    jumphost:
      # Right now this is really just a SSH target for a curl command to check the balancer
      # Eventually we want to use DNS so this becomes a proxy to the private balancer for testing
      cores: 1
      memory: 1 GB
      image: bitnami
      nics:
        default:
          ips:
            - private:
                type: dynamic
              public:
                type: dynamic
          firewall: jumpzone-access
          consumes: networking.service.dmz-http
          subnet: jumpzone
      configure:
        admin:
          # required to initiate a deploy on Azure, but ignored on AWS
          # when userdata is present
          username: (( admin_username ))
          public_key: (( admin_public_key ))
        userdata:
          # why not use cloud-init?
          # answer: the azure bitnami nginx marketplace image has disabled cloud-init
          #         so we have to use a shell script to make a portable example
          type: shell
          content: |
            #!/bin/sh

            ### debugging mode so we can see what is happening in the logs
            set -x

            ### set up administrative user (idempotent)
            userid=$(id -u (( admin_username )))
            if [ -z "$userid" ]; then
                set -e
                adduser --gecos "" --disabled-password (( admin_username ))
                cd ~(( admin_username ))
                mkdir .ssh
                chmod 700 .ssh
                echo "(( admin_public_key ))" > .ssh/authorized_keys
                chmod 600 .ssh/authorized_keys
                chown -R (( admin_username )):(( admin_username )) .ssh
                usermod -aG sudo (( admin_username ))
                echo "(( admin_username ))   ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
                set +e
            fi
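
Two patterns from the userdata scripts above can be exercised in isolation. The `ensure_user` helper below is hypothetical (it echoes instead of creating users), and the config line fed to sed is a sample standing in for the real nginx.conf.

```shell
# Pattern 1: the idempotence guard around user creation. The userdata only
# runs adduser when `id -u` reports no such user; the same check, echoing
# instead of creating:
ensure_user() {
    if id -u "$1" >/dev/null 2>&1; then
        echo "exists: $1"
    else
        echo "would create: $1"   # adduser --gecos "" --disabled-password ...
    fi
}
ensure_user root                 # prints: exists: root
ensure_user no-such-user-xyz     # prints: would create: no-such-user-xyz

# Pattern 2: the sed substitution that moves nginx from port 80 to 8080,
# applied to a sample config line instead of the real nginx.conf:
printf 'listen  80;\n' | sed 's/listen  80;/listen  8080;/'
# prints: listen  8080;
```

The guard is what makes re-running the userdata harmless: the user setup block is skipped entirely once the administrative user exists.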
