August 23, 2021

Cloud Orchestration With ZendPHP

In a previous blog post, we discussed PHP orchestration with ZendPHP Docker images and defined the term orchestration as:

The process of describing the resources that make up a computing system, and how they interact.

In this post, we'll switch from discussing containerization and instead focus on virtual machine images and cloud deployment.


What Is the Cloud?

You've heard of the cloud by now, and likely know some of the major enterprise cloud providers: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), DigitalOcean, and more. These providers bring together a number of services that essentially let you rent computing resources in their data centers. The services generally include:

  • A "compute cloud" made up of "machine images." These are virtual machines as a service.
  • Relational databases.
  • Filesystem storage.
  • Caching services (e.g., Redis, Memcached, and more).
  • Networking services.
  • Load balancing services.

And often much, much more; at the time of writing this post, the AWS console menu spans four columns and still requires vertical scrolling to list everything it provides!

Most commonly, consumers deploying applications to cloud providers will use a combination of machine images, databases, and caching, and wire it all together using networking services.

On top of these services, cloud providers offer web APIs, along with command line tools that consume those APIs, so that you can manage and automate the services you use. These tools are what give consumers orchestration capabilities.
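
As a quick, hedged illustration: the AWS CLI is one such tool, letting a single command query your compute resources via the same APIs the console uses (the region and query shown here are just examples):

$ aws ec2 describe-instances --region us-east-2 \
    --query 'Reservations[].Instances[].[InstanceId,State.Name]'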



Provisioning Tools

Cloud machine images provide a basis on which to deploy your application. At a minimum, they include an operating system installation. Some may install and configure additional services, such as a language runtime, a web server, etc. But unless you have built the image yourself, they will never contain the full application, which means you must provision your application, and any dependencies it may have, yourself.

Over the years, DevOps teams have developed a variety of tools such as Chef, Puppet, and Ansible. These tools provide configuration management, which details the steps necessary to put a resource into a known state. In the early days, they were used primarily to take a default installation of an operating system and apply the configuration and resources necessary to make the machine ready for production deployment. Over time, they have evolved to allow describing full systems.

With cloud providers being a common target for provisioning, other tools have tackled the problem as well, with cloud as their primary focus. Two tools in particular, both developed by HashiCorp, have become fairly standard for cloud deployment: Packer and Terraform. Packer is used to automate the creation of machine images, while Terraform provides orchestration. HashiCorp describes them as "infrastructure as code": each uses a text-based description of infrastructure that is easily versioned via source control.

I like to think that Packer is to Terraform what Docker is to Kubernetes (or Compose or Stack): Packer and Docker help you describe and build the resources you will deploy, while Terraform, Compose, Stack, and Kubernetes help you wire the resources together and deploy them.


Packer

Packer's sole purpose is to build virtual machine images. Its strength is that it can build images for different environments using the same specifications; if you want the same image for VirtualBox, VMware, AWS, Azure, or others, you can use Packer to accomplish it. In fact, you can even use it to build Docker images!
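
For instance, a minimal sketch of a Docker target in a Packer template might look like the following (the base image here is just an example):

source "docker" "ubuntu" {
  image  = "ubuntu:20.04"
  commit = true
}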

Packer uses "HashiCorp Configuration Language" (HCL) to describe images and their configuration; the same language is used for Terraform, which means you can learn it once and re-use it everywhere. HCL is declarative, and most closely resembles JSON in structure, though with a few minor differences.
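
To give a feel for the syntax, here is a small, purely illustrative snippet; the block type, labels, and attribute names below are generic placeholders, not from a real configuration:

# A block has a type, optional labels, and a body of key/value attributes
block_type "first_label" "second_label" {
  string_attribute = "value"
  list_attribute   = ["one", "two"]

  nested_block {
    enabled = true
  }
}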

Within a configuration, you will generally rely on plugins. Plugins give you additional configuration possibilities, and do the actual work of translating that configuration into the various artifacts or performing provisioning tasks. Because Packer is pluggable, if you are already using technologies such as Chef, Puppet, or Ansible, you can continue to use them with Packer! It also means that if you are targeting specific machine or platform types (e.g., VMware or AWS EC2), you can do so by adding the appropriate plugin for building the image.

When defining a configuration, you will:

  • Indicate any required plugins.
  • Define variables. Variables can be passed in at build time in order to customize the image built (see the sketch following this list).
  • Define sources for the image. As an example, this might be a local VM you want to use as a base VM, or it might be configuration for querying a cloud provider for a specific virtual machine.
  • Define the build. You will specify which source image to use, and any provisioners required to put the image into the required state.
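
As an example, a build-time variable declaration might look like this; it's a sketch mirroring the PHP_VERSION variable passed on the command line later in this post (the description text is our own):

# Passed at build time via: packer build --var PHP_VERSION=7.4 ...
variable "PHP_VERSION" {
  type        = string
  default     = "7.4"
  description = "ZendPHP version to install in the image"
}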

You can use any provisioning tools you are comfortable with, including Chef, Puppet, or Ansible. At its simplest, you might only provision files, or run one or more shell scripts. The ability to keep image generation as simple or as complex as you need is a huge benefit!

As an example, in a recent image, I defined my build as follows:

build {
  sources = ["source.amazon-ebs.zendphp"]

  provisioner "file" {
    source      = "./assets/"
    destination = "/tmp"
  }

  provisioner "shell" {
    script = "./scripts/setup.sh"
  }
}

Let's unpack this. (Pun intended!)

A previous section of the configuration defines the source image, which I reference in the first line. Packer allows you to specify multiple sources, and each will result in a different image.

I then use the "file" provisioner to copy files under a local "assets" directory into the image "/tmp" directory.

Another provisioner then runs, this time a "shell" provisioner, executing a script from the local filesystem. The interesting part about the shell provisioner is that it doesn't leave the script on the generated image when complete, which means it can contain logic that will not be exposed to consumers of the image later.

What does this script look like?

#!/bin/bash

set -e

function apt-get() {
    while fuser -s /var/lib/dpkg/lock; do
        echo "apt-get is waiting for the lock release..."
        sleep 1
    done

    while fuser -s /var/cache/debconf/config.dat; do
        echo "apt-get is waiting for the lock release..."
        sleep 1
    done

    sudo /usr/bin/apt-get "$@"
}

# Install necessary dependencies
echo 'debconf debconf/frontend select Noninteractive' | sudo debconf-set-selections
apt-get update
apt-get -y -qq install \
    curl \
    wget \
    apt-transport-https \
    ca-certificates

# Set up passwordless sudo for the "zendphp" group, and add a "terraform" user
sudo groupadd -r zendphp
sudo useradd -m -s /bin/bash terraform
sudo usermod -a -G zendphp terraform
sudo cp /etc/sudoers /etc/sudoers.orig
echo "terraform ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/terraform

# Install the SSH key
sudo mkdir -p /home/terraform/.ssh
sudo chmod 700 /home/terraform/.ssh
sudo cp /tmp/terraform-ssh-key.pub /home/terraform/.ssh/authorized_keys
sudo chmod 600 /home/terraform/.ssh/authorized_keys
sudo chown -R terraform /home/terraform/.ssh
sudo usermod --shell /bin/bash terraform

# Install application
echo "Installing the application source"
sudo mkdir -p /var/local/app
cd /var/local/app
sudo tar xzf /tmp/app.tgz --strip-components=1
sudo chown -R www-data:www-data /var/local/app

# and so on...

It's a normal shell script. I install some system packages (with a little "magic" to work around locking issues and the lack of an interactive shell), set up some users and groups, install a default SSH key to allow access to the machine, and install the application source code. This last step is interesting; note the path "/tmp/app.tgz". This is a file installed via the previous provisioner!

By combining provisioners and using well-known tools, you can accomplish just about any provisioning tasks you need in order to set up your virtual machine image.

To build the image:

$ packer init {path to directory with packer configuration}

$ packer build {path to directory with packer configuration}

If you have defined variables that you want to use to customize the images, you pass them in via one or more --var options (each accepts a single KEY=VALUE pair) when calling packer build:

$ packer build --var PHP_VERSION=7.4 images

Now that we have an image defined, how do we deploy it?


Terraform

While Packer describes images, Terraform describes systems.

As such, it is the tool used to deploy an entire software system. For a PHP application, that means the PHP nodes themselves, web servers (if you are using php-fpm), load balancers, databases, caching servers, and any other services required to run the application. A deployment can be as simple as a single node running Apache with mod_php, or as complex as an auto-scaling web farm spanning multiple availability zones across multiple clouds.

Like Packer, Terraform uses HCL to describe systems. Configuration files have a ".tf" extension, and the graph described by all of the files in a tree is what defines the system. Within your configuration you will define:

  • Providers: these supply definitions of the various resources you can build on. As an example, the AWS provider for Terraform defines resources for networking, cloud images, load balancers, auto-scaling groups, and much, much more. Your own configuration will then create more specific versions of these.
  • Variables: these are values that can be referenced zero or more times in your configuration, ensuring that multiple resources requiring the same values do not hard-code them. Better still: you can specify variables when you execute your Terraform templates in order to customize the deployment.
  • Data: similar to variables, data can be referenced and consumed elsewhere in your configuration. Unlike variables, they are static once defined.
  • Resources: these are the actual artifacts of deployment, and can describe networks, storage, machine instances, cloud services (e.g., AWS ElastiCache, RDS, or CloudSearch services), and more.
  • Outputs: information about what has been deployed. When running Terraform, this information will be emitted on completion, but can also be retrieved later. Use cases include providing the DNS name or IP address of a deployed instance, or the total number of instances initially spun up for an auto-scaling group.

Let's create a quick example of spinning up a vanilla ZendPHP instance on AWS via Terraform:

provider "aws" {
  profile = "default"
  region  = var.aws_region

  default_tags {
    tags = {
      Application = var.app_purpose
    }
  }
}

variable "app_purpose" {
  description = "Purpose of app instance"
  type        = string
  default     = "zendphp-demo"
}

variable "aws_region" {
  description = "AWS region in which to launch"
  type        = string
  default     = "us-east-2"
}

data "aws_ami" "zendphp" {
  most_recent = true
  # Zend by Perforce on AWS Marketplace
  owners      = ["679593333241"]

  filter {
    name   = "description"
    values = ["*ZendPHP*Ubuntu 20.04*Apache*BYOL*"]
  }

  filter {
    name   = "root-device-type"
    values = ["ebs"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

resource "aws_instance" "app_server" {
  ami           = data.aws_ami.zendphp.id
  instance_type = "t2.micro"

  tags = {
    Name = "ZendPHP Instance"
  }
}

output "public_dns_name" {
  description = "Public DNS name of deployed instance"
  value       = aws_instance.app_server.public_dns
}

The above:

  • Sets up the AWS provider.
  • Defines variables for the application "purpose" (used in image tags) and AWS deployment region.
  • Defines a data source that will retrieve an AWS machine image (AMI) ID based on filters provided (in this case, an Apache-based ZendPHP image).
  • Defines an EC2 instance based on the AMI returned by the data source.
  • Provides output detailing the public DNS name of the deployed image.

We could, of course, have referenced our own images built via Packer in the "aws_ami" data source, and that's the real plan for production: create your deployment-ready images via Packer on any and all providers you use.

From here, you initialize the deployment:

$ terraform init

Validate it:

$ terraform validate

And then apply it, which performs the actual deployment:

$ terraform apply

This last command tells you what Terraform plans to do, and asks you to confirm it. Once you have, it starts doing the work necessary to deploy your infrastructure. This can take anywhere from a few seconds to a number of minutes or even hours, depending on the infrastructure you are deploying.

When you have changes, you run terraform apply again, and it will tell you what changes it will make; Terraform tries to only modify what is necessary to reach the desired state, which means that if you segregate your resources well, resources to which you make no changes will continue to run merrily along while everything around them changes.
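
If you want to see the pending changes without being prompted to apply them, Terraform provides a separate command for that:

$ terraform plan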

When you apply your configuration, you can also provide variables, via the -var option. This option can be repeated, once for every variable you want to supply, and expects a key/value pair separated by =: -var aws_region=us-west-1. By supplying variables, you can re-use the same templates and deploy multiple times for multiple contexts!
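
For instance, using the variables defined in the earlier example (the values here are illustrative):

$ terraform apply -var aws_region=us-west-1 -var app_purpose=zendphp-staging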

There's far more to Terraform than what we've demonstrated in this post, but we hope to have sparked some ideas!


ZendPHP and Orchestration

What does all of this have to do with ZendPHP?

In the section on Packer, I noted that when you build images, you specify a source image. These can be local virtual machine images or, depending on the plugins you have installed, cloud machine images, such as those on AWS, Azure, GCP, or others. When using a cloud platform-specific plugin, the builder will use that provider's API to spin up a machine instance from the source image, and then use the provisioners available to provision it. When done, it will save the resulting image in your account on the provider, so that you can use it later.

We currently have ZendPHP images on AWS, with images coming soon to Azure and Google Cloud Platform. As such, you can use Packer to provision virtual machines based on our ZendPHP images!

As an example for AWS, you would specify the following as part of your Packer definition, similar to how we located a vanilla ZendPHP image on AWS with Terraform earlier:

packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

variable "region" {
  description = "AWS region in which to build and deploy"
  type        = string
  default     = "us-east-2"
}

variable "instance_type" {
  description = "EC2 instance type to use when building (defaults to m4.medium)"
  type        = string
  default     = "m4.medium"
}

locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "zendphp" {
  ami_name      = "session-cluster-demo-${local.timestamp}"
  instance_type = var.instance_type
  region        = var.region

  source_ami_filter {
    filters = {
      # Find a ZendPHP image, using *ZendPHP*{DISTRO} {VERSION}*{Apache|Nginx}*(BYOL*)?
      # Examples:
      #   - *ZendPHP*Debian 10*Apache*
      #   - *ZendPHP*Centos 8*Nginx*
      #   - *ZendPHP*Ubuntu 20.04*Apache*BYOL
      description         = "*ZendPHP*Ubuntu 20.04*Nginx*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    # Zend by Perforce on AWS Marketplace
    owners      = ["679593333241"]
  }

  ssh_username = "ubuntu"

  tags = {
    Extra = "zendphp/7.4:ubuntu-20.04"
  }
}

build {
  sources = ["source.amazon-ebs.zendphp"]

  # ...
}

The above locates the most recent ZendPHP image built on Ubuntu 20.04 and using nginx as the web server.

From there, you would create a set of Terraform templates (configuration) that locate and consume this image for your deployment.
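
As a minimal sketch, assuming the AMI name prefix from the Packer template above and that the resulting image lives in your own AWS account, the Terraform side might locate and launch the image like this:

data "aws_ami" "session_cluster" {
  most_recent = true
  # "self" restricts the search to images built in your own account
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["session-cluster-demo-*"]
  }
}

resource "aws_instance" "app_server" {
  ami           = data.aws_ami.session_cluster.id
  instance_type = "t2.micro"
}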

To simplify all of this for you, we have Terraform templates that demonstrate a session cluster of ZendPHP nodes using Redis for you to examine and adapt to your needs. You can download these templates here. Currently, these templates only address AWS, but as we roll out Microsoft Azure and Google Cloud Platform support in the coming weeks, we will update them to demonstrate orchestration on those platforms as well.

So, what are you waiting for? Get started orchestrating ZendPHP in the cloud today!
