INTRODUCTION
One of my clients runs WordPress for their website, and they were willing to utilise Cloud infrastructure to host it (some clients refuse to go to the Cloud due to potential privacy breaches). When we talk about hosting in the Cloud, at the end of the day we’re still hosting a website on “someone else’s computer”, so to speak. The difference is that Cloud companies like Amazon, Microsoft or Google have the budget to provision highly-scalable and reliable data centres. A shared hosting service, especially a small one, often exists in the form of a reseller account or an individually-provisioned server, meaning that when the server is gone, most likely all the data and websites hosted on that server are gone too (unless, of course, the hosting company decides to provision redundant servers, etc., which costs a lot of money).
To this day, shared hosting is still popular. And I must admit, the cost is also a lot lower compared to going with Amazon AWS, Microsoft Azure or Google Cloud. The difference is that with these giant Cloud companies you can easily configure an auto-scaling architecture, which goes a long way towards 100% uptime for your client’s website. With a shared hosting service, you’re just provisioned with control panel access like Plesk, WHM or cPanel, which won’t support scaling up.
Long story short, I suggested that the client go with AWS and they agreed. In this article I would like to share the architecture I provisioned in AWS to allow auto-scaling, which promotes 100% uptime for my client’s website. By 100% I mean that unless AWS itself goes down, or the website gets hacked, the site will be up and running at all times regardless of load.
Please note that there are “a thousand ways to go to Rome”, meaning, there are many ways to achieve the same thing. And this article just happens to demonstrate my way of doing it.
COST
Cost is always a factor, especially with smaller clients like mine. Therefore, I try to be mindful of the number of servers and services created as part of the architecture (knowing that Cloud companies charge by the number of “online” hours the provisioned resources consume).
THE JARGON
EC2 = a virtual machine (VM) where your website is hosted. It runs Apache, etc.
RDS = your database server.
Elastic Beanstalk = a layer above EC2 which, when configured, will auto-create and destroy EC2 instances.
CloudFront CDN = a geo-redundant resource distribution mechanism. It allows your users to load resources, e.g. images and files, from the AWS server(s) closest to them.
S3 buckets = where your images and files are actually stored.
ARCHITECTURE
The architecture of choice is as follows:
- Elastic Beanstalk
- Elastic Beanstalk gives you an auto-scaling architecture for your EC2 instances. It also deploys the Load Balancer for you, so you don’t need to configure anything further. All you need to set is how many instances you want to auto-scale to and the auto-scaling metric. That’s it. EB will take care of the rest.
- Elastic File System
- Elastic File System acts as an “attached USB drive”: it attaches to your EC2 instances and is useful as your “wp-content/uploads” folder. When AWS auto-scales, it creates new EC2 instances and may destroy old ones. Therefore, if you upload images and files only locally to a particular EC2 instance, you will lose them when it gets destroyed. AWS does not clone the latest state of the currently-running EC2 instance; rather, it creates a new one based on the “last working state”.
For example, at 1PM you deploy Elastic Beanstalk application version 1.0. EB then creates EC2 instance(s) based on v1.0. Along the way, you upload some images and files to the currently-running EC2 instance. Suddenly there is a surge in user load and EB decides to spawn a new instance. This new instance is deployed from the v1.0 template, i.e. without the uploaded images and files. So if the Load Balancer directs traffic to the new instance, your users will see broken images and files. This is why it’s very important to use EFS.
- CloudFront CDN running with S3 buckets
- WordPress supports uploading media (images and files) directly to S3 buckets. You may now ask why in the world we would still need EFS, then? The reason is that some WordPress modules do not support uploading items to S3 buckets; they only upload files to the local file system. For those modules, which my client is using, we still need EFS.
- Amazon Aurora RDS cluster
- Finally, for the MySQL database, we are using Amazon Aurora with clustering enabled. A cluster is to a database what a load balancer is to a website: when the primary instance goes down, a replica is automatically promoted to take over.
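To give a sense of how little configuration EB actually needs for this, here is a minimal sketch of an .ebextensions option_settings file capping the auto-scaling group size and scaling on CPU. The sizes and thresholds are illustrative assumptions, not my client’s actual values:

```
option_settings:
  aws:autoscaling:asg:
    MinSize: 1                # illustrative: keep at least one instance running
    MaxSize: 4                # illustrative: never scale beyond four instances
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization
    Statistic: Average
    Unit: Percent
    UpperThreshold: 70        # spawn a new instance above 70% average CPU
    LowerThreshold: 20        # retire an instance below 20% average CPU
```

You can also set these same values through the EB console wizard; the file form just keeps the scaling policy versioned alongside the application code.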
As you can see above, this architecture is highly scalable and highly available because we’ve covered every potential avenue where it could go down. For example, if an EC2 instance runs out of RAM, a new one is spawned automatically. If the database crashes under a huge load, the replica takes over automatically.
It still does not remove the risk of a bad code implementation that could be trojan-horsed or hit by an XSS attack, but at least we’ve done what we can from the infrastructure side.
CONFIGURATION
Below are some of the brief configuration settings I used for my AWS architecture. Please note that detailed configuration is out of the scope of this article.
Elastic Beanstalk
Configuring EB is relatively easy. From the AWS Console, click on “Services” > Elastic Beanstalk. From there, click “Create New Application” and follow the wizard. One important thing: you need to ZIP your whole working WordPress folder and upload it through the wizard. This allows EB to deploy the content of the ZIP file to the instances it creates.
I also included an .ebextensions folder. In it you can have custom config files that allow you to override php.ini settings, such as the maximum upload file size. An example of the content of a custom config file is as follows:
files:
  "/etc/php-7.0.d/php.ini":
    mode: "000777"
    owner: root
    group: root
    content: |
      upload_max_filesize = 64M
      post_max_size = 64M
      memory_limit = 1280M
      max_execution_time = 12000
      auto_prepend_file = "/var/app/current/wordfence-waf.php"
Elastic File System
You want EFS to attach automatically to your auto-scaled EC2 instances. To do so, you need to add config files to the .ebextensions folder as well. Please follow the instructions below:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-hawordpress-tutorial.html
This will allow you to override WP’s “wp-content/uploads” folder to use EFS instead.
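As a rough illustration of the final step, once the linked instructions have mounted EFS on each instance, a container_commands entry can swap the local uploads folder for a symlink onto the mount. This is a sketch only — the mount point “/mnt/efs” and the folder name are assumptions; use whatever mount point the tutorial’s config files actually create:

```
container_commands:
  01_make_uploads_dir:
    # Assumes EFS is already mounted at /mnt/efs (hypothetical mount point).
    command: "mkdir -p /mnt/efs/uploads"
  02_replace_local_uploads:
    # Replace the bundled uploads folder with a symlink onto EFS.
    command: "rm -rf wp-content/uploads && ln -snf /mnt/efs/uploads wp-content/uploads"
```

Because every instance symlinks to the same EFS folder, an image uploaded through one instance is immediately visible to all the others.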
CloudFront CDN and S3
To use CloudFront and S3 to store your media files, you first need to provision them. Go to “Services” > S3 and create your bucket. Once done, go to “Services” > CloudFront. You can then create your distribution and point it to your S3 bucket.
Once the distribution is set up, the next thing is to tell WordPress to upload files there. I’m using this plugin:
https://wordpress.org/plugins/amazon-s3-and-cloudfront/
The plugin pretty much overrides WP’s upload capability to go to S3 instead.
NOTE: Not all modules upload files using the WP pipeline. Some modules simply upload to the local “wp-content/uploads” folder.
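The plugin can also be configured from code rather than its settings screen: it documents an AS3CF_SETTINGS constant you can define in wp-config.php. A minimal sketch — the key names below are illustrative, and the credentials and bucket name are placeholders, so check the plugin’s own documentation for the current set:

```
<?php
// Sketch: pre-seed the plugin's settings from wp-config.php.
// All values below are placeholders, not real credentials.
define( 'AS3CF_SETTINGS', serialize( array(
    'provider'          => 'aws',
    'access-key-id'     => 'YOUR-ACCESS-KEY',     // placeholder credential
    'secret-access-key' => 'YOUR-SECRET-KEY',     // placeholder credential
    'bucket'            => 'my-wordpress-media',  // hypothetical bucket name
) ) );
```

Keeping these in wp-config.php means the setting survives re-deploys of the EB application bundle.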
Amazon RDS
Go to “Services” > RDS. I simply launched an “Aurora DB Instance”, which is MySQL-compatible and thus works with WordPress. Please ensure “Create Replica in Different Zone” is ticked. This allows your database to be clustered, achieving high availability.
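One detail worth spelling out: point WordPress at the cluster’s writer endpoint, not at an individual instance, so a failover to the replica is transparent to the site. A minimal wp-config.php sketch — the endpoint, user and password below are made-up examples; use your own cluster’s writer endpoint from the RDS console:

```
<?php
// Sketch: wp-config.php database settings for an Aurora cluster.
// Every value here is a placeholder example.
define( 'DB_NAME', 'wordpress' );
define( 'DB_USER', 'wp_user' );        // placeholder user
define( 'DB_PASSWORD', 'change-me' );  // placeholder password
// Cluster (writer) endpoint, NOT an instance endpoint:
define( 'DB_HOST', 'my-cluster.cluster-abc123.ap-southeast-2.rds.amazonaws.com' );
```

If you point DB_HOST at an instance endpoint instead, WordPress will go down with that instance even though the cluster has failed over.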
CONCLUSION
So far I can see AWS auto-scaling the EC2 instances, and we have not had any downtime whatsoever, which I’m very pleased with. After all, it’s 2018 and we can certainly utilise these Cloud services to achieve 100% uptime for our own or our clients’ websites.
Even though this article is AWS-specific, the good news is that other Cloud providers such as Azure and Google have similar concepts; their products are just named differently.
Hope this helps,
Tommy