Here’s something I’ve only just now learned: AWS S3 is just about the best thing imaginable for hosting static websites.
In case you’re not quite sure what I’m talking about here, AWS stands for Amazon Web Services, and S3 for Simple Storage Service. In short: it’s Amazon’s cloud storage service.
I’ve known for a while that it is possible to host static HTML content on S3 directly, which is pretty great because you get to leverage the performance, redundancy and scalability of AWS without so much as spinning up a cloud server.
Up until now, however, I was under the impression that S3 was quite limited as a platform: that you could only host content on subdomains, and effectively had to spin up a web server somewhere anyway to handle root-domain redirects and other simple but essential requirements.
This is why, a year ago, I opted to host this blog on a Linode VPS from the get-go. Little did I know that S3 has supported root domains (provided your DNS zone is also hosted on AWS) and all manner of redirection magic since late 2012.
So, a couple of weeks ago, while migrating this blog from the old yakwaxing.com domain to nnevala.net, I also went ahead and decommissioned my old Linode instance and started hosting this site on barebones S3.
In practice this is as simple as signing up for AWS, opening the S3 management console and creating a “bucket” for your content. Buckets work much like any typical filesystem, and content can be managed either from the management console or through other utilities. I’m a command line kind of guy and prefer s3cmd.
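To illustrate, a minimal s3cmd workflow might look like the following. This is a sketch, not my exact setup: the bucket name and local path are placeholders, and it assumes you have already run `s3cmd --configure` with your AWS credentials.

```shell
# Create a bucket for the site (placeholder name)
s3cmd mb s3://www.example.com

# Enable static website hosting on the bucket
s3cmd ws-create --ws-index=index.html --ws-error=404.html s3://www.example.com

# Upload the generated site, making every object publicly readable
s3cmd sync --acl-public ./public/ s3://www.example.com/
```

Re-running the `sync` command after each edit uploads only the files that changed, which makes deployments quick.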
To make sure any links or bookmarks still pointing to yakwaxing.com were handled appropriately, I also pointed the old domain to an S3 bucket and configured S3 to handle the redirects.
I wound up with the following bucket setup:
www.nnevala.net, for hosting this site,
nnevala.net, to redirect the root domain to this subdomain, and
www.yakwaxing.com, to redirect any requests from the old domain.
(Actually the bucket configuration I have is a bit more elaborate than this to cover all the edge cases, but you should get the idea.)
There are a couple of different types of redirection you can configure on the buckets. The simplest option is to redirect all requests to a different hostname or bucket. Alternatively, you can specify routing rules through an AWS-specific XML syntax to achieve more advanced configurations.
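The first, bucket-wide variant is a one-liner in the bucket’s website configuration. A sketch of what that looks like (the hostname here is a placeholder, not one of my buckets):

```xml
<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <RedirectAllRequestsTo>
    <HostName>www.example.com</HostName>
  </RedirectAllRequestsTo>
</WebsiteConfiguration>
```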
For example, the configuration I applied to the www.yakwaxing.com bucket looks like this:
<RoutingRules>
  <RoutingRule>
    <Redirect>
      <HostName>www.nnevala.net</HostName>
    </Redirect>
  </RoutingRule>
</RoutingRules>
The syntax is simple and readable, and makes sense even if you have never seen it before. It is also quite comprehensive.
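To give a flavour of the more advanced rules, here is a sketch (the key prefixes are made up for illustration) that rewrites requests under an old blog/ prefix to a new posts/ location, using a condition on the request key:

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>blog/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyPrefixWith>posts/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```

Conditions can also match on the HTTP error code returned, which lets you route missing pages somewhere sensible instead of serving a bare error.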
The downside of this migration is that I’ve now lost my go-to platform for hosting simple, but useful applications. For example: I used to have an instance of Piwik, an open-source web analytics suite, running on the same server as the blog itself.
On the other hand, because S3 is very reasonably priced, I’m now saving what is effectively the entire monthly cost of a VPS instance. In addition, the maintenance overhead (which is paid not in hours of maintaining, but in hours of worrying about maintaining) of my blog is now zero.
I wholeheartedly recommend S3 as the hosting platform for your static website. I’d even consider it if your website is not static, but can be rendered statically.
Bear in mind that you also have to move your DNS zone to AWS, but I would recommend you do that anyway: AWS’s Route 53 is second to none.