A few weeks ago I set up AWS S3 to serve 301 redirects, with Cloudflare on top to cache these requests, few as they were. This maintains a handful of links that redirect somewhere useful, while still catching all other traffic to the domain.
The results are great! The switch to S3 was made on the 9th, and a Cloudflare page rule was added on the 10th – at which point over 60% of requests became cached.
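If you're recreating this, a single page rule along these lines should do the job – Cloudflare doesn't cache these responses out of the box, so the rule forces it to (targetdomain.com is a placeholder):

URL pattern: targetdomain.com/*
Setting: Cache Level – Cache Everything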
If you're using a CNAME on your root domain, you're going to have problems. That's just a DNS thing – the apex of a zone can't legally hold a CNAME – and if you want to host a root domain on S3, AWS won't provide you with an IP address to point an A record at. You can solve this with Route53 and its alias records, but what if you want to keep your domain in Cloudflare?
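One answer: Cloudflare flattens CNAMEs at the zone apex into A records automatically, so the root can "CNAME" to the bucket's website endpoint. As a sketch – assuming the bucket is named after the domain (which S3 static website hosting requires) and lives in us-east-1 – the record looks something like:

Type: CNAME
Name: targetdomain.com
Target: targetdomain.com.s3-website-us-east-1.amazonaws.com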
You'll also have problems if you want to use Cloudflare Full SSL with an S3 bucket configured for static website hosting – you'll get nothing but Cloudflare error 522 (Connection timed out) pages. S3 website endpoints only speak plain HTTP, so Full SSL's origin connection over HTTPS never completes; Flexible SSL is the mode that works against them.
S3 buckets allow you to host static content on a pay-per-GET model. No monthly fees and no servers – so I considered how I could use this to redirect a limited number of URLs from an old website to a new site.
It couldn't be that straightforward, as the URLs aren't the same (so a CNAME, a domain forward, or S3's "Redirect requests" option were all out), but I wanted to preserve the links, and had previously used a .htaccess file to do this. Enter static hosting, on an empty bucket.
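As a sketch of what that looks like (bucket, paths and hostnames below are placeholders): the redirect map lives in the bucket's website configuration as routing rules. S3 insists on an index document even when every request will be redirected:

{
  "IndexDocument": { "Suffix": "index.html" },
  "RoutingRules": [{
    "Condition": { "KeyPrefixEquals": "old-page" },
    "Redirect": {
      "HostName": "newsite.example.com",
      "ReplaceKeyWith": "new-page",
      "Protocol": "https",
      "HttpRedirectCode": "301"
    }
  }]
}

Saved as website.json, that's applied with:

aws s3api put-bucket-website --bucket targetdomain.com --website-configuration file://website.json

One caveat: S3 caps a website configuration at 50 routing rules, which is plenty for a handful of links.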
When you enable cross-region replication on an existing bucket, it doesn't copy the existing files from the source to the target bucket – it only replicates objects created or updated after replication was enabled. The original files need to be copied across manually, using the AWS CLI.
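A one-off sync takes care of the backlog – a minimal sketch, with placeholder bucket names and regions:

# Copy everything already in the source bucket across to the replica
aws s3 sync s3://source-bucket s3://target-bucket --source-region eu-west-1 --region eu-west-2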
In the past, I've configured these on my domains (and wrote about SPF with G Suite, which was at the time Google Apps). In the last nine years the rest of the DNS config has changed a lot, and as I've never had issues with mail, I never reviewed my settings. Until today.
For unrelated reasons, I checked my config on MX Toolbox – and spotted that some tuning was required.
It seems that at some point the recommended record changed from:
v=spf1 include:aspmx.googlemail.com ~all
To a different domain:
v=spf1 include:_spf.google.com ~all
OK; no problem – that one's easy to fix. Setting up DKIM was easy as well, using the guidance here, and it highlighted that those records were incorrect too. At some point a cPanel server had managed the DNS config and added its own records!
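If you're doing the same review, it's quick to confirm what's actually being served before and after the change – targetdomain.com is a placeholder, and "google" is the default selector for Google-managed DKIM keys:

# Check the live SPF record
nslookup -type=TXT targetdomain.com
# Check the DKIM record at its selector
nslookup -type=TXT google._domainkey.targetdomain.com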
Reminder to self – review my MX settings at least every couple of years!
Among the most glaring omissions from Lightsail are scheduled tasks and triggers – which would provide the ability to automate backups. Competitors in this space like DigitalOcean are all set, offering a backup option out of the box; for AWS, I assume the hope is that you'll shift over to EC2 as fast as possible to get the extra bells and whistles.
I have one Lightsail server that's been running for six months now, and it's all been rosy – except for backups. I'd been using first some AWS CLI automated backups (not ideal, as they needed a machine to run them), and then some GUI automation via Skeddly. While Skeddly works just fine, I'd rather DIY this problem with Lambda and keep everything in cloud-native functions.
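The function itself barely needs any code. A minimal sketch in PowerShell – assuming the AWS.Tools.Lightsail module is packaged alongside it (deployable via the AWSLambdaPSCore tooling), and with a placeholder instance name:

# 'my-lightsail-box' is a placeholder – the instance to snapshot
$instance = 'my-lightsail-box'
$snapshot = "$instance-$(Get-Date -Format 'yyyy-MM-dd-HHmm')"

# Kick off the snapshot; pruning old ones would be a similar pairing of
# Get-LSInstanceSnapshotList and Remove-LSInstanceSnapshot
New-LSInstanceSnapshot -InstanceName $instance -InstanceSnapshotName $snapshot

A CloudWatch Events schedule then triggers the function however often you want your backups.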
This post details my switch to using PowerShell and Cloudflare to update a DNS record with a server's current IP. This effectively emulates dyndns for the host – except it's free.
There are a load of other options out there, including some simple-but-quite-clunky apps for domain registrars like Namecheap; but installing third-party software is not the route I want to take.
I previously had my target domain (let's call it targetdomain.com) hosted on a Linux box, and updated its DNS settings over SSH from a Windows server. This worked without a blip for three years – but it was clunky: a scheduled task started a .bat file, which launched PuTTY to run a shell script…all to update a config on a server whose only remaining job was hosting the domain for this purpose.
I've been using Cloudflare for years, and set aside time to write a script that uses their API for this purpose. As it turns out, people have been doing this for years – so I've taken one off the shelf.
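Condensed right down, the approach looks like this – a sketch rather than the full script, with the token, zone ID and record ID as placeholders to fill in from the Cloudflare dashboard:

# Placeholders – fill these in from Cloudflare
$token   = 'YOUR_API_TOKEN'
$zone    = 'YOUR_ZONE_ID'
$record  = 'YOUR_RECORD_ID'
$headers = @{ Authorization = "Bearer $token"; 'Content-Type' = 'application/json' }

# Look up the current public IP (ipify returns it as plain text)
$ip = Invoke-RestMethod -Uri 'https://api.ipify.org'

# Overwrite the existing A record with whatever the IP is now
$body = @{ type = 'A'; name = 'targetdomain.com'; content = "$ip"; ttl = 120 } | ConvertTo-Json
$uri  = "https://api.cloudflare.com/client/v4/zones/$zone/dns_records/$record"
Invoke-RestMethod -Method Put -Uri $uri -Headers $headers -Body $body

Run that on a schedule every few minutes and you've got dyndns.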
Amazon Web Services (AWS) offers some very affordable archive storage via its S3 Glacier service. I've used this on a backup account in the past to store archives, and have decided it's time to clear down this account (oh, and save $0.32 a month in doing so).
The main challenge with doing this is that, unlike S3, S3 Glacier objects (those stored directly in Glacier vaults, rather than via the Glacier storage tier within S3) can't be deleted from the console – you need the API, which in practice means the AWS CLI. And to delete a Glacier vault, you've got to delete all of its objects first.
In this post I’ll spin up a Lightsail box and wipe out the pesky Glacier objects through the AWS CLI. This doesn’t require any changes on your local PC, but will require some patience.
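The flow, sketched with a placeholder vault name (each Glacier job takes around four hours to complete, hence the patience):

# Ask Glacier for an inventory of the vault ('-' means the current account)
aws glacier initiate-job --account-id - --vault-name myvault --job-parameters '{"Type": "inventory-retrieval"}'
# Hours later, fetch the inventory using the job ID returned above
aws glacier get-job-output --account-id - --vault-name myvault --job-id JOBID inventory.json
# Delete each ArchiveId listed in inventory.json
aws glacier delete-archive --account-id - --vault-name myvault --archive-id ARCHIVEID
# Once the vault is empty (and Glacier has re-inventoried it), remove the vault
aws glacier delete-vault --account-id - --vault-name myvault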