I’m often asked how to create mashups with Qlik Sense, and I strongly believe that it’s both easy and intuitive to leverage Qlik Sense APIs to build mashups…when you understand the options available to you.
To help new developers, I’ve put together a basic mashup using the Material Design Lite template. This example connects to a provided app and demonstrates several different ways of embedding Qlik Sense into an HTML site using just a little JavaScript.
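For a flavour of what’s involved, this is the basic Capability API pattern that this kind of embedding builds on – a minimal sketch, not code from the example itself; the host, app id, div id, and object id below are all placeholders:

// A minimal Capability API embed - every name and id here is a placeholder
var config = {
    host: "sense.example.com",   // your Qlik Sense server
    prefix: "/",
    port: 443,
    isSecure: true
};

// Load qlik.js from the Sense server's own resources
require.config({
    baseUrl: (config.isSecure ? "https://" : "http://") +
             config.host + ":" + config.port + config.prefix + "resources"
});

require(["js/qlik"], function (qlik) {
    // Open the app, then render one of its visualisations into a div
    var app = qlik.openApp("MyApp.qvf", config); // app id: GUID on a server, file name on Desktop
    app.getObject("qlik-chart", "xGhjKl");       // target div id, Sense object id
});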
I was receiving emails titled “Cron <[email protected]> /usr/bin/kcarectl --auto-update” with a body of “The IP 1.2.3.4 was already used for a trial license on 2018-12-23”.
At one point, I was receiving one of these per VM every four hours.
As I’ve not got a licence anyway, this is easily solved by removing the package, or by disabling the auto-update check (the config path and key below are from KernelCare’s docs as I recall them, so verify them on your distro):
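# Option 1: remove KernelCare entirely
sudo yum remove kernelcare            # Debian/Ubuntu: sudo apt-get remove kernelcare

# Option 2: keep the package, but turn off the auto-update check the cron runs
sudo sed -i 's/^AUTO_UPDATE=.*/AUTO_UPDATE=False/' /etc/sysconfig/kcare/kcare.conf

Either route stops the cron job phoning home for a trial-licence check – and with it, the emails.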
A few weeks ago I used AWS S3 for 301 redirects, with Cloudflare on top to cache these requests, few as they were. This effectively maintains a handful of links which redirect somewhere useful, while directing all other traffic to this domain.
Stats for the site from February show that 65% of requests were cached in the last week.
The results are great! The switch to S3 was made on the 9th, and a Cloudflare page rule added on the 10th – at which point over 60% of requests became cached.
Simple Event Attendance has been updated to 1.5.1.
This is to fix an issue with PHP 7.2, where debug logs contain a count() warning:
Warning: count(): Parameter must be an array or an object that implements Countable in /var/www/html/wp-content/plugins/simple-event-attendance/seatt_events_include.php on line 103
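The fix itself is the standard PHP 7.2 guard: only hand count() something countable. A minimal illustration – the $events variable here is mine, not the plugin’s actual code:

<?php
// Illustrative only - $events stands in for whatever the plugin counts;
// the real variable in seatt_events_include.php differs.
$events = null; // e.g. no rows returned from the query

// On PHP 7.2+, count(null) raises the warning above, so guard the call:
if (is_array($events) && count($events) > 0) {
    echo "Found " . count($events) . " events\n";
} else {
    echo "No events found\n";
}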
If you’re using a CNAME on your root domain, you’re gonna have problems. That’s just a DNS thing – and if you want to host a root domain on S3, AWS won’t provide you with an IP address. You can solve this if you use Route 53, but what if you want to keep your domain in Cloudflare?
You’ll also have problems if you want to use Cloudflare Full SSL on an S3 bucket configured for static website hosting – resulting in nothing but Cloudflare error 522 (Connection timed out) pages.
The easy solution to both problems is to use CloudFront to serve HTTPS requests for your bucket; but I’m going to assume that you want this solution to be as cheap as possible – and to use only S3 from within the AWS ecosystem.
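For reference, the Cloudflare side can be wired up roughly like this – a sketch with placeholder zone, token, bucket, and region values, assuming the bucket is named after the domain (S3 website hosting routes on the Host header). Cloudflare flattens a root CNAME automatically, and SSL has to drop to Flexible because the S3 website endpoint is HTTP-only:

# Root "CNAME" to the bucket's website endpoint - Cloudflare flattens it at the apex.
# The bucket must be named after the domain, as S3 website hosting routes on the Host header.
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"CNAME","name":"example.com","content":"example.com.s3-website-eu-west-1.amazonaws.com","proxied":true,"ttl":1}'

# Flexible SSL: HTTPS from visitor to Cloudflare, plain HTTP from Cloudflare to S3
curl -X PATCH "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/ssl" \
  -H "Authorization: Bearer $CF_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"value":"flexible"}'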
S3 buckets allow you to host static content with a pay-per-get model. No monthly fees and no servers – so I considered how I could use this to redirect a limited number of URLs from an old website to a new site.
It couldn’t be that straightforward, as the URLs aren’t the same (so a CNAME, domain forwarding, or the S3 ‘Redirect requests’ option were all out), but I wanted to preserve the links, and had previously been using a .htaccess file to do this. Enter static hosting, on an empty bucket.
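Here’s a sketch of what that looks like, with placeholder bucket, domain, and path names: each routing rule 301s one old path to its new home, and a catch-all on 404 sends everything else to the new domain:

# website.json - one rule per preserved link, plus a 404 catch-all
cat > website.json <<'EOF'
{
  "IndexDocument": {"Suffix": "index.html"},
  "RoutingRules": [
    {
      "Condition": {"KeyPrefixEquals": "old-page"},
      "Redirect": {
        "HostName": "www.newsite.com",
        "ReplaceKeyWith": "new-page",
        "HttpRedirectCode": "301",
        "Protocol": "https"
      }
    },
    {
      "Condition": {"HttpErrorCodeReturnedEquals": "404"},
      "Redirect": {
        "HostName": "www.newsite.com",
        "HttpRedirectCode": "301",
        "Protocol": "https"
      }
    }
  ]
}
EOF

# Apply it to the (empty) bucket that the old domain points at
aws s3api put-bucket-website --bucket old-domain.com --website-configuration file://website.json

As the bucket is empty, every request misses and falls through the rules – which is exactly what makes the 404 catch-all work.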
I’m setting up cross-region replication (CRR) on two buckets: one new, and one existing – which already contains files.
When you enable cross-region replication on an existing bucket, it doesn’t copy existing files from the source to the target bucket – it only copies those objects created or updated after the replication was enabled. We need to copy the original files manually using the AWS CLI.
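Something like this handles the one-off copy (bucket names and regions are placeholders); CRR then takes care of everything created afterwards:

# One-off copy of the pre-existing objects from source to target
aws s3 sync s3://source-bucket s3://target-bucket \
    --source-region eu-west-1 --region us-east-1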
In the past, I’ve configured these on my domains (and wrote about SPF with G Suite – which was, at the time, Google Apps). In the last 9 years, the rest of the DNS config has changed a lot, and as I’ve never had issues with mail, I never reviewed my settings. Until today.
While checking my config on MX Toolbox for another reason, I spotted that some tuning was required.
The DNS report shows a few MX errors, and more warnings.
It seems that, at some point, the recommended record changed from:
v=spf1 include:aspmx.googlemail.com ~all
To a different domain:
v=spf1 include:_spf.google.com ~all
OK; no problem – that one’s easy to fix. Setting up DKIM was easy as well, using the guidance here, which again highlighted that those records were incorrect too. At some point a cPanel server had managed the DNS config and added its own records!
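A quick dig check confirms what’s actually published – example.com is a placeholder here, and the DKIM selector is only “google” if you kept the default in the Admin console:

dig +short TXT example.com                    # expect: "v=spf1 include:_spf.google.com ~all"
dig +short TXT google._domainkey.example.com  # the DKIM public key, if using the default selector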
Reminder to self – review my MX settings at least every couple of years!
Some of the most glaring omissions from Lightsail are scheduled tasks and triggers – which would provide the ability to automate backups. Competitors in this space, like DigitalOcean, are all set with a backup option; for AWS, I’m assuming they hope you’ll shift over to EC2 as fast as possible to get the extra bells and whistles.
Of course you can manually create snapshots – just log in and hit the button. It’s just the scheduling that’s missing.
I have one Lightsail server that’s been running for 6 months now, and it’s all been rosy. Except for backups: I had been using first AWS CLI automated backups (which weren’t ideal, as they needed a machine to run them), and then some GUI automation via Skeddly. While Skeddly works just fine, I’d rather DIY this problem using Lambda and keep everything in cloud-native functions.
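As a sketch of the Lambda route – a small function, triggered on a schedule by a CloudWatch Events rule, that creates a date-stamped snapshot. The instance name is a placeholder, and the function’s role needs the lightsail:CreateInstanceSnapshot permission:

import datetime
import boto3

INSTANCE_NAME = "my-lightsail-server"  # placeholder - your instance name

def lambda_handler(event, context):
    # Date-stamped names keep snapshots unique and sortable
    snapshot_name = "{}-{}".format(
        INSTANCE_NAME,
        datetime.datetime.utcnow().strftime("%Y-%m-%d-%H%M"),
    )
    boto3.client("lightsail").create_instance_snapshot(
        instanceName=INSTANCE_NAME,
        instanceSnapshotName=snapshot_name,
    )
    return {"snapshot": snapshot_name}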
This post details my switch to using PowerShell and Cloudflare to update a DNS record with a server’s current IP. This effectively emulates DynDNS for this host – except it’s free.
There are a load of other options out there, including some simple-but-quite-clunky apps for domain registrars like Namecheap; but installing third-party software is not the route I want to take.
I previously had my target domain (let’s call it targetdomain.com) hosted on a Linux box, and used SSH from a Windows server to update the DNS settings. This worked well for three years without a blip – but it was clunky. A scheduled task started a bat file, which ran PuTTY to run a shell script…to update a config on a server that existed only to host the domain for this purpose.
Although the scheduler > bat > shell chain has been running well for years, it’s time to simplify!
I’ve been using Cloudflare for years, and set aside time to write a script to use their service for this purpose. As it turns out, people have been doing this for years too – so I’ve taken one off the shelf.
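At its core, every variant of this script does the same two things: look up the current public IP, then push it into the Cloudflare record. A stripped-down sketch – the token, zone, record ID, and hostname are placeholders, and a real script would check for changes and handle errors:

# Placeholders - fill in from the Cloudflare dashboard/API
$Token    = "YOUR_API_TOKEN"
$ZoneId   = "YOUR_ZONE_ID"
$RecordId = "YOUR_RECORD_ID"
$Hostname = "home.targetdomain.com"

# Current public IP, via a third-party echo service
$Ip = Invoke-RestMethod -Uri "https://api.ipify.org"

$Body = @{ type = "A"; name = $Hostname; content = $Ip; ttl = 120 } | ConvertTo-Json

# Overwrite the existing A record with the current IP
Invoke-RestMethod -Uri "https://api.cloudflare.com/client/v4/zones/$ZoneId/dns_records/$RecordId" `
    -Method Put `
    -Headers @{ Authorization = "Bearer $Token" } `
    -ContentType "application/json" `
    -Body $Body | Out-Null

Run from Task Scheduler every few minutes, that one native command replaces the whole scheduler > bat > shell chain.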