Following on from my previous post, I had a look at the performance impact of enabling QVD and QVF encryption in Qlik Sense.
In this test, I’m using Qlik Sense Enterprise November 2019 release on an Azure B4ms (4 vCPU, 16GB RAM) VM running Windows Server 2019. Qlik Sense encryption settings were left at default.
I’ll prepare a follow-up post running through the questions and findings; this post summarises the test structure and high-level findings.
The tests & source data
The data I’m loading is one of the freely available Stack Exchange data dumps on archive.org (in this case, the 618MB serverfault 7z archive).
Uncompressed, it’s 3.13GB, or 2.5GB for just the XML files I’m running tests against.
Each of the tests below was run a minimum of three times, on XML-based data sets of three different sizes (PostHistory, Posts and Badges – in order of decreasing size).
The following tests were run:
- Load from XML (no transformation)
- Store loaded XML data into QVD (no transformation)
- Load from QVD using optimised load
- Store loaded QVD data into a second QVD (no transformation)
- Load from QVD using unoptimised load and perform transformations (using a wide range of functions)
- Store transformed QVD data into a third QVD
- Load from QVD using unoptimised load and perform transformation/where reduction (only two functions)
- Store transformed QVD data into a fourth QVD
- Load from QVD using optimised load, then resident to perform matching transformation to #5
- Store transformed QVD data into a fifth QVD
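As a rough sketch of the load patterns behind these tests (table names, paths and the specific transformation functions here are illustrative rather than taken from the actual scripts, and each test was run as a separate execution):

```
// Test 1: load straight from XML (no transformation)
Badges:
LOAD *
FROM [lib://DataFiles/Badges.xml]
(XmlSimple, table is [badges/row]);

// Test 2: store the loaded table into a QVD
STORE Badges INTO [lib://DataFiles/Badges.qvd] (qvd);

// Test 3: optimised QVD load – a plain field list with no
// transformations keeps the load optimised
LOAD *
FROM [lib://DataFiles/Badges.qvd] (qvd);

// Tests 5/7: applying functions during the load breaks optimisation
BadgesTransformed:
LOAD
    Id,
    Upper(Name) as NameUpper
FROM [lib://DataFiles/Badges.qvd] (qvd);

// Test 9: optimised load first, then a resident load to apply
// the same transformations
BadgesRaw:
LOAD * FROM [lib://DataFiles/Badges.qvd] (qvd);

BadgesTransformed:
NoConcatenate
LOAD Id, Upper(Name) as NameUpper
RESIDENT BadgesRaw;
DROP TABLE BadgesRaw;
```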
The QVF file and load scripts to run these tests are available on GitHub.
The results for PostHistory (the largest input file) show that, with the exception of tests 6, 8 and 10 (all store operations on data originally loaded from a QVD), enabling QVD encryption increases load time, and enabling both QVD and QVF encryption increases it further.
No surprises there.
I’ll look into this in more depth in a follow up post.
Observation on QVD file size
There was no noticeable increase in QVD file size after enabling encryption – see the before and after screenshots below.
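For anyone wanting to verify this without comparing screenshots, the QVD size can also be read from the load script itself using FileSize(), which returns the size in bytes (the path here is illustrative):

```
// Illustrative path – run before and after enabling encryption
// and compare the two values
SizeCheck:
LOAD
    FileSize('lib://DataFiles/PostHistory.qvd') as QvdBytes
AutoGenerate 1;
```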
Considerations for next time
- Instead of using a burstable instance (B4ms), I should have used a general-purpose instance such as a DS3 to ensure a consistent baseline level of performance
- The server was likely oversized for the smallest data set (Badges), so those operations completed too quickly for any variation to be meaningful; Posts and PostHistory were better suited
- This time, I used Azure Files for the primary read/write location. While its performance should be broadly consistent over time, testing with a provisioned disk attached to the VM would remove a potential source of variability
- Services were not restarted between every test, only between test modes (i.e. encrypted, unencrypted) – it would be a better control to begin all tests following a restart of at least the engine service
- The initial test created the QVD files, which were then overwritten by all following tests – ideally these would have been deleted between tests (incidentally, no obvious variation appeared between tests 1 and 2)
- There was no system monitoring set up – this would provide insights as to CPU and IO utilisation throughout and would be a useful addition to the time statistics