QlikView to Qlik Sense Converter

A great tool for the Qlik community.

The QV Converter tool is available from the Dev Hub in QS 3.2+. It’s a pretty slick tool that converts QV variables, dimensions, and expressions to master items, and converts QV charts to QS master visualizations. It does not attempt to replicate the sheets, but expects you to place the visualizations on sheets yourself as required.

It’s a very useful tool with a good UI that allows for filtering and limiting what gets converted.

Reference: QlikView to Qlik Sense Converter

An Incremental Load with Data Archive

I would like to share an incremental load script for QlikView and Qlik Sense, which I wrote in 2014. Unlike other solutions you can find today, my solution does not require the definition of a primary key; it also provides data archive capability, so you can store the archive in a separate location for security. Hope you like it.

The main idea of the solution is to split your source data into separate data partitions so each data partition is small, easy to load, and safe to back up. You can define your partition rule by dates, by part types, or by any fields you have in your data. The assumption of the solution is that there will be no updates to older partitions; new updates exist only in recent partitions. My code is written for data partitions based on dates, which is the most common scenario in staging, but you can modify it for other data partition rules not using dates.
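Outside of Qlik, the date-based partition rule can be sketched in a few lines of Python (a hypothetical illustration, not part of the QlikView script): every record date maps to a quarterly partition key, and records sharing a key land in the same archive file.

```python
from datetime import date

def partition_key(d):
    """Map a record date to the first day of its quarter, formatted
    YYYYMMDD; records sharing a key belong to the same partition."""
    quarter_start_month = 3 * ((d.month - 1) // 3) + 1
    return date(d.year, quarter_start_month, 1).strftime("%Y%m%d")

# Dates from the same quarter share one partition key:
print(partition_key(date(2014, 2, 17)))  # 20140101
print(partition_key(date(2014, 3, 31)))  # 20140101
print(partition_key(date(2014, 11, 3)))  # 20141001
```

The key doubles as the file-name suffix of the archive, so a partition can be located, refreshed, or backed up independently of the others.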

In your environment, you would use this script to load data from your source application, store it into archive QVDs based on your data partition rule, and update your staging QVD from the archive QVDs. The resulting staging QVD can then be loaded into your QlikView / Qlik Sense application, or into your transformation script.

The main program requires the definition of the following variables:

  • vArchiveQVDPath: Storage path to your archive QVDs.
  • vTableQVDPath: Storage path to your staging QVD.
  • vQVDName: File name prefix of your archive and staging QVDs.
  • vOldPartitionRefreshThresholdInDays: The number of days before today to define which archive QVDs require refresh.

The following code provides an example for these variables.

SET vArchiveQVDPath = '..\Data\Archive QVD\';
SET vTableQVDPath = '..\Data\Staging QVD\';
SET vQVDName = 'Test';
SET vOldPartitionRefreshThresholdInDays = 30;

In the example, all of our archive QVD file names will start with “Test_”, and the staging QVD will be named “Test.qvd”. You would refresh archive QVDs if their time stamp is within 30 days of the day you run your script.

Next, the script requires the following two sub-routines:

  • BuildStagingQVD: Build staging QVD by combining all archive QVDs.
  • BuildPartitionQVD: Build an archive QVD based on your definition of a data partition.

The following sections describe these sub-routines in detail.

BuildStagingQVD


SUB BuildStagingQVD (vQVDName, vNewQVDName)
  TRACE [Fctl Call] BuildStagingQVD ('$(vQVDName)', '$(vNewQVDName)');
  // Start from an empty table so every archive QVD can be concatenated onto it.
  QVD_File:
  NoConcatenate
  Load '' As _tmp AutoGenerate 0;
  Let vQVDFileMask = '$(vArchiveQVDPath)' & '$(vQVDName)' & '_*.qvd';
  For each vFile in FileList('$(vQVDFileMask)')
    Concatenate (QVD_File)
    LOAD * From [$(vFile)] (qvd);
  Next vFile
  Drop Field _tmp From QVD_File;
  // Default the staging QVD name to the archive prefix when not provided.
  If IsNull('$(vNewQVDName)') or '$(vNewQVDName)' = '' then
    Let vNewQVDName = vQVDName;
  ENDIF
  STORE QVD_File into [$(vTableQVDPath)$(vNewQVDName).qvd] (qvd);
  Drop Table QVD_File;
ENDSUB

The BuildStagingQVD sub-routine builds the staging QVD by combining all archive QVDs. It takes the following two parameters:

  • vQVDName: File name prefix of your (source) archive QVDs.
  • vNewQVDName: File name of your (target) staging QVD. This parameter is optional. If not provided, it is set to be the same as the value of vQVDName.

It starts by reading the archive QVDs with the name prefix given by vQVDName at the location given by vArchiveQVDPath. It reads the archive QVDs one at a time and concatenates them into a temporary table called QVD_File. Once it has read all archive QVDs with the specified prefix, it writes the resulting temporary table into the staging QVD file named by vNewQVDName at the staging location given by vTableQVDPath.
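For readers outside Qlik, the same combine step can be sketched in Python, with plain text files standing in for QVDs (the paths and names here are illustrative only):

```python
import glob
import os

def build_staging(archive_path, qvd_name, staging_path, new_name=None):
    """Concatenate every archive file matching '<qvd_name>_*' into one
    staging file, mirroring BuildStagingQVD; returns the output path."""
    new_name = new_name or qvd_name  # default: same prefix as the archives
    out_file = os.path.join(staging_path, new_name + ".txt")
    with open(out_file, "w") as out:
        # Sort so partitions are appended in date order of their suffixes.
        for part in sorted(glob.glob(os.path.join(archive_path, qvd_name + "_*.txt"))):
            with open(part) as f:
                out.write(f.read())
    return out_file
```

As in the QlikView version, the staging file is rebuilt from scratch on every run, so it always reflects the current set of archive partitions.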

BuildPartitionQVD


SUB BuildPartitionQVD (vQVDFullName, vLowerBound, vUpperBound)
  // Note: vQVDFullName needs to be a complete file name with file extension, excluding file path.
  TRACE [Fctl Call] BuildPartitionQVD ('$(vQVDFullName)', '$(vLowerBound)', '$(vUpperBound)');
  Let vPartitionQVDFile = '$(vArchiveQVDPath)' & '$(vQVDFullName)';
  // Replace this placeholder load with the SQL SELECT for your source application,
  // bounded by vLowerBound (inclusive) and vUpperBound (exclusive), for example:
  // WHERE "Import Date">=CAST('$(vLowerBound)' AS DATE) AND "Import Date"<CAST('$(vUpperBound)' AS DATE); (SQL Server)
  // WHERE TSH.TS_DT>=TO_DATE('$(vLowerBound)','YYYYMMDD') AND TSH.TS_DT<TO_DATE('$(vUpperBound)','YYYYMMDD'); (Oracle)
  Partition_Table:
  NoConcatenate
  LOAD * INLINE [
  Comment
  This is $(vQVDFullName).
  ];
  // Only store the partition when the source returned records.
  Rec_Count:
  NoConcatenate
  LOAD Count(1) As PTAB_CNT Resident Partition_Table;
  If Peek('PTAB_CNT') > 0 then
    Store Partition_Table into [$(vPartitionQVDFile)] (qvd);
  ELSE
    TRACE [Info] No records are found. Skip partition build.;
  ENDIF
  DROP Tables Partition_Table, Rec_Count;
ENDSUB

The BuildPartitionQVD sub-routine builds an archive QVD based on your definition of a data partition (by dates in the code here). The sub-routine has three input parameters:

  • vQVDFullName: Full name of an archive QVD to write to. This will be provided by our main script.
  • vLowerBound: Lower bound used in the SQL command to get data from your source application.
  • vUpperBound: Upper bound used in the SQL command to get data from your source application.

It starts by reading data from your source application, limited by the vLowerBound and vUpperBound values provided by the main script. This is the part where you need to replace the following code with a SQL command specific to your source application database.

LOAD * INLINE [
Comment
This is $(vQVDFullName).
];
// SQL Server example:
// WHERE "Import Date">=CAST('$(vLowerBound)' AS DATE) AND "Import Date"<CAST('$(vUpperBound)' AS DATE);
// Oracle example:
// WHERE TSH.TS_DT>=TO_DATE('$(vLowerBound)','YYYYMMDD') AND TSH.TS_DT<TO_DATE('$(vUpperBound)','YYYYMMDD');

It then stores the data in a temporary table, Partition_Table. If any records were returned, it writes the temporary table into the archive QVD file named by vQVDFullName at the location vArchiveQVDPath.

Note that I purposely refresh the whole archive partition with data from the source application for performance reasons. If we keep an archive QVD small, it is faster to refresh the whole file (or data partition) than to update it or append data to it. The objective is to make the process simple and fast.

Now let’s look at the main script below to see how the whole piece works together.

Main Script

Our main script starts with the creation of a data partition reference table: Partition_List.


Partition_List:
LOAD PartitionNumber, Num(PartitionStart) As PartitionStart, Num(PartitionEnd) As PartitionEnd;
// SQL Server, first partition (one quarter starting at the earliest "Import Date")
SQL SELECT 1 As PartitionNumber,
  DATEADD(q, DATEDIFF(q, 0, Min("Import Date")), 0) As PartitionStart,
  DATEADD(q, DATEDIFF(q, 0, Min("Import Date"))+1, 0) As PartitionEnd
FROM RecDisc.dbo.SP_TABLE_ARC;
// Oracle, first partition
// SQL SELECT 1 As "PartitionNumber",
//   TRUNC(Min(ACCRUAL_PROC_DT),'YEAR') As "PartitionStart",
//   ADD_MONTHS(TRUNC(Min(ACCRUAL_PROC_DT),'YEAR'),12) As "PartitionEnd"
// FROM SYSADM.PS_LEAVE_ACCRUAL;
// Append one quarter per record until the list reaches the current quarter.
Do While Peek('PartitionEnd') < QuarterEnd(Today())
  Concatenate(Partition_List)
  LOAD Peek('PartitionNumber')+1 As PartitionNumber,
    AddMonths(Peek('PartitionStart'), 3) As PartitionStart,
    AddMonths(Peek('PartitionEnd'), 3) As PartitionEnd
  AutoGenerate 1;
Loop

In the example, the main script looks at the minimum of the field “Import Date” in our source application table “RecDisc.dbo.SP_TABLE_ARC” and uses it as the beginning date of the first data partition, “PartitionStart”. Then, assuming a data partition is one quarter of data based on the dates, we set the ending date of the partition as “PartitionEnd”. We add this as our first partition, and then add further partition reference records until we reach today’s date. This results in our “Partition_List” reference table with the following fields:

  • PartitionNumber
  • PartitionStart (inclusive)
  • PartitionEnd (exclusive)

This is the part of the code you would replace with your own source application data and partition rule.
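To make the quarter-stepping concrete, the same Partition_List generation can be sketched in Python (an illustration of the logic, assuming quarterly partitions as in the script above):

```python
from datetime import date

def build_partition_list(min_import_date, today):
    """Generate (PartitionNumber, PartitionStart, PartitionEnd) rows,
    one quarter each, from the quarter containing the earliest record
    through the quarter containing today. Start is inclusive, end exclusive."""
    qsm = 3 * ((min_import_date.month - 1) // 3) + 1
    start = date(min_import_date.year, qsm, 1)
    rows, n = [], 0
    while True:
        n += 1
        # Advance one quarter; Oct-Dec rolls over to January of next year.
        end = (date(start.year + 1, 1, 1) if start.month == 10
               else date(start.year, start.month + 3, 1))
        rows.append((n, start, end))
        if end > today:
            break
        start = end
    return rows

# Earliest record 2014-02-17, run on 2014-11-03: four quarterly partitions.
for row in build_partition_list(date(2014, 2, 17), date(2014, 11, 3)):
    print(row)
```

Each row maps one-to-one onto an archive QVD, with PartitionStart supplying the date suffix in the file name.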

Next, we define our staging QVD file name and path.

Let vStagingFile = '$(vArchiveQVDPath)' & '$(vQVDName)' & '.qvd'; // Change to Staging QVD path and file name.

If our reference table has only one record, we need only one archive QVD: we call the BuildPartitionQVD sub-routine to create the archive QVD and then call BuildStagingQVD to create our staging QVD.


If Peek('PartitionNumber') = 1 then
  // Refresh the single partition QVD, then rebuild the staging QVD.
  Let vPartitionQVD = '$(vQVDName)' & '_' & Date(Peek('PartitionStart',-1,'Partition_List'),'YYYYMMDD') & '.qvd';
  TRACE [Info] Have only 1 partition. Refresh $(vPartitionQVD).;
  Call BuildPartitionQVD('$(vPartitionQVD)', Date(Peek('PartitionStart',-1,'Partition_List'),'YYYYMMDD'), Date(Peek('PartitionEnd',-1,'Partition_List'),'YYYYMMDD'));
  Call BuildStagingQVD('$(vQVDName)', Null());

Otherwise, we identify the last two archive QVDs as vOldPartitionQVD and vNewPartitionQVD. If the PartitionStart of vNewPartitionQVD is more than vOldPartitionRefreshThresholdInDays days old, we refresh only vNewPartitionQVD; otherwise, we refresh both archive QVDs. In either case, we finish by rebuilding the staging QVD.


ELSE // 2 or more partitions
  Let vNewPartitionQVD = '$(vQVDName)' & '_' & Date(Peek('PartitionStart',-1,'Partition_List'),'YYYYMMDD') & '.qvd';
  Let vOldPartitionQVD = '$(vQVDName)' & '_' & Date(Peek('PartitionStart',-2,'Partition_List'),'YYYYMMDD') & '.qvd';
  If Today()-Peek('PartitionStart',-1,'Partition_List') < $(vOldPartitionRefreshThresholdInDays) then
    Set vRefreshOldPartition = 1;
  ELSE
    Set vRefreshOldPartition = 0;
  ENDIF
  // If vNewPartitionQVD exists, refresh vNewPartitionQVD.
  If Filesize('$(vArchiveQVDPath)' & '$(vNewPartitionQVD)') > 0 then
    TRACE [Info] Have multiple partitions. Current partition exists. Refresh $(vNewPartitionQVD).;
    Call BuildPartitionQVD('$(vNewPartitionQVD)', Date(Peek('PartitionStart',-1,'Partition_List'),'YYYYMMDD'), Date(Peek('PartitionEnd',-1,'Partition_List'),'YYYYMMDD'));
    If Today()-Peek('PartitionStart',-1,'Partition_List') < $(vOldPartitionRefreshThresholdInDays) then
      TRACE [Info] Old partition end date is within the refresh threshold of $(vOldPartitionRefreshThresholdInDays) days. Refresh $(vOldPartitionQVD).;
      Call BuildPartitionQVD('$(vOldPartitionQVD)', Date(Peek('PartitionStart',-2,'Partition_List'),'YYYYMMDD'), Date(Peek('PartitionEnd',-2,'Partition_List'),'YYYYMMDD'));
    ENDIF
    TRACE [Info] Build staging QVD.;
    Call BuildStagingQVD('$(vQVDName)', Null());
  // If vNewPartitionQVD does not exist but vOldPartitionQVD exists, refresh vOldPartitionQVD and create vNewPartitionQVD.
  ELSEIF Not IsNull(Filesize('$(vArchiveQVDPath)' & '$(vOldPartitionQVD)')) then
    TRACE [Info] Have multiple partitions. Current partition does not exist but old one exists.;
    TRACE [Info] Refresh both $(vNewPartitionQVD) and $(vOldPartitionQVD).;
    Call BuildPartitionQVD('$(vOldPartitionQVD)', Date(Peek('PartitionStart',-2,'Partition_List'),'YYYYMMDD'), Date(Peek('PartitionEnd',-2,'Partition_List'),'YYYYMMDD'));
    Call BuildPartitionQVD('$(vNewPartitionQVD)', Date(Peek('PartitionStart',-1,'Partition_List'),'YYYYMMDD'), Date(Peek('PartitionEnd',-1,'Partition_List'),'YYYYMMDD'));
    TRACE [Info] Rebuild staging QVD.;
    Call BuildStagingQVD('$(vQVDName)', Null());
  // If neither vNewPartitionQVD nor vOldPartitionQVD exists, create all files.
  ELSE
    TRACE [Info] Have multiple partitions but they do not exist yet. Build all partitions.;
    FOR vPartitionStart = 1 to Peek('PartitionNumber')
      Let vPartitionQVD = '$(vQVDName)' & '_' & Date(Peek('PartitionStart',$(vPartitionStart)-1,'Partition_List'),'YYYYMMDD') & '.qvd';
      Call BuildPartitionQVD('$(vPartitionQVD)', Date(Peek('PartitionStart',$(vPartitionStart)-1,'Partition_List'),'YYYYMMDD'), Date(Peek('PartitionEnd',$(vPartitionStart)-1,'Partition_List'),'YYYYMMDD'));
    NEXT
    TRACE [Info] Build staging QVD.;
    Call BuildStagingQVD('$(vQVDName)', Null());
  ENDIF
ENDIF
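The threshold decision above boils down to a simple date comparison; here it is sketched in Python for clarity (the returned partition labels are hypothetical names, not part of the Qlik script):

```python
from datetime import date

def refresh_plan(new_partition_start, today, threshold_days=30):
    """Decide which of the last two archive partitions to rebuild. If the
    newest partition started recently, late-arriving updates may still
    touch the previous partition, so both are refreshed."""
    if (today - new_partition_start).days < threshold_days:
        return ["old", "new"]
    return ["new"]

print(refresh_plan(date(2015, 1, 1), date(2015, 1, 15)))  # ['old', 'new']
print(refresh_plan(date(2015, 1, 1), date(2015, 3, 15)))  # ['new']
```

This is why the threshold variable matters: a larger value tolerates later source updates at the cost of one extra partition refresh per run.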

Unveiling The Qlik Luminary Class Of 2015

This article recognizes some of the best and brightest customers, partners, and enthusiasts of Qlik.

Reference: You’re The Best! Unveiling The Qlik Luminary Class Of 2015

My 2015 College Basketball Championship Bracket

Note that this application only works in Internet Explorer, because it is based on .NET technology, not JavaScript.

Reference: My 2015 College Basketball Championship Bracket

Gartner 2015 Magic Quadrant

It is quite interesting how Microsoft jumped from a niche player to a visionary after it purchased Revolution Analytics, an R analytics company.

Reference:

Box Partners With Roambi To Attack The (TOFU) BI Market

The cooperation between Box and Roambi is an interesting joint venture. Essentially, they support the integration of “spreadsheet” files for BI reporting. Such technology is similar to the QVDs and QVWs used by Qlik in QlikView. The joint venture also introduces an interesting concept of “TOFU BI”, that is, supporting BI with rudimentary documents (instead of a database).

… the point of this partnership is to keep the wildly popular paradigm of self-service spreadsheets and add a delivery mechanism created for the modern, mobile world. Both Box and Roambi are well suited to solve parts of the problem and work together. Box acts as the repository that helps control the sprawl of hundreds or thousands of spreadsheets and makes them manageable. Roambi Analytics extracts data from spreadsheets and other sources and creates attractive dashboards or e-books (in the Roambi Flow product) that present data in an attractive way. …

Reference: Box Partners With Roambi To Attack The BI Market