Quick Tip – C# Batch Parallel Processing

There are a fair few creative solutions around for executing data processing tasks in parallel in SQL and SSIS. As good as they are, they’re not really necessary if you use Parallel.ForEach from the C# System.Threading.Tasks namespace. This is a parallel loop that can be used to iterate over the items in a collection and run some code for each item. All the thread allocation and loop tracking is handled internally without you having to do anything.

The only other thing you need to do is pass it a ParallelOptions parameter to specify how many concurrent executions you want, making the degree of parallelism adjustable and easy to configure at runtime:

// Cap the number of concurrent executions; adjust or read from config.
int maxP = 5;

// processList is a placeholder here for whatever collection of work items you iterate.
Parallel.ForEach(
    processList,
    new ParallelOptions { MaxDegreeOfParallelism = maxP },
    (currentProcess) =>
    {
        // Execute SSIS package
        // Execute SQL proc
        // Azure Data Lake Analytics job
    });


The Basics – BIML : Preview Pain & Multiple File Dependencies

BIML Express 2017 has been formally released, and a great new feature is the preview pane that lets you see how your BIML will be rendered during development. For metaprogramming this is awesome and saves a lot of pain. You can read about it here and here.


So what if you have multiple dependent BIML files? e.g. I have a file that retrieves my table metadata to build table definition BIML, and a file that consumes the table definition BIML to create my packages. This way I can use the same table definition BIML file for multiple package patterns. I just thought this would be too much for it to cope with in the first release – but it does in fact work, which is super awesome.

The trick is that the preview pane operates on all the open files. So all the dependent files have to be open to fully render the desired result in the preview pane.

So with just my package definition BIML file open, no package BIML code is rendered in the preview… this is because there are no table definitions to iterate over to create the packages.


If I open both the table definition BIML Script and the packages definition script then we’re all good.

Table definition BIML


Packages definition BIML Preview now has packages!


Auto DW – Metaprogramming

This is a high-level consideration of using metaprogramming to build Automated Data Provisioning frameworks.

Once you’ve mastered writing code and applying solution patterns to solve real-world problems, a natural next step is to write code that writes code! A lot of implementation technologies might even dip into this without you being aware, or you may just start dipping into it as a natural, innovative way to solve a certain problem. The topic is part of an advanced knowledge domain called metaprogramming; the wiki post discusses its pros and challenges. Kathleen Dollard has a great course on Pluralsight called Understanding Metaprogramming.

My own experience, and perhaps the most common, is that you’ll start metaprogramming before you’ve given the topic your full attention. I don’t remember getting out of bed and thinking… “today I will do some metaprogramming”. What happened is that chasing the benefits as a result of experiencing pain provided the motivation. The next thing to say about code writing code is that it can go fantastically well or horrifically badly. Without giving the knowledge domain the respect that it deserves, chances are it will be the latter.

Another fundamental software engineering trap I learnt the hard way is: don’t program generic solutions to very specific problems; you’ll pay for it in complexity and performance. The temptation can be quite strong because we’re taught to abstract and conquer, particularly if the problem looks the same – but is it? This is particularly relevant for data provisioning platforms because (not exhaustive):

  1. Performance is high on the agenda. We attempt to routinely and frequently move and change tons of data; performance is crucial for success and it’s directly related to cost
  2. The repetition is obvious and can appear to constitute a large proportion of man hours; to an economist it seems to be the same solution over and over again e.g. stage data, build snapshot fact, build type 1 dimension, build type 2 dimension, etc…
  3. The content (the data) is fluid and dynamic over a large temporal period. Beyond its schema definition it has a low-level and often incomplete persistence of semantics that can be elusive, a product of real-world economic and human behaviour
  4. The expectations and requirements are also fluid and dynamic; they seek to recover semantic meaning or information from data using reports, interactive data visualisation tools, semantic layers or other system-to-system interfaces.

So bringing this all into context:

  • Design patterns are common but the solutions are never the same. A type 2 dimension is a design pattern, not a solution. This isn’t helped by teams running bad agile delivery. A type 2 dimension is not a backlog story and neither is a fact table.
  • The solution is to provide specific information to meet a business requirement. Not only is it different on every single project, it’s different in the same project over time. The business has to respond to its market, which in turn motivates expectation and influences human behaviour, which in turn churns the raw solution content: the data. A static solution is not a solution.
  • The solution content is the data, which is also different on every single implementation and within an implementation over time. It has features and they can all change, either explicitly or implicitly
  • Performance in data platforms relies on issuing the correct amount of physical computing resources at exactly the right time. What this means is that a physical implementation needs to know about the features of the data very explicitly in order to allocate the correct amount of resources. Get it wrong in an on-premises architecture and a job hogs limited resources, causing other processes to suffer. Get it wrong on a cloud MPP architecture and you’ll pay too much money. This is not going away; why? Because information has entropy and you can’t cheat the laws of physics.

In Conclusion

Building a generic solution to solve the problem of repetition in Data Platform delivery isn’t the answer. The data is specific, the requirements are specific, and if you take this approach the solution is abstract, leading to overly complicated and poor-performing technical architectures. At their very worst they try to shoehorn the specifics into an architecture that hinders the goal and completely misses the requirements. I’d stick my neck out based on my own experience and state that two solutions are never the same; even in the same industry using the same operational systems.

Be very wary of magical all-singing, all-dancing products that claim to be a generic solution to data provisioning. AI is a long way off being able to derive specific semantics about the real world based on data. It’s just not possible right now… a lot of AI is approximate, based on population statistics; the features of data and information are very specific.

Metaprogramming solves the problem of repetition but delivers specific solution artefacts that don’t sacrifice what Data Platform implementations need in order to succeed, which is to:

  • Perform within their budget
  • Meet the business requirements

We aim to solve the repetition problem (and a whole host of secondary problems) during the development process and recognise that there are the following:

  • Specific metadata about the technical features of the raw data
  • Specific metadata about the technical features of the deliverables
  • Generic implementation patterns

Development frameworks can collect the metadata specifics and combine them with generic implementation patterns to automatically generate the code of our specific solution artefacts. No product or framework, however, can do the following:

  • Semantically analyse the data to determine the code required to perform the transformations that meet the information requirement. This requires real intelligence, i.e. a human! It can also be extremely hard if the data and requirements are particularly challenging – this is where the real business value sits in your solution
  • Decide which design patterns are best to use and how to construct them into a solution that meets the requirements. This requires knowledge and experience – an experienced solution architect

There are a number of technical ways to achieve metaprogramming. I generally work in the Microsoft data platform space. Here are some I’d used before I knew about metaprogramming:

  • XML/XSLT creating JavaScript!! Not data platform and a long time ago. Wouldn’t recommend it
  • SQL creating SQL (Dynamic SQL)
  • C# creating SSIS and SQL
  • T4 Templates
  • BIML (a Varigence creation)

I’ve built a few automated development frameworks using the above. Some of them were awful. I found myself neck deep in some crazy code maintenance and debugging hell, which motivated me to learn more about the ins and outs of metaprogramming. I strongly recommend Kathleen’s course Understanding Metaprogramming if you’re heading down this road since it goes into detail about the approaches and the best classes of solutions for code generating code. Now I only use BIML and T4 Templates.

The way that BIML works is actually very similar to T4 Templates; it’s just that BIML brings a really useful mark-up language and development IDE to the party for scripting the creation of SSIS packages and database objects. Varigence have also just released their automated development framework called BIML Flex if you don’t have the bandwidth/time to build your own.

As it turns out, tackling metadata as a technical requirement during the development cycle lends itself to solving other common, difficult problems in the data provisioning space, namely integrating the following:

  • Data Catalog
  • Data Lineage
  • Operational Logging
  • Operational Auditing

Because the metadata collection is required and the assets are created from it, integrating these data platform features becomes a by-product of the development process itself. It’s a very proactive and effective solution. Retrospective solutions in this space can never keep up with the pace of change, or are too invasive, requiring constant maintenance and support over and above the solution itself.




ADP Framework : Schema & Object Mapping

This is documentation for the schema and object metadata mappings for the Automated Data Provisioning (ADP) Framework using BIML, SSIS and SQL. The getting started documentation can be found here.

The ADP Framework has a metadata repository that holds the metadata and describes how data transfer is mapped across it; this is ultimately used to generate data loads and provide data lineage logging. The metadata repository is a bunch of tables that are created as an extension in the SSISDB. This blog documents these tables and describes how they are intended to be used.

Here is the diagram:



Data Object Tables

semanticinsight.system_component – This holds details and self-referencing mappings of logical system components.

It’s common for data provisioning platforms to be implemented as a hierarchy of system components such as data sources, stage databases, data marts, ODSs, data vaults and/or data warehouses. Sometimes logical sub-groupings are required in order to meet load provisioning and dependency requirements. The table is designed and intended to store components that may be a simple logical grouping or logically represent a physical component e.g. a database, file share, blob storage or data lake store. Currently the framework is set up for traditional RDBMS data provisioning, but the intention is to extend it for other NoSQL system components such as file shares, data lakes, etc.

semanticinsight.data_object – This holds details of objects that are at table level e.g. tables, views and procedures. It also holds details about how the data is formatted and should be loaded.

semanticinsight.data_schema – Data objects may be further classified into logical groups for security, maintenance and logical readability. Currently this table isn’t fully normalised and also holds the database name. This is for convenience, since this table is intended to be configured for the solution and there is no front end for the database as of yet.

semanticinsight.data_object_type – Defines what type a data object can be. Currently it can only be a Table, View or StoredProcedure.

semanticinsight.data_attribute – Defines the columns or attributes that a data object can have, and also their data type constraints.


Data Load Mapping Tables

These tables hold details about how the metadata is mapped into the data provisioning solution.

semanticinsight.data_schema_mapping – maps data flow from one system component schema to another.

semanticinsight.data_object_mapping – maps data flow from a source schema data object to another.

semanticinsight.data_attribute_mapping – You’ve guessed it; maps data flow from a source data object attribute to another.

The framework and solution do not allow breaking the hierarchy, i.e. sharing objects or attributes across schemas and databases. This is by design, because I hate spaghetti data platforms – solutions should have clean logical layers. A skill in building data platforms is providing simple solutions to complex problems.

The database designers amongst us may notice that the higher-level mappings of data objects and data schemas could just be implied by the data attribute mapping, which is the lowest-level mapping. Mapping at multiple levels is a very deliberate design decision.

The objective is to automate delivery, but we need to give the framework some high-level config to go on. This is what the following tables are for; they should be manually configured for a specific solution. These tables can be populated by modifying the stored procedure called semanticinsight.configure_system_component which is called in the metadata BIML scripts provided:

  • semanticinsight.system_component
  • semanticinsight.data_schema
  • semanticinsight.data_schema_mapping

The following tables can be automatically populated and mapped, with careful design, by the framework, which saves us a lot of time since data objects and their attributes can run into the thousands.

  • semanticinsight.data_object
  • semanticinsight.data_attribute
  • semanticinsight.data_object_mapping
  • semanticinsight.data_attribute_mapping


Example 1

This is the demo and getting started setup that the GitHub project comes with. It should be fairly intuitive.


Basically it shows we have 2 system components grouped into a solution called “Adventure Works BI”. System components must have a root node. The table has a relationship to its parent system component. It also has a relationship directly from any system component to the root component, which I added because I found it made my coding life a lot easier when querying the data that is needed.

We can see in the schema and schema mapping tables that the 2 databases Adventure Works and Stage are mapped together across schemas with identical names. This is not mandatory; schemas can be mapped as required, and the automated development framework will create the data objects in the schemas as described in this table.


Example 2

Here is another example. This example might be relevant if we have multiple operational source databases in multiple geographic regions loading into a single stage database for further integration. In this case we can’t use like-for-like schema names because the table names will clash. We could use the table names to make the distinction, but for many reasons I won’t go into here (the biggest one being code re-use) it’s better to keep common table names and use the schema name to make the distinction.



Automated Data Provisioning Framework – Release 1

I’m releasing my automated development framework for data provisioning onto GitHub. I’m doing it in stages just to make it more manageable for myself. Why am I doing this? Because I like coding and building things, and maybe someone will get some value from it.

Setting Expectation


To use it, expect to have or build a level of knowledge of the following skills:

  • Data Warehouse, ETL & ELT Architectural Design Patterns
  • SSIS
  • SQL Server
  • C#
  • BIML Express
  • T4 Templates

It’s a development framework for techies, and whilst it is set up ready to go with examples, with all projects there are always subtle design differences that will require configuration tweaks and/or extensions. The aim of the framework is tailored code re-use, thus:

  • Saving many (in fact rather a lot of) man hours
  • Providing a flexible framework
  • Providing an agile framework – steam ahead and don’t worry about having to rework stuff
  • Providing robust, high-quality deliverables with less human error
  • Not wasting time on low-level plumbing, allowing the team to focus on the difficult bits – e.g. data integration & BI transforms

It is not a tool for someone with no knowledge, experience or requirements to create an off-the-shelf MI platform. I’ve spent a long time delivering MI platforms, and in my humble experience every project has subtle differences that will make or break it; hence a highly flexible and agile framework is the way to go. Trying to shoehorn specific requirements into a generic solution, or even worse, data into a generic data model, never leads to happiness for anyone.

I’ll assist as much as possible (if asked) to help folks understand and make use of the assets.

Release 1


This release focuses on the core assets for delivering a simple bulk-loaded stage layer in less than 2 minutes, with a full metadata repository and ETL with data lineage and logging. In this release:

  • Metadata management repository
  • Metadata SQL Server scrapers to automatically fill the repository and map data flows at attribute level
  • Automated DDL creation of database tables
  • Automated ETL creation of OLEDB bulk load packages
  • .Net assembly to manage BIML integration with metadata repository


Framework Stage

It’s set up to use Adventure Works and can very quickly be changed to use any other SQL Server database(s) as source databases. This is because the metadata is scraped automatically from SQL Server. As the framework is extended I’ll add other source scrapers.

As it turns out Adventure Works was a good database to use because it uses all of the SQL Server datatypes and some custom data types too.

Release n


There’s loads more to add that will come in further releases. This is my initial list:

  • Patterns for loading other layers – probably the DW layer initially
  • MDS integration for metadata repository
  • Other stage loading BIML templates for MDS, Incremental Loads, CDC Loads
  • Automated stage indexing
  • Staging archive & retrieval
  • Meta scrapers to support other data source types
  • Tools to help generate meta data for flat files
  • Isolated test framework for loading patterns
  • Data lineage, dictionary, metadata and processing reports
  • Statistical process control – track and predict loading performance

The Good Stuff


I don’t want to procrastinate over documentation too much but will flesh out more detail as and when I can. Onto the good stuff.


Automating DW Delivery – T4 Templates & BIML

The awareness of BIML has improved somewhat in the last few years, but still very little is known about T4 Templates, which are the topic of this post.

Just a bit of fundamentals. Effectively, ETL, ELT and data provisioning platforms are predominantly schema-typed, and this isn’t going away whilst we’re on silicon chips. The primary reason for this is performance! Even in MPP architectures you’re going to hit a strongly typed schema at some point; you can’t escape the entropy of information.

The Good


SSIS has to size a memory pipeline for its given data feed. It does this so it can shovel the data through fast, distributed or not; and, just to mention, the next version of SSIS can be distributed across multiple servers (perhaps we’ll see it in Azure one day). SSIS provides a hugely flexible and re-usable GUI to build pretty much any workflow and data feed your abilities will allow; so what’s the rub?

The Bad


Each SSIS executable you create has a specific use and definition of metadata, with re-usable but very low-level features that are click-tastic enough to give you RSI after one large project. It’s labour intensive to the max and consequently error prone.

Designers cannot separate the metadata definitions from re-usable implementation patterns


This is a real pain… Why can’t I just build a template SSIS dimension load package, fire in my metadata definitions for all my dimensions, and implement them all with a single click of the mouse, getting all the re-use of fully tested code that any sane software engineer would expect? Well you can, but not with VS alone; enter BIML.



In short, BIML (Business Intelligence Markup Language) returns SSIS back to a respectable coding experience where I can define my template SSIS patterns using readable XML, hook up a central metadata repository using C# and generate all my DW feeds. Now we’re talking… proper code re-use delivering specific and fast SSIS executables.
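To give a flavour of what that looks like (a hand-written sketch – the package names and table list are made up, and a real file would iterate a metadata repository instead of a hard-coded array), a BIML file mixes declarative XML with C# code nuggets:

```xml
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Packages>
    <# foreach (var table in new[] { "Customer", "Product" }) { #>
    <Package Name="Load_<#= table #>" ConstraintMode="Linear">
      <!-- the re-usable dataflow pattern for each table goes here -->
    </Package>
    <# } #>
  </Packages>
</Biml>
```

One template, one pattern, and a package per table rendered from it.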

There’s a ton of info on BIML already; if this is new to you, stop here and go learn BIML! I have a couple of starter vids and there are loads of other tech writers on the topic:


T4 Templates


So what’s this about T4 Templates then?

Well, BIML is only free to a point (BIML Express). It does have a fully blown IDE (Mist) if you have the bucks. Also, a lot of good SSIS patterns for some of the more upstream transformations require relatively complicated stored procedures that can’t be created using BIML templates. You can of course write your own metadata SQL wrapper using C#, which is a fair amount of leg work to say the least – it’s a long road; I have been on it.

Another option is dynamic SQL – which in my humble opinion are two words that should never go together! Why? Because SQL is terrible for text parsing / code generation, and all the other reasons that are just too upsetting to list.

Or… you can use T4 Templates!

T4 Templates have been in Visual Studio since 2005. They’re basically a similar concept to BIML, except the output can be any text – not just BIML. I can template out any other language, using C# to dynamically push in metadata in very much the same way that BIML works. I’m not limited to a specific language either; it could be used to create SQL procedures, Azure Data Factory JSON pipelines, Azure Data Lake jobs or C#.

It was included in Visual Studio specifically for templating code generation at design time or run time. It’s used by Visual Studio itself to generate code from UI designers, such as Entity Framework for example. T4 comes from the abbreviation of:

  • Text – dynamically creates text output using template files
  • Template – a combination of text blocks & control logic written in C# or VB.NET
  • Transformation – transforms the template into executable code and executes it to produce the final output
  • Toolkit – a set of assemblies packaged into Visual Studio

There are 2 types of Templates that can be used:

  • Design Time – templates are executed at design time to generate code
  • Run Time – templates are compiled into classes in the application and executed when the application runs. They can receive parameters and sit within control logic.
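For illustration, here’s a minimal design-time T4 template sketch. It’s made up for this post – the hard-coded table array stands in for a query against a metadata repository – but it shows the text-block/control-logic mix:

```
<#@ template language="C#" #>
<#@ output extension=".sql" #>
<# // Hypothetical metadata; a real framework would query the repository.
   var tables = new[] { "Customer", "Product" }; #>
<# foreach (var table in tables) { #>
TRUNCATE TABLE stage.<#= table #>;
<# } #>
```

Saving this as a .tt file in Visual Studio renders a .sql file with one TRUNCATE statement per table.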

Using T4 Instead of BIML


It’s a non-starter, forget it! Whilst you could script out SSIS packages using the native XML, it’s just not (that) human readable. BIML brings the mark-up language to the party, which is more human readable by a country mile. You’d be mad to try to script out native SSIS XML whilst BIML Express is available for free.

Design or Run Time


On the topic of code generating code: should we be generating and executing code during design time or run time? I’ve seen various flavours of frameworks that do both, or somewhere in between.

I’m a firm believer that code generation should only occur during design time, or more specifically during a development process. The development process has all the bells and whistles required to help manage risk during a code generation exercise, e.g. accidentally loading live data to an unsecured area, or loading DEV data into live, not to mention all the other bugs and failures that could occur.

Do we really want dynamic code executing in live that has never run before? Also, debugging and issue resolution is an enormous pain in the neck if the code that ran isn’t available, easy to isolate and debug – dynamic run-time frameworks written in anger tend to be low on support features and overcomplicated!

Also, the arguments for dynamic run time seem to be about circumventing bureaucratic change control that was put in place because of a poor development process. The solution to robust, agile BI development isn’t slipping in a cheeky back door to make clandestine changes; it is in fact continuous integration, which is a whole other detailed topic.

To use T4 templates with BIML frameworks we use the run-time execution type, but during the development process, since the classes that T4 compiles can be called from the C# that BIML executes during package creation. So in that respect they execute at the run time of the BIML execution, not the run time of the warehouse load.



I’ve still yet to come across a team fully automating the delivery of a data warehouse using metadata. The tools are available (and free), so I’m not sure what the barriers are.

I’ll start dumping my BIML and T4 Template framework assets into Git when I get the chance, and hopefully show T4 working in unison with BIML to do more than just load a basic staging database layer.




Quick Tip – SSAS Degenerate Attributes

Not a huge amount of technical detail in this one… just a simple tip for SSAS, because I’ve seen it more times now than I can remember and ultimately it boils down to a fundamental misunderstanding of technical architecture and the role SSAS plays in that architecture. A common response is that SSAS is “doing something weird”, or “SSAS is rubbish and not fit for purpose”, leading to awkward, overly complex workarounds. SSAS is awesome if you use it for what it’s meant for.

Don’t put degenerate attributes in a SSAS cube


A degenerate attribute has a unique attribute value for every row in a fact table. A cube is for browsing aggregates and drilling down into a narrow set of detail.

  • How is a degenerate attribute useful for a person to look at holistically, or to analytically slice and dice measures? It isn’t. What will most likely happen is that a user will drag it onto an ad-hoc report and then get very frustrated waiting for it to return
  • Secondly, the cube will take a very, very long time to process, particularly with column storage and especially if it’s a string data type. Do you really want to process millions of unique values into a distinct column store? Is that going to achieve anything?

If you really must put degenerate information into a BI model:

  • Bucket them up into bandings so they can be aggregated
  • Don’t include the leaf-level detail as an attribute or column… use drill-through or a report action to navigate to low-level transactions on a narrow selection of data… cubes are for aggregates and drilling down into a narrow set of detail.
  • If they’re needed for a relationship, something has gone wrong in your Kimball model. Rethink the model so it provides detail for drill-through but binds on something that can be better aggregated and grouped.