Quick Tip – TSQL: Parameter & Variable Naming

Saw this a while ago and have used it ever since… can't remember who from, otherwise I would credit them…

I always use @@ for parameters and @ for variables. It makes for easier IntelliSense and better readability.

CREATE PROCEDURE [dbo].[my_proc]
(
  @@name varchar(100),
  @@is_enabled bit,
  @@category char(5)
)
AS
BEGIN

  DECLARE @dte DATETIME = GETDATE();

  SELECT
    name,
    status
  FROM dbo.process
  WHERE name = @@name
    AND is_enabled = @@is_enabled
    AND category = @@category
    AND rundate < @dte;

END

Quick Tip – TSQL: Validate Procedure Metadata

So let's say you're building some sort of DB integration and you want to list only those procs that have a specific input and output definition. Here's one way to do it… I used a table variable to hold the definitions, but obviously there might be a better way to do that!

First create a proc we can target:

CREATE PROCEDURE [dbo].[my_proc]
(
  @@name varchar(100),
  @@is_enabled bit,
  @@category char(5)
)
AS
BEGIN

  SELECT
    name,
    object_id,
    schema_id,
    type_desc,
    create_date
  FROM sys.tables;

END

Here's some T-SQL that returns the list of procs matching the specified inputs and outputs:

DECLARE @valid_definition TABLE
( 
  name VARCHAR(150) NOT NULL, 
  type VARCHAR(50) NOT NULL, 
  ordinal INT NOT NULL, 
  direction BIT NOT NULL -- 1 = input, 0 = output
) 

  INSERT INTO @valid_definition (name, type, ordinal, direction) 
  VALUES 
    ('@@name',       'varchar'       ,1 ,1), 
    ('@@is_enabled', 'bit'           ,2 ,1), 
    ('@@category',   'char'          ,3 ,1), 
    ('name',         'nvarchar(128)' ,1 ,0), 
    ('object_id',    'int'           ,2 ,0), 
    ('schema_id',    'int'           ,3 ,0), 
    ('type_desc',    'nvarchar(60)'  ,4 ,0), 
    ('create_date',  'datetime'      ,5 ,0)

  ;WITH cte_params AS 
  ( 
    SELECT  
      pa.object_id, 
      procedure_name   = '[' + OBJECT_SCHEMA_NAME(pa.object_id) + '].[' + OBJECT_NAME(pa.object_id) + ']', 
      parameter_name   = pa.name,
      parameter_type   = TYPE_NAME(pa.user_type_id),
      parameter_length = pa.max_length,
      parameter_prec   = CASE WHEN TYPE_NAME(pa.system_type_id) = 'uniqueidentifier' THEN precision ELSE OdbcPrec(pa.system_type_id, pa.max_length, precision) END,   
      parameter_scale  = OdbcScale(pa.system_type_id, pa.scale),
      parameter_order  = pa.parameter_id 
    FROM sys.parameters pa  
  )
  SELECT DISTINCT
    proc_signature.procedure_name
  FROM
  (
    SELECT
      pa.procedure_name
    FROM sys.procedures pr
    JOIN cte_params pa ON OBJECT_ID(SCHEMA_NAME(pr.schema_id) + '.' + pr.name) = pa.object_id
    JOIN @valid_definition vd ON vd.name = pa.parameter_name AND vd.type = pa.parameter_type AND vd.ordinal = pa.parameter_order AND vd.direction = 1
    GROUP BY pa.procedure_name
    HAVING COUNT(1) =
    (
      SELECT COUNT(1)
      FROM @valid_definition WHERE direction = 1
    )
  ) proc_signature
  -- get the input procedure signature
  CROSS APPLY sys.dm_exec_describe_first_result_set (proc_signature.procedure_name, NULL, 0) proc_metadata
  -- get the output metadata
  JOIN @valid_definition vd ON vd.name = proc_metadata.name AND vd.type = proc_metadata.system_type_name AND vd.ordinal = proc_metadata.column_ordinal
  GROUP BY proc_signature.procedure_name
  HAVING COUNT(1) = 
  (
    SELECT COUNT(1) 
    FROM @valid_definition WHERE direction = 0
  )
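
If you only need to sanity-check the output shape of one specific procedure, the object-id variant of the same DMF can be queried directly. This is a minimal sketch against the demo proc created above; compare its rows with the direction = 0 entries in @valid_definition.

-- Minimal sketch: describe the output metadata of one specific proc.
SELECT
  rs.column_ordinal,
  rs.name,
  rs.system_type_name
FROM sys.dm_exec_describe_first_result_set_for_object(OBJECT_ID('[dbo].[my_proc]'), 0) rs
ORDER BY rs.column_ordinal;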

The Basics – T-SQL: Recursive Inheritance

This is a post about a SQL design pattern for applying configuration attributes to a parent-child table. There are a bunch of obvious use cases, particularly around metadata and ETL control & configuration repositories…

Imagine your ETL has hundreds if not thousands of processes. It often makes sense to model process metadata in a parent-child table, since you have one root process that, when executed, executes a hierarchy of child processes. If you then have configuration options for those processes, you won't want to maintain a configuration row for every process, right? That would be a huge maintenance headache. Ideally we would configure the root process with options that the child processes simply inherit unless they're specifically overridden. This gives us the best of both worlds: the flexibility of specific configurations for child processes, while processes without their own configuration inherit it from the parent, resulting in less configuration maintenance. This model doesn't persist the inherited configuration; the table stays sparse and the inheritance is derived logically when queried, removing the need for extra updates or inserts.

We’ll have 2 tables:

  1. The parent child entity: process_executable
  2. The configuration attributes: process_configuration

Here they are:

CREATE TABLE [etl].[process_executable] (
 [process_executable_id] INT IDENTITY (1, 1) NOT NULL,
 [parent_process_executable_id] INT DEFAULT ((-1)) NOT NULL,
 [connection_id] INT NOT NULL,
 [process_executable_name] VARCHAR (200) NOT NULL,
 [created_date] DATETIME DEFAULT (getdate()) NOT NULL,
 [is_enabled] BIT DEFAULT ((1)) NOT NULL,
 CONSTRAINT [PK_process_executable] PRIMARY KEY CLUSTERED ([process_executable_id] ASC)
);


CREATE TABLE [etl].[process_configuration] (
 [process_configuration_id] INT IDENTITY (1, 1) NOT NULL,
 [process_executable_id] INT NULL,
 [number_of_retries] INT NULL,
 [is_incremental] BIT NULL,
 CONSTRAINT [PK_process_configuration] PRIMARY KEY CLUSTERED ([process_configuration_id] ASC)
);
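
To make the inheritance behaviour easy to see further down, here is some purely illustrative sample data (the ids and values are made up for this example): a root process with a configuration row, one child with no configuration and one child that overrides number_of_retries.

-- Illustrative sample data only; values are made up for the example.
SET IDENTITY_INSERT [etl].[process_executable] ON;
INSERT INTO [etl].[process_executable]
  (process_executable_id, parent_process_executable_id, connection_id, process_executable_name)
VALUES
  (1, -1, 1, 'root_load'),      -- root process
  (2,  1, 1, 'load_customers'), -- child with no configuration row
  (3,  1, 1, 'load_sales');     -- child that overrides number_of_retries
SET IDENTITY_INSERT [etl].[process_executable] OFF;

INSERT INTO [etl].[process_configuration]
  (process_executable_id, number_of_retries, is_incremental)
VALUES
  (1, 3, 1),     -- root configuration
  (3, 5, NULL);  -- override for load_sales: retries only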

Here is the solution.

First, left join the configurations in so we get all processes, with and without configurations. Then recurse down the hierarchy using a recursive CTE, defaulting the configurations to the parent values where they are null.

WITH [cte_process_configuration]
AS --join in the configuration
(
  SELECT
    p.[process_executable_id],
    p.[parent_process_executable_id],
    pc.number_of_retries,
    pc.is_incremental
  FROM etl.process_executable p
  LEFT JOIN [etl].[process_configuration] pc ON p.process_executable_id = pc.process_executable_id
),
cte_collapse
AS --recurse down, carrying the parent settings down to children where they don't have settings
(
  SELECT
    m.process_executable_id,
    m.parent_process_executable_id,
    m.number_of_retries,
    m.is_incremental
  FROM [cte_process_configuration] m
  WHERE m.parent_process_executable_id = -1

  UNION ALL

  SELECT
    m.process_executable_id,
    m.parent_process_executable_id,
    --carry down the settings where there aren't any
    ISNULL(m.number_of_retries, pm.number_of_retries) AS number_of_retries,
    ISNULL(m.is_incremental, pm.is_incremental) AS is_incremental
  FROM [cte_process_configuration] m
  JOIN cte_collapse pm ON m.parent_process_executable_id = pm.process_executable_id
)
--show me the money!
SELECT
  process_executable_id,
  number_of_retries,
  is_incremental
FROM cte_collapse c;
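
With the illustrative sample data above you would expect the root to keep its own settings (3 retries, incremental), load_customers to inherit both values from the root, and load_sales to keep its override of 5 retries while inheriting is_incremental from the root.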

 

Quick Tip – C# Batch Parallel Processing

There are a fair few creative solutions around for executing data processing tasks in parallel in SQL and SSIS. As good as they are, they're not really necessary if you use Parallel.ForEach from the System.Threading.Tasks namespace in C#. This is a parallel loop that iterates the items in a collection and runs some code for each item. All the thread allocation and loop tracking is handled internally without you having to do anything.

The only other thing you might want to do is pass in a ParallelOptions instance to specify how many concurrent executions you want, making the degree of parallelism adjustable and easy to configure at runtime:

int maxP = 5;

Parallel.ForEach(
    collectionOfProcesses,
    new ParallelOptions { MaxDegreeOfParallelism = maxP },
    (currentProcess) =>
    {
        // Execute SSIS package
        // Execute SQL proc
        // Azure Data Lake Analytics job
        Execute(currentProcess);
    });
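
One thing to be aware of that isn't shown above: Parallel.ForEach is a blocking call, and any exceptions thrown inside the loop body are collected and re-thrown as a single AggregateException when the loop finishes, so wrap the call (or the body) accordingly if one failed process shouldn't stop the whole batch.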

The Basics – BIML: Preview Pane & Multiple File Dependencies

BIML Express 2017 has been formally released, and a great new feature is the preview pane, which lets you see how your BIML will be rendered during development. For metaprogramming this is awesome and saves a lot of pain. You can read about it here and here.

pvpain.png

So what if you have multiple dependent BIML files? E.g. I have a file that retrieves my table metadata to build table definition BIML, and a file that consumes that table definition BIML to create my packages. This way I can use the same table definition BIML file for multiple package patterns. I thought this would be too much for it to cope with in the 1st release – but it does in fact work, which is super awesome.

The trick is that the preview pane operates on all the open files, so all the dependent files have to be open to fully render the desired result in the preview pane.

So with just my package definition BIML file open, no package BIML code is rendered in the preview… this is because there are no table defs to iterate over to create the packages.

pvpainpkg

If I open both the table definition BIML script and the package definition script then we're all good.

Table definition BIML

pvpaintbl

Packages definition BIML Preview now has packages!

pvpainpkgwrk

Auto DW – Metaprogramming

This is a high level consideration on using metaprogramming to build Automated Data Provisioning frameworks.

Once you've mastered writing code and applying solution patterns to solve real-world problems, a natural next step is to write code that writes code! A lot of implementation technologies dip into this without you even being aware of it, or you may just start dipping into it as a natural, innovative way to solve a certain problem. The topic is part of an advanced knowledge domain called Metaprogramming; the wiki post discusses its pros and challenges. Kathleen Dollard has a great course on Pluralsight called Understanding Metaprogramming.

My own experience, and perhaps the most common, is that you'll start metaprogramming before you've given the topic its full attention. I don't remember getting out of bed and thinking… "today I will do some metaprogramming". What happened is that chasing the benefits, as a result of experiencing pain, provided the motivation. The next thing to say about code writing code is that it can go fantastically well or horrifically badly. Without giving the knowledge domain the respect it deserves, chances are it will be the latter.

Another fundamental software engineering trap I learnt the hard way: don't program generic solutions to very specific problems; you'll pay for it in complexity and performance. The temptation can be quite strong because we're taught to abstract and conquer, particularly if the problem looks the same – but is it? This is particularly relevant for data provisioning platforms because (not exhaustive):

  1. Performance is high on the agenda. We attempt to routinely and frequently move and change tons of data; performance is crucial for success and it’s directly related to costs
  2. The repetition is obvious and can appear to constitute a large proportion of man hours; to an economist it seems to be the same solution over and over again e.g. stage data, build snapshot fact, build type 1 dimension, build type 2 dimension, etc…
  3. The content (the data) is fluid and dynamic over a long temporal period. Beyond its schema definition it has a low-level and often incomplete persistence of semantics, which can be elusive since they are a product of real-world economic and human behaviour
  4. The expectations and requirements are also fluid and dynamic; they seek to recover semantic meaning or information from data using reports, interactive data visualisation tools, semantic layers or other system-to-system interfaces.

So bringing this all into context:

  • Design patterns are common but the solutions are never the same. A type 2 dimension is a design pattern, not a solution. This isn't helped by teams running bad agile delivery: a type 2 dimension is not a backlog story, and neither is a fact table.
  • The solution is to provide specific information to meet a business requirement. Not only is it different on every single project, it's different in the same project over time. The business has to respond to its market, which in turn shifts expectations and influences human behaviour, which in turn churns the raw solution content: the data. A static solution is not a solution.
  • The solution content is the data, which is also different on every single implementation, and within an implementation over time. It has features, and they can all change either explicitly or implicitly.
  • Performance in data platforms relies on issuing the correct amount of physical computing resources at exactly the right time. What this means is that a physical implementation needs to know the features of the data very explicitly in order to allocate the correct amount of resources. Get it wrong in an on-premises architecture and a job hogs limited resources, causing other processes to suffer. Get it wrong on a cloud MPP architecture and you'll pay too much money. This is not going away; why? Because information has entropy and you can't cheat the laws of physics.

In Conclusion

Building a generic solution to solve the problem of repetition in Data Platform delivery isn't the answer. The data is specific, the requirements are specific, and if you take this approach the solution is abstract, leading to overly complicated and poorly performing technical architectures. At their very worst they try to shoehorn the specifics into an architecture that hinders the goal and completely misses the requirements. I'd stick my neck out, based on my own experience, and state that no 2 solutions are ever the same; even in the same industry using the same operational systems.

Be very wary of magical, all-singing, all-dancing products that claim to be a generic solution to data provisioning. AI is a long way off being able to derive specific semantics about the real world from data. It's just not possible right now… a lot of AI is approximate and based on population statistics; the features of data and information are very specific.

Metaprogramming solves the problem of repetition while still delivering specific solution artefacts, so it doesn't sacrifice what Data Platform implementations need in order to succeed, which is to:

  • Perform within their budget
  • Meet the business requirements

We aim to solve the repetition problem (and a whole host of secondary problems) during the development process, recognising that there are the following:

  • Specific metadata about the technical features of the raw data
  • Specific metadata about the technical features of the deliverables
  • Generic implementation patterns

Development frameworks can collect the metadata specifics and combine them with generic implementation patterns to automatically generate the code for our specific solution artefacts. No product or framework, however, can do the following:

  • Semantically analyse the data to determine the transformation code required to meet the information requirement. This requires real intelligence, i.e. a human! It can also be extremely hard if the data and requirements are particularly challenging – this is where the real business value sits in your solution
  • Decide which design patterns are best to use and how to compose them into a solution that meets the requirements. This requires knowledge and experience – an experienced solution architect

There are a number of technical ways to achieve metaprogramming. I generally work in the Microsoft data platform space. Here are some I've used, before I knew much about metaprogramming:

  • XML/XSLT creating JavaScript!! Not data platform and a long time ago. Wouldn’t recommend it
  • SQL creating SQL (Dynamic SQL)
  • C# creating SSIS and SQL
  • T4 Templates
  • BIML (a Varigence creation)

I've built a few automated development frameworks using the above. Some of them were awful. I found myself neck deep in some crazy code maintenance and debugging hell, which motivated me to learn more about the ins and outs of metaprogramming. I strongly recommend Kathleen's course Understanding Metaprogramming if you're heading down this road, since it goes into detail about the approaches and the best classes of solutions for code generating code. Now I only use the following: T4 Templates and BIML.

The way that BIML works is actually very similar to T4 Templates; it's just that BIML brings a really useful mark-up language and development IDE to the party for scripting the creation of SSIS packages and database objects. They have also just released their automated development framework, BIML Flex, if you don't have the bandwidth/time to build your own.

As it turns out, tackling metadata as a technical requirement during the development cycle lends itself to solving other common, difficult problems in the data provisioning space, namely integrating the following:

  • Data Catalog
  • Data Lineage
  • Operational Logging
  • Operational Auditing

Because the metadata collection is required and the assets are created from it, integrating these data platform features becomes a by-product of the development process itself. It's a very proactive and effective solution. Retrospective solutions in this space can never keep up with the pace of change, or are too intrusive, requiring constant maintenance and support over and above the solution itself.

 

 

 

ADP Framework : Schema & Object Mapping

This is the documentation for the schema and object metadata mappings in the Automated Data Provisioning (ADP) Framework, which uses BIML, SSIS and SQL. The getting started documentation can be found here.

The ADP Framework has a metadata repository that stores the metadata and the mappings that describe how data transfers across it; this is ultimately used to generate the data loads and to provide data lineage logging. The repository is a set of tables created as an extension in the SSISDB. This post documents those tables and describes how they are intended to be used.

Here is the diagram:

metadatatables

 

Data Object Tables

semanticinsight.system_component – This holds details and self-referencing mappings of logical system components.

It's common for a data provisioning platform to be implemented as a hierarchy of system components such as data sources, stage databases, data marts, ODSs, data vaults and/or data warehouses. Sometimes logical sub-groupings are required in order to meet load provisioning and dependency requirements. The table is designed to store components that may be a simple logical grouping or that logically represent a physical component, e.g. a database, file share, blob storage or data lake store. Currently the framework is set up for traditional RDBMS data provisioning, but the intention is to extend it to other, non-relational system components such as file shares, data lakes, etc.

semanticinsight.data_object – This holds details of objects at table level, e.g. tables, views and procedures. It also holds details about how the data is formatted and should be loaded.

semanticinsight.data_schema – Data objects may be further classified into logical groups for security, maintenance and logical readability. Currently this table isn't fully normalised and also holds the database name; this is for convenience, since the table is intended to be configured per solution and there is no front end for the database as of yet.

semanticinsight.data_object_type – Defines what type a data object can be. Currently it can only be a Table, View or StoredProcedure.

semanticinsight.data_attribute – Defines the columns or attributes that a data object can have, along with their data types and constraints.

 

Data Load Mapping Tables

These tables hold details about how the metadata is mapped through the data provisioning solution.

semanticinsight.data_schema_mapping – maps data flow from one system component's schema to another.

semanticinsight.data_object_mapping – maps data flow from a source schema data object to another.

semanticinsight.data_attribute_mapping – You’ve guessed it; maps data flow from a source data object attribute to another.

The framework and solution do not allow breaking the hierarchy, i.e. sharing objects or attributes across schemas and databases. This is by design, because I hate spaghetti data platforms – solutions should have clean logical layers. A skill in building data platforms is providing simple solutions to complex problems.

The database designers amongst us may notice that the higher-level mappings of data objects and data schemas could simply be implied by the data attribute mapping, which is the lowest-level mapping. Mapping at multiple levels is a very deliberate design decision.

The objective is to automate delivery, but we need to give the framework some high-level config to go on. That is what the following tables are for; they should be manually configured for a specific solution. They can be populated by modifying the stored procedure called semanticinsight.configure_system_component, which is called in the metadata BIML scripts provided (an illustrative sketch follows the list below):

  • semanticinsight.system_component
  • semanticinsight.data_schema
  • semanticinsight.data_schema_mapping
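
For illustration only, a configuration for the Example 1 setup further down might look something like the sketch below. The column names and id values here are assumptions made for the example rather than the framework's actual definitions, so check them against the repository tables in your SSISDB before using anything like this.

-- Illustrative sketch only: column names and ids below are assumed for the example.
-- One root component ("Adventure Works BI") with a source and a stage database.
INSERT INTO semanticinsight.system_component
  (system_component_id, system_component_name, parent_system_component_id, root_system_component_id)
VALUES
  (1, 'Adventure Works BI', NULL, 1), -- root node of the solution
  (2, 'AdventureWorks',     1,    1), -- source database component
  (3, 'Stage',              1,    1); -- stage database component

-- Schemas for each database component (data_schema also holds the database name).
INSERT INTO semanticinsight.data_schema
  (data_schema_id, system_component_id, database_name, schema_name)
VALUES
  (1, 2, 'AdventureWorks', 'Sales'),
  (2, 3, 'Stage',          'Sales');

-- Map the source schema to the stage schema.
INSERT INTO semanticinsight.data_schema_mapping
  (source_data_schema_id, target_data_schema_id)
VALUES
  (1, 2);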

The following tables can be automatically populated and mapped by the framework with careful design, which saves us a lot of time since data objects and their attributes can run into the 1000s.

  • semanticinsight.data_object
  • semanticinsight.data_attribute
  • semanticinsight.data_object_mapping
  • semanticinsight.data_attribute_mapping

 

Example 1

This is the demo and getting started setup that the GitHub project comes with. It should be fairly intuitive.

schema_mapping1

Basically it shows we have 2 system components grouped into a solution called "Adventure Works BI". System components must have a root node. The table has a relationship to its parent system component, and it also has a relationship directly from every system component to the root component, which I added because it made querying the required data a lot easier.

We can see in the schema and schema mapping tables that the 2 databases, Adventure Works and Stage, are mapped together across schemas with identical names. This is not mandatory; schemas can be mapped as required, and the automated development framework will create the data objects in the schemas as described in this table.

 

Example 2

Here is another example. This one might be relevant if we have multiple operational source databases in multiple geographic regions loading into a single stage database for further integration. In this case we can't use like-for-like schema names because the table names will clash. We could use the table names to make the distinction, but for many reasons I won't go into here (the biggest one being code re-use) it's better to keep common table names and use the schema name to make the distinction.

schema_mapping2