Azure Big Data: Spark 2.3 CentOS VM Standalone

Of late I’ve been investigating the options for running a Spark big data platform on Azure using Blob Storage and Data Lake for data storage. So far I’ve poked around with the following – which I may blog about if I get time:

  • IaaS Linux VM Build (standalone and clustered)
  • HDInsight
  • Databricks
  • Spark on Azure Container Cluster (AKS preview) i.e. Kubernetes

This is a basic how-to for installing Spark 2.3 on a standalone CentOS VM in Azure: the latest and greatest build of Spark 2.3, CentOS 7.4 (Linux), Scala 2.11.12 and Java 8+. There are later versions of Scala, but Spark 2.3 requires Scala 2.11 at most, as covered in the Spark documentation.

Preparing Your Client Machine

 

  1. Install a bash client
  2. Create an SSH RSA key – we need this before creating the Azure VM

We’re setting up a Linux server to run Spark on a CentOS VM in Azure. I’m not going to bother with a Linux desktop or remote desktop, but we’ll need a client bash terminal to connect to the machine in order to:

  • Administer CentOS & install software
  • Use the Spark terminal

I run macOS and Windows 10; mostly Windows 10. If you’re running a Mac you don’t need a bash client terminal since you have one already. Windows, however, does need a bash client.

There is the new Windows Subsystem for Linux available in the Windows 10 Fall Creators Update, but I hit some issues with it so wouldn’t advise it yet. It’s not just a bash client; it runs a local Linux subsystem, which brings some irritating complications. The best experience I’ve had by far is with Git Bash, so go ahead and install that if you’re using Windows.

Once we have bash we need to create a public/private key pair so that we can use the Secure Shell (SSH) command to securely connect to our Linux VM.

Open Git Bash and execute the following:

ssh-keygen -t rsa -b 2048

This creates a 2048-bit RSA public/private key pair that we can use to connect to our Azure VM. You’ll be prompted for a filename and a passphrase. See here:

[Screenshot: ssh-keygen prompts and output in Git Bash]

As we can see it says that it created the key in:

C:\Users\shaun\.ssh\id_rsa

In my case it didn’t, however; demokey and demokey.pub ended up in my bash home directory:

C:\users\shaun\demokey.pub (this is the public key)
C:\users\shaun\demokey (no extension – this is the private key)

Review these files using Notepad. Copy the private key to the .ssh folder and rename it to id_rsa:

C:\Users\shaun\.ssh\id_rsa
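If you prefer to do the copy from Git Bash rather than Explorer, something like the following works (a rough sketch; it assumes your key files were saved as demokey and demokey.pub in the bash home directory as above):

# copy the private key into the .ssh folder as id_rsa and restrict its permissions
mkdir -p ~/.ssh
cp ~/demokey ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa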

Keep a note of the public key, which looks something like the one below, because it lives on the Linux server and we need it when creating the Linux VM in Azure. Also don’t forget the passphrase you entered, because we need that to log in using the ssh command.

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBkb5GTWTIAtGhZeHNKXwbVF6WoQqb0u23D3opQc0TId9NdlWj8WnYmFu1/l4CuqdD/uzE7/JJTP2pW9mrb3/sZyygF560XGQzTmEUAGRlAexTr509Q0wB/Spekp9qGLVqkys3wQdbxjWsWI2lEhwJIvPlyzgzIAJrmeUU/NGS6rQN+tzoqntg4V2fI714W7f0YRerUICb9rveVwbDU0ieihs1B+n+ljNoJ+J3yFAKqYVcYyQIL4WYmpYgi/M1EMOyrRZK0hVySIbhGh4eI1FBOfplxEOhI8SgedK1KaemhBWs4f+zs1bntqkSCgFHJzV/eLUHDsYxTrgEK3Tn9s5X shaun@DESKTOP-CKJA9OR

Right, time to create our Linux VM.

Create CentOS VM in Azure

 

Log into the Azure Portal, click Create a Resource and search for CentOS. Choose the CentOS-based 7.4 image and hit create.

[Screenshot: CentOS-based 7.4 image in the Azure Marketplace]

Fill in the necessaries in order to create your VM, choosing the most affordable and appropriate machine. For a standalone demo/learning box I tend to go for about 4 CPUs and 32 GB of RAM (remember Spark is an in-memory optimised big data platform). The important bit is to copy and paste our public RSA key into the SSH Public Key input box so it can be placed on the VM when provisioned. When Azure has provisioned your VM it leaves it up and running.

[Screenshot: VM creation settings with the SSH public key pasted in]
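For completeness, you can provision roughly the same thing from the Azure CLI instead of the portal. This is just a sketch under my own assumptions – the resource group name, VM name, location, size and image URN are examples, and exact flag names can vary between CLI versions – so treat it as a starting point rather than gospel:

# create a resource group and a CentOS 7.4 VM roughly matching the portal settings above
# (names, location, size and image URN are examples; the public key path is the one created earlier)
az group create --name my-spark-rg --location westeurope
az vm create \
  --resource-group my-spark-rg \
  --name my-spark-vm \
  --image OpenLogic:CentOS:7.4:latest \
  --size Standard_E4s_v3 \
  --admin-username shaun \
  --ssh-key-values ~/demokey.pub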

Connect to CentOS VM

 

So hopefully that all went well and we’re now ready to connect. You can give your VM a DNS name (see the docs), however I tend to just connect using the IP. Navigate to the VM in the portal and click the Connect button. This will show you the SSH command with the server address that we can enter into a bash client in order to connect.

[Screenshot: Connect blade showing the SSH command]
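The command it gives you looks something like this (a sketch – the username is whatever you set when creating the VM and the IP is your VM’s public IP):

# ssh picks up ~/.ssh/id_rsa automatically; use -i to point at a different private key
ssh shaun@<public-ip-address>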

Enter the SSH command, enter the passphrase and we’re good to go:

[Screenshot: SSH session connected to the VM]

Patch the OS

 

Ensure the OS is patched. The reboot will kick you out of your SSH session, so you’ll need to sign back in.

sudo yum update -y
sudo reboot

Install Java 8

 

Install OpenJDK 1.8 and validate the install:

sudo yum install java-1.8.0-openjdk.x86_64
java -version

Set the following home paths in your .bash_profile so that every time we log in our paths are set accordingly. To do this we’ll use the nano text editor.

sudo nano ~/.bash_profile

Add the following path statements, since they’re required by the Scala config:

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre

It should look something like this:

[Screenshot: .bash_profile with the Java paths added]

To exit, press Ctrl+X; you’ll be prompted to save. Now reload the bash profile.

source ~/.bash_profile

Check the Java version:

java -version

[Screenshot: java -version output]

Check the Java paths:

echo $JAVA_HOME
echo $JRE_HOME

[Screenshot: echoed Java path variables]

Java is all done. Onto Scala.

Install Scala 2.11.12

 

Spark 2.3.0 requires a Scala 2.11.x version. Note that the current Scala release is 2.12, so we’ll go for the last 2.11 version, which is 2.11.12; we want the RPM package:

wget http://downloads.lightbend.com/scala/2.11.12/scala-2.11.12.rpm
sudo yum install scala-2.11.12.rpm

We should now validate the install:

scala -version

Note the output as follows:

cat: /usr/lib/jvm/jre-1.8.0-openjdk/release: No such file or directory
 Scala code runner version 2.11.12 -- Copyright 2002-2017, LAMP/EPFL

This is because there is no release directory at the $JAVA_HOME path, which the scala script looks for; see a more thorough explanation here. It’s not vitally necessary, but I got around this by just creating a release directory at $JAVA_HOME.

cd $JAVA_HOME
sudo mkdir release
cd ~
scala -version

output:

cat: /usr/lib/jvm/jre-1.8.0-openjdk/release: Is a directory
Scala code runner version 2.11.12 -- Copyright 2002-2017, LAMP/EPFL

Install Spark 2.3.0

 

Final step! Install Spark. Download the 2.3.0 tgz package. We’ll use wget again and download the package from one of the mirror URLs listed on the Apache Spark download page. I’m using the 1st listed mirror URL, but adjust as you see fit.

wget http://apache.mirror.anlx.net/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
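Optionally, you can check the download against the checksum published alongside it on the Apache download page (a quick sketch; compare the output to the published value by eye):

# print the SHA-512 of the downloaded archive and compare it with the published checksum
sha512sum spark-2.3.0-bin-hadoop2.7.tgz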

Extract the archive and copy the files into a more appropriate directory:

tar xf spark-2.3.0-bin-hadoop2.7.tgz
sudo mkdir /usr/local/spark
sudo cp -r spark-2.3.0-bin-hadoop2.7/* /usr/local/spark

We need to add some paths to our bash profile for convenience so that we don’t have to set them every time we create a session. Again, we’ll use nano for this.

cd ~
sudo nano ~/.bash_profile

Add the following line alongside the Java paths that we added earlier.

export SPARK_EXAMPLES_JAR=/usr/local/spark/examples/jars/spark-examples_2.11-2.3.0.jar

Also put the Spark binary folder into the $PATH variable:

PATH=$PATH:$HOME/.local/bin:$HOME/bin:/usr/local/spark/bin

The file should look something like this.

[Screenshot: .bash_profile with the Spark paths added]

Now we can exit and save the file using Ctrl+X, reload the bash profile and check the paths.

source ~/.bash_profile
echo $PATH
echo $SPARK_EXAMPLES_JAR

Now we should be good to run the spark shell!!

spark-shell

There you have it folks, enjoy!

[Screenshot: spark-shell running]
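As an extra smoke test you can also run one of the bundled examples with spark-submit, using the examples jar we exported earlier (a minimal sketch; SparkPi ships in the examples jar of the 2.3.0 download):

# run the SparkPi example locally to prove the install works end to end
spark-submit --class org.apache.spark.examples.SparkPi --master "local[2]" $SPARK_EXAMPLES_JAR 10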

To exit spark-shell use

:quit

To close your SSH session use

exit

Finally, don’t forget to stop your VM to reduce your Azure spend. Sensitive machine details, user details and RSA keys used for this blog have since been deleted.
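If you prefer the CLI to the portal for this, a minimal sketch (the resource group and VM names are the placeholders used earlier):

# deallocate the VM so you stop paying for compute (the disks still incur a small storage cost)
az vm deallocate --resource-group my-spark-rg --name my-spark-vm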

 

 


Azure Databricks Walkabout

Finally got my Azure Databricks preview enabled. This is just a quick overview of how it all hooks together.

Technical Architecture Overview

Basically Databricks is the PaaS and Azure is the IaaS, which is a smart play by Databricks. Why try to compete with the scale and resilience of the IaaS that Microsoft and AWS host? Leveraging partner IaaS to host their PaaS service allows them to do what they do well, which is to focus on pushing the Databricks Spark platform above and beyond what other data platforms are capable of.

Creating the Service

When you create the Databricks service it creates what appears to be a PaaS service that sits in a blade in the resource group you select when creating the service. So in my case I created a service called sibytes-databricks in my DataLakePatterns resource group.

[Screenshot: sibytes-databricks service blade in the DataLakePatterns resource group]

At the top:

  • Managed Resource Group – this is a resource group that houses the managed IaaS – see below
  • URL – the URL of the Databricks PaaS service. This is also where the “Launch Workspace” button takes you
  • Resource Group – just where this pass-through blade to the Databricks service sits in your subscription

When you click the Launch Workspace button it takes you through a single sign-on to Databricks, which is where the PaaS service operates. The first time you go through you’re asked to delegate authorisation over your Azure account to the Databricks service so it can manage the IaaS as part of the service. So you’ll see this:

[Screenshot: Databricks single sign-on consent prompt]

NOTE: this has to be an Azure domain account, not a Microsoft account, so you’ll need to set one up beforehand. Also it’s probably wise to create a domain account specifically for this purpose rather than using a particular user account, so you can lock down its permissions to exactly what it should have – particularly with regard to data access.

Once in, you’ll see the usual and, in my opinion, brilliantly usable PaaS service interface.

[Screenshot: Databricks workspace home page]

Remember that it’s a single sign-on delegated permission to Azure, so when you manage your account in Databricks it will take you straight back into Azure.

Managed IaaS

When you create your Databricks service you’ll notice that it also creates an additional resource group to house the infrastructure that Databricks creates to run the enhanced Spark data platform.

[Screenshot: the managed resource group created by Databricks]

Initially it will look like the following, i.e. with no clusters created and running.

[Screenshot: managed resource group contents with no clusters running]

It has:

  • A storage account – so it’s not entirely IaaS. The storage is read-only and you cannot access it. I’m not 100% sure what this is for yet, but if it’s for data content storage then we might be limited to 1 account, and that also means data can only go in and out through Databricks. Or maybe it’s just for specific types of Databricks metadata or processing storage… Or maybe it’s for internal config, logs and security… I need to look into this further when I get into data processing and explore what, if any, other Azure storage accounts can be attached. Will blog on it…
  • Virtual Network – This is basically a secure vnet for the IaaS clusters.
  • Network Security Group – this holds all the inbound and outbound network security settings that allow Azure and Databricks to speak to each other to spin up IaaS VMs and access data.

Creating A Cluster

Things get a little more interesting when we create a cluster. I created a 2-node cluster with 1 driver using Standard DS3s. In the drop-downs for choosing machines you’re limited to a specific collection, and currently they are all listed as beta.

[Screenshot: Databricks cluster creation settings]

Looking back at the Azure IaaS resource group, we can see it has spun up 3 Linux machines using the VM sizes we chose in Databricks:

[Screenshot: the 3 Linux VMs in the managed resource group]

When stopping the cluster in Databricks I was expecting the VMs to be stopped. However that doesn’t happen. Currently what happens is that Databricks retains the cluster as a terminated cluster and Azure deletes all of the VMs, so all you’re left with is the VNet, the NSG and the storage account – which is what we started with, plus our data.

What this means is that you’ll only pay for storage when your cluster is stopped; however, it can take quite a while for clusters to come up and down, particularly if they’re pretty large. It would be good to have an option to pause the service that keeps the VMs persisted.

Thoughts & More to Follow

This is just a quick look and intro – more to follow particularly on data processing, storage and hooking into the PaaS service through their data provider.

How the storage accounts hook in is a big part of how good this will be, because if interaction and flexibility around storage is limited then it may significantly reduce its appeal as an architectural option. If it is significantly locked down then we’ll be forced to use and pay for services that Microsoft and Databricks have agreed to integrate but that you otherwise may not need in order to build a viable and compelling data platform, e.g. Azure Data Warehouse…

Finer Points – USQL: Merging Datasets Part 1

Like a lot of Big Data platforms, Data Lake Analytics is a file-based data platform. It’s a totally different approach to RDBMS data warehouse batch processing. The architecture brute-forces the processing of complete file partitions on a distributed architecture. It just accepts that it will crunch a lot of data, but achieves performance by doing job lots on loads of computers. I’ll write more on technical architecture considerations at a later date.

This means in a structured data world we lose row level resolution of inserts, updates and deletes. So how do we do common data integrations like merge?

Most good tech platforms have multiple ways of achieving the same thing. I’m going to compare 2 different ways. I didn’t set out to rubbish one approach over the other. It’s just an exercise in scientific comparison to build my understanding of how stuff works, build up my coding patterns toolbox and share code and spark ideas – feedback on anything I’ve overlooked is most welcome.

I’ll split this into 2 blogs… since there’s a fair bit to get through. Part 1 will present different approaches and part 2 will put them both under load and compare the job graphs and performance differences (if any).

So the approaches are:

  1. Multiple steps using a basic SQL-like pattern – blogged about by Paul from Purplefrog
  2. Single hit using a Full Outer Join and conditional C# – from my own patterns

A note about combiners – I considered and played around with a custom combiner. However the join clause seems to be limited to an inner join; you can work around this, but in the end it just feels very hacky and more hassle than it’s worth.

My aim is to understand the following facets:

  • Performance – how do the jobs compare when compiled and stress tested
  • Limitations – are there situations where 1 pattern works better than the other
  • Style – coding complexity, skill level and maintenance

The Demo Setup

 

Create the Target Managed Table

 

We need to create a managed table to merge the data into. We could of course do this with a file; however, these types of operations are going to be quite far up the information architecture stack since merge sets are structured, clean and conformed, so using a managed table seems appropriate.

You could entertain the idea of holding a complete Kimball star schema in Data Lake managed tables instead of using an RDBMS. There are consequences to this of course, which is another topic for another day.

CREATE DATABASE IF NOT EXISTS Patterns;
 
USE Patterns;
 
CREATE SCHEMA IF NOT EXISTS dw;
 
DROP TABLE IF EXISTS dw.person;
 
CREATE TABLE IF NOT EXISTS dw.person
(
    personid int,
    title string,
    email string,
    firstname string,
    surname string,
    deleteddate DateTime?,
    filename string,
    INDEX clx_personKey CLUSTERED(personid ASC) DISTRIBUTED BY HASH(personid)
);

The table structure shows that we’re using person details as an example, since they offer simple and intuitive data change scenarios.

Source Data

 

I’m using 3 files as an example to incrementally load. For any solution we need to understand how we’re receiving data. To keep things simple we’re assuming a full incremental extract with hard deletes that will require a full compare, i.e. there is no Change Data Capture (CDC) lower down the information architecture. The files are as follows:

person1.csv – 2 inserts

ID,Title,Email,Firstname,Surname
1,Mr,sr@qwertymail.com,Shaun,Ryan
2,Mr,dj@qwertymail.com,David,Jones

person2.csv – 1 insert, 1 update

ID,Title,Email,Firstname,Surname
1,Mr,sr@qwertymail.com,Shaun,Ryan
2,Dr,dj@qwertymail.com,David,Jones
3,Ms,sr@qwertymail.com,Stacy,Rogers

person3.csv – 2 inserts, 1 update, 1 physical delete (ID=1)

ID,Title,Email,Firstname,Surname
2,Dr,dj@qwertymail.com,David,Jones
3,Mrs,sr@qwertymail.com,Stacy,Stowman
4,Mrs,jm@qwertymail.com,Janet,Moores
5,Mr,kf@qwertymail.com,Kevin,Fullers

To do a deterministic merge of data we of course need a true unique business key which in this case is ID.

Executing the following Scripts

 

To load the files, dump them all on Data Lake or local storage at “/Person/” and run the scripts, incrementing @filenumber and inspecting the output in the dw.person table. I’m also writing the merged data out to “/Person/output/” in case you prefer to look at files rather than tables in Data Lake, and for debugging; for me the table viewer in Visual Studio is fine for a small amount of data.
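If you’re running against the Azure Data Lake Analytics account rather than locally, something like the following uploads the sample files with the Azure CLI (a sketch under my own assumptions – it presumes the Data Lake Store commands are available, you’re logged in, and you swap in your own store name):

# upload the three sample files into the /Person/ folder the scripts expect
for f in person1.csv person2.csv person3.csv
do
  az dls fs upload --account mydatalakestore --source-path "$f" --destination-path "/Person/$f"
done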

I’m not going to review the data output of every approach, other than to say the data merges as you would expect. Below is the final output of all 3 incremental loads. All the code and files are up on GitHub, so to see it working give it a whirl.

[Screenshot: final contents of dw.person after all 3 incremental loads]

1  – Multiple USQL Merge Steps

 

This is Paul’s approach, tweaked for my metadata. Basically we’re going to hit it in 3 steps using an inner, left and right join, and union all the results together.

USE Patterns;
 
DECLARE @tabcode int = 44;
DECLARE @delimiter char = (char) @tabcode;
DECLARE @escape char = Char.Parse("œ");
DECLARE @extension string = "csv";

//  Path
DECLARE @filenumber string = "1";
DECLARE @filename string = "person";
DECLARE @inpath string = "/Person/" + @filename + @filenumber + ".csv";
DECLARE @outpath string = "/Person/output/" + @filename + @filenumber + ".csv";
 
// read data
@data =
    EXTRACT Id int,
            Title string,
            Email string,
            Firstname string,
            Surname string
    FROM @inpath
    USING Extractors.Text(delimiter : @delimiter, escapeCharacter : @escape, quoting : false, skipFirstNRows : 1);

@merge =
    //update current
    SELECT [src].[Id] AS [personid],
           [src].[Title] AS [title],
           [src].[Email] AS [email],
           [src].[Firstname] AS [firstname],
           [src].[Surname] AS [surname],
           (DateTime?) null AS [deleteddate],
           @filename + @filenumber AS [filename]
    FROM @data AS [src]
         INNER JOIN
             [dw].[person] AS [tgt]
         ON [src].[Id] == [tgt].[personid]
 
    UNION ALL
 
    //insert new
    SELECT [src].[Id] AS [personid],
           [src].[Title] AS [title],
           [src].[Email] AS [email],
           [src].[Firstname] AS [firstname],
           [src].[Surname] AS [surname],
           (DateTime?) null AS [deleteddate],
           @filename + @filenumber AS [filename]
    FROM @data AS [src]
         LEFT OUTER JOIN
             [dw].[person] AS [tgt]
         ON [src].[Id] == [tgt].[personid]
    WHERE [tgt].[personid] IS NULL
 
    UNION ALL
 
    //keep existing and logically delete
    SELECT [tgt].[personid],
           [tgt].[title],
           [tgt].[email],
           [tgt].[firstname],
           [tgt].[surname],
           (DateTime?) DateTime.Now AS [deleteddate],
           [tgt].[filename]
    FROM @data AS [src]
         RIGHT OUTER JOIN
             [dw].[person] AS [tgt]
         ON [src].[Id] == [tgt].[personid]
    WHERE [src].[Id] IS NULL;
 
//optionally - output to file
//so we can save a review each iteration output
OUTPUT @merge
TO @outpath
USING Outputters.Csv();
 
//truncate table - we can't do row level operations
//it's all file based processing so we have to reload the whole table (partition)
TRUNCATE TABLE [dw].[person];
 
//insert the merged data
INSERT INTO [dw].[person]
(
    [personid],
    [title],
    [email],
    [firstname],
    [surname],
    [deleteddate],
    [filename]
)
SELECT [personid],
       [title],
       [email],
       [firstname],
       [surname],
       [deleteddate],
       [filename]
FROM @merge;

2  – Single USQL Merge Step

 

This is an approach I put together after seeing the 1st approach, wondering how much more complicated it would be, and what the execution differences would be, to do it in 1 hit using a full outer join with conditional selects.

USE Patterns;
 
DECLARE @tabcode int = 44;
DECLARE @delimiter char = (char) @tabcode;
DECLARE @escape char = Char.Parse("œ");
DECLARE @extension string = "csv";
 
//  Path - increment @filenumber to load files 2 and 3 on successive runs
DECLARE @filenumber string = "1";
DECLARE @filename string = "person";
DECLARE @inpath string = "/Person/" + @filename + @filenumber + ".csv";
DECLARE @outpath string = "/Person/output/" + @filename + @filenumber + ".csv";
 
// read data
@data =
    EXTRACT Id int,
            Title string,
            Email string,
            Firstname string,
            Surname string
    FROM @inpath
    USING Extractors.Text(delimiter : @delimiter, escapeCharacter : @escape, quoting : false, skipFirstNRows : 1);
 
@merge =
        SELECT
            //select the source data if insert or update
            //select the target data if deleted 
            //we have to cast the datatypes since the if else construct returns nullable types

            (int)(issource ? [srcPersonid] : [tgtPersonid] ) AS personid,
            (string)(issource ? [srcTitle] : [tgtTitle] ) AS title,
            (string)(issource ? [srcEmail] : [tgtEmail] ) AS email,
            (string)(issource ? [srcFirstname] : [tgtFirstname] ) AS firstname,
            (string)(issource ? [srcSurname] : [tgtSurname] ) AS surname,
            (issource ? null : (DateTime?)DateTime.Now ) AS deleteddate,
            (string)(issource ? [srcFilename] : [tgtFilename] ) AS filename
        FROM
        (
    SELECT (
               // create a boolean that can be re-used in the outer 
               // query to keep the code clean
               // if update
               ([src].[Id] == [tgt].[personid] & [src].[Id] != null)
               // or if insert
               || ([tgt].[personid] == null)
               //then write source data
               ? true
               //else keep the target data
               : false
           ) AS issource,
           //source data
           [src].[Id] AS [srcPersonid],
           [src].[Title] AS [srcTitle],
           [src].[Email] AS [srcEmail],
           [src].[Firstname] AS [srcFirstname],
           [src].[Surname] AS [srcSurname],
           @filename + @filenumber AS [srcFilename],
           //target data
           [tgt].[personid] AS [tgtPersonid],
           [tgt].[title] AS [tgtTitle],
           [tgt].[email] AS [tgtEmail],
           [tgt].[firstname] AS [tgtFirstname],
           [tgt].[surname] AS [tgtSurname],
           [tgt].[filename] AS [tgtFilename]
    FROM @data AS [src]
         FULL OUTER JOIN
             [dw].[person] AS [tgt]
         ON [src].[Id] == [tgt].[personid]
    ) AS cpr;
 
//optionally- output to file
//so that we can save and review each iteration output
OUTPUT @merge
TO @outpath
USING Outputters.Csv();
 
//truncate table - we can't do row level operations
//it's all file based processing so we have to reload the whole table (partition)
TRUNCATE TABLE [dw].[person];
 
//insert the merged data
INSERT INTO [dw].[person]
(
    [personid],
    [title],
    [email],
    [firstname],
    [surname],
    [deleteddate],
    [filename]
)
SELECT [personid],
       [title],
       [email],
       [firstname],
       [surname],
       [deleteddate],
       [filename]
FROM @merge;

Summary

 

I’ve presented 2 alternative ways to merge data using native USQL constructs. In the second part (not done yet) of this posting I’ll compare the compiled job graphs and any performance differences.

The Basics – Azure Stream Analytics : Use GetArrayElements to Flatten Json

In this blog I’m detailing how to flatten complex JSON in Azure Stream Analytics. It’s not massively hard, but it took a bit of fiddling since the online docs stop a little short. This kind of stuff is mostly a reference for myself for when I need to do it again sometime later and will probably have forgotten.

There is this blog which gives some insight but stops a little short on dealing with arrays. Basically I’m just going to cover how GetArrayElements works in a bit more detail.

Consider the following json event payload we might receive through a stream… No guesses for how I waste most of my time… and… it’s close but that’s not my real gamer tag 🙂

{  
   "id":"123e4567-e89b-12d3-a456-426655440000",
   "device":"34543FERRE55",
   "player":{  
      "firstname":"Shaun",
      "surname":"Ryan",
      "gamertag":"worlord"
   },
   "gamestats":[  
      {  
         "statid":1,
         "gametime":"2017-02-17T11:59:12.000Z",
         "event":"Kill",
         "item":"BAR M1918",
         "gamertag":"millipead"
      },
      {  
         "statid":2,
         "gametime":"2017-02-17T11:59:15.000Z",
         "event":"Suppression",
         "item":"BAR M1918",
         "gamertag":"duckwarrior"
      },
      {  
         "statid":3,
         "gametime":"2017-02-17T11:59:09.000Z",
         "event":"Kill Assist",
         "item":"Rifle Grenade",
         "gamertag":"captaincardboard"
      },
      {  
         "statid":4,
         "gametime":"2017-02-17T11:59:34.000Z",
         "event":"Heal",
         "item":"medipack",
         "gamertag":"rhubarb"
      }
   ],
   "EventProcessedUtcTime":"2017-02-17T12:00:00.384Z",
   "PartitionId":0,
   "EventEnqueuedUtcTime":"2017-02-17T12:00:00.146Z"
}

Let us imagine our game sends a payload every minute that contains all the game stats that occurred in that minute. This is completely fabricated. I’m sure that’s not how such a thing would be done, but it’s a fun example and provides an array.

So our json above is relatively semi-structured, we have:

  • Attributes in the root
  • Child entities with attributes
  • A child entity with an array

Our end game is to flatten this into a denormalized data set to insert into a SQL table, for example, or to aggregate our stats, which is a more likely use case for Stream Analytics.

We want a data set that looks like this (click image to see larger pic):

[Screenshot: desired flattened result set]

If we were aggregating we might thin out a few columns not needed in the grouping, but for an example it’s fine.

Solution

So here’s the solution straight off the bat. There’s some discussion that follows if you’re interested.

SELECT
 [Stat].[id] AS [ID],
 [Stat].[device] AS [Device],
 
 [Stat].[player].[firstname] AS [Firstname],
 [Stat].[player].[surname] AS [Surname],
 [Stat].[player].[gamertag] AS [Playergamertag],
 
 [Stat].[GameStat].[statid] AS [Statid],
 [Stat].[GameStat].[gametime] AS [Gametime],
 [Stat].[GameStat].[event] AS [Event],
 [Stat].[GameStat].[item] AS [Item],
 [Stat].[GameStat].[gamertag] AS [Gamertag],
 
 [Stat].[EventProcessedUtcTime],
 [Stat].[PartitionId],
 [Stat].[EventEnqueuedUtcTime]
FROM
(
 SELECT
 [EventAlias].*,
 [GameStatsAlias].[ArrayValue] AS [GameStat]
 FROM [InEventHub] AS [EventAlias]
 CROSS APPLY GetArrayElements(Event.gamestats) AS [GameStatsAlias]
) AS Stat

 

Finding the Solution

The best way I found to fiddle around with this stuff is to write out to a JSON file on blob storage, since you get to see the JSON of how close you’re getting and go from there. There is a Stream Analytics add-in for Visual Studio now that offers local debugging, but I’ve had some issues with it, namely the add-in breaking my Data Lake tools add-in.

GetArrayElements

The GetArrayElements function pivots out the array items but has to be applied using a CROSS APPLY so that it runs across each item in the array. Any WHERE clause should follow the CROSS APPLY. I’ve used a subquery since it allows me to:

  1. Have intuitively readable code
  2. Break out the columns from the ArrayValue into their own aliases above; it might be possible to do that in 1 hit but I like the subquery (see 1)

If you take the subquery alone it creates the following JSON, so it’s not hard to see how to get to the final result by looking at the intermediate result.

Query:

SELECT * FROM
( 
 SELECT
 [EventAlias].*,
 [GameStatsAlias].[ArrayValue] AS [GameStat]
 FROM [InEventHub] AS [EventAlias]
 CROSS APPLY GetArrayElements(Event.gamestats) AS [GameStatsAlias]
) AS Stat

Creates:

[  
   {  
      "stat":{  
         "id":"123e4567-e89b-12d3-a456-426655440000",
         "device":"34543FERRE55",
         "player":{  
            "firstname":"Shaun",
            "surname":"Ryan",
            "gamertag":"worlord"
         },
         "GameStat":{  
            "statid":1,
            "gametime":"2017-02-17T11:59:12.000Z",
            "event":"Kill",
            "item":"BAR M1918",
            "gamertag":"millipead"
         },
         "EventProcessedUtcTime":"2017-02-17T12:00:00.384Z",
         "PartitionId":0,
         "EventEnqueuedUtcTime":"2017-02-17T12:00:00.146Z"
      }
   },
   {  
      "stat":{  
         "id":"123e4567-e89b-12d3-a456-426655440000",
         "device":"34543FERRE55",
         "player":{  
            "firstname":"Shaun",
            "surname":"Ryan",
            "gamertag":"worlord"
         },
         "GameStat":{  
            "statid":2,
            "gametime":"2017-02-17T11:59:15.000Z",
            "event":"Suppression",
            "item":"BAR M1918",
            "gamertag":"duckwarrior"
         },
         "EventProcessedUtcTime":"2017-02-17T12:00:00.384Z",
         "PartitionId":0,
         "EventEnqueuedUtcTime":"2017-02-17T12:00:00.146Z"
      }
   },
   {  
      "stat":{  
         "id":"123e4567-e89b-12d3-a456-426655440000",
         "device":"34543FERRE55",
         "player":{  
            "firstname":"Shaun",
            "surname":"Ryan",
            "gamertag":"worlord"
         },
         "GameStat":{  
            "statid":3,
            "gametime":"2017-02-17T11:59:09.000Z",
            "event":"Kill Assist",
            "item":"Rifle Grenade",
            "gamertag":"captaincardboard"
         },
         "EventProcessedUtcTime":"2017-02-17T12:00:00.384Z",
         "PartitionId":0,
         "EventEnqueuedUtcTime":"2017-02-17T12:00:00.146Z"
      }
   },
   {  
      "stat":{  
         "id":"123e4567-e89b-12d3-a456-426655440000",
         "device":"34543FERRE55",
         "player":{  
            "firstname":"Shaun",
            "surname":"Ryan",
            "gamertag":"worlord"
         },
         "GameStat":{  
            "statid":4,
            "gametime":"2017-02-17T11:59:34.000Z",
            "event":"Heal",
            "item":"medipack",
            "gamertag":"rhubarb"
         },
         "EventProcessedUtcTime":"2017-02-17T12:00:00.384Z",
         "PartitionId":0,
         "EventEnqueuedUtcTime":"2017-02-17T12:00:00.146Z"
      }
   }
]

This has blown out the array across the higher-level JSON data; from here we can just pick out what we need in our rows using the outer query. When you dump this out to a file it won’t create an array as a single JSON doc; it basically writes a JSON document per file line, so 4 JSON documents in this case separated by a line feed.

Incidentally, if you are using the Stream Analytics Visual Studio add-in with local debug and a sample JSON file, you have to encase your JSON document events in an array or it errors.

Hope this makes sense / helps. No doubt I’ll have forgotten how to do this in a couple of days.

Finer Details – Stream Analytics & PowerBI: Feb 1st Update

So folks might have noticed the Feb 1st update to Stream Analytics. There’s a fair bit of stuff in this release. What I am going to focus on, though, is how Stream Analytics integrates with Power BI now.

Power BI gained streaming datasets some time after the Azure Stream Analytics (ASA) integration. Folks who have worked with it might be aware that when you hooked up ASA to Power BI it just created a dataset and didn’t create a streaming dataset. When I first did this it jumped out as something that probably needed to be consolidated… and now it has.

[Screenshot: streaming datasets in Power BI]

So what does the implementation experience look like now… Well, from the Stream Analytics side nothing much appears to have changed. I have a test query that’s not doing much; it just fires through some data. I’m actually firing JSON in through the event hub, but will focus on the ASA Power BI bit. The query is as follows:

WITH customers AS (
 SELECT
 [customer].customerCode,
 [customer].firstname,
 [customer].lastname,
 [customer].addressline1,
 [customer].addressline2,
 [customer].addressline3,
 [customer].postcode,
 [customer].town,
 [customer].county,
 [customer].country,
 [customer].customerEmail,
 [transaction].transactionCode,
 [transaction].status,
 [transaction].[transactionTime],
 [transaction].[transactionAmount]
 FROM
 [evehub]
)

SELECT * INTO [TestReport] FROM customers

When creating the output everything seems as it was before. My assumption was that something would’ve changed here and that I would have to create my streaming dataset end point definition first in Power BI and choose my data set here. But that’s not the case…

[Screenshot: Power BI output configuration in Stream Analytics]

The subtle impact here is that it’s not like an API where I can just fire JSON at a secure endpoint. I still have to authorize with an account! This means you need a Power BI account for your Stream Analytics service, and in a production environment you probably won’t want that to be a named user account. Also, if the workspace needs to be shared then it will need to be a Pro account.

So far we’ve hooked up ASA to Power BI, but we haven’t done anything with Power BI. Well, it turns out we don’t have to manually define the endpoint like we do with API integration. When data starts streaming through, the dataset appears automatically in our Streaming Datasets within Power BI. So when I turn on my data stream using a C# app I get the following in Power BI.

[Screenshot: the dataset appearing under Streaming Datasets]

When I click the pencil to edit the data set I can see that it’s created all the attributes, typed them and defaulted history to on.

[Screenshot: dataset edit view with typed attributes and history on]

Now it seems we have some control over how these datasets are shaped in Power BI and we can flip history on or off. We also have an endpoint that we can use with an API, cURL or PowerShell…

[Screenshot: streaming dataset API info with the push URL]
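For example, here’s a rough cURL sketch of pushing a row at that endpoint (the push URL, key and values are placeholders; the column names are lifted from the query above and have to match the dataset definition exactly):

# POST a single row (a JSON array of row objects) to the streaming dataset push URL
curl --include \
  --request POST \
  --header "Content-Type: application/json" \
  --data-binary '[{"customerCode":"C001","transactionCode":"T001","transactionAmount":9.99,"transactionTime":"2018-02-01T12:00:00Z"}]' \
  "https://api.powerbi.com/beta/<workspace-id>/datasets/<dataset-id>/rows?key=<access-key>"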

This means we might be able to stream data into the dataset from another API source, not just Stream Analytics. It also seems I can do the following:

  • add an attribute that’s not in my ASA stream
  • remove an attribute that is in my ASA stream

Lets see…

Add an Attribute

I created a streaming tile just totting up transactions in real time.

[Screenshot: streaming tile totting up transactions]

When I add a new column the streaming tile still displays without issue. However, when I go to create a new report, or edit the existing report I used to pin to the dashboard from my dataset, I start hitting issues. To see the new column in the dataset in the report editor I have to hit refresh. After doing this my dataset and report are broken.

[Screenshots: the resulting dataset and report errors]

Editing the data set again, removing the column and hitting refresh doesn’t fix the issue. It seems I have to trash and recreate the data set.

How about ASA? Well, ASA also had a problem. It didn’t stop the job, but it failed to send the transactions, meaning the seemingly unaffected tile won’t update with any new data. Note the lack of a column name in both the Power BI and ASA exceptions.

[Screenshot: the ASA output exception]

 

Remove an Attribute

What happens when we remove an attribute?

Well… the same thing as we might expect from trying to add a column.

Conclusion

This is a good change and consolidates the features of streaming datasets in Power BI for APIs and Stream Analytics. We gain some further control over datasets, but it seems ASA is very tightly bound to the dataset definition and in reality we can’t do much without breaking it.

It might mean that we can stream data in from another source not just ASA since we now have an endpoint we can use… I haven’t tried this yet.

All in all I’m still left a bit wanting in the mashing features for realtime data in Power BI. I’m sure it will get there… Microsoft are doing a great job… We are a tough bunch to please these days 🙂

 

 

The Basics – Azure SSAS: Authentication

Azure SSAS has hit preview, and if you’re familiar with SSAS you’ll know it only works with Active Directory integrated authentication. If you’re not an Active Directory spod this might throw you a little, it being an Azure PaaS service. Effectively it works in a similar way and requires you to create the service declaring an Azure Active Directory administration account. If you’ve created your own subscription, what you use to log in is not an Azure AD account, and choosing that account won’t be an option.

So… a little on Azure AD. Azure AD wasn’t really intended to be a standalone Active Directory, well not yet anyway. Essentially, for cloud to work there had to be an authentication method that works well over the internet, and Microsoft went for OAuth 2, which is basically what Azure AD uses to create secure tokens. Azure AD is effectively a cloud extension of the traditional AD that we are familiar with, at least on some level, and is the organizational account we use every day to log on to our work machines.

The good news is you get basic Azure Active Directory for free. See the screenshot below.

[Screenshot: Azure Active Directory users blade showing both accounts]

In the screen shot there are 2 accounts:

  • Microsoft Account
  • Azure Active Directory Account

The Microsoft account will be there already. That’s what was possibly used to create and use the subscription up till now. This is the personal account you sign in with when you go to Office 365 or the Azure portal. The personal account username will be whatever email address you used, e.g.

joe.blog@hotmail.com

When you look at the profile of the account in the next blade in the image above, you’ll see the source of this account is declared as Microsoft Account.

The other account is an Azure Active Directory account. This can be created in the blade above using “Add User”. It will have the following username convention.

username@joebloghotmail.onmicrosoft.com

If you want to assign your own domain to get rid of the one given to you by Azure you can, but that isn’t free. I’ll leave that beyond the scope of this blog. There are brilliant courses on Pluralsight for Azure AD… I thoroughly recommend Pluralsight… it’s awesome.

When we look at the profile of this account, its source is declared as Azure Active Directory. To create an SSAS service we need an Azure Active Directory account to be an administrator, so go ahead and create one. This is what we know and sign in with as an organizational account, and you can indeed give it further privileges and use it to sign into the portal to provision and use other services.

[Screenshot: Azure AD account profile showing the source]

Now we have an Azure AD account we’re ready to go…

When creating the SSAS service you’ll be asked for this type of account as an administrator. It won’t allow you to select a Microsoft account (personal account). Below we see the Microsoft account is grayed out and I can only select an Azure AD account (organizational account).

[Screenshot: administrator selection with the Microsoft account grayed out]

Once the service is provisioned we can deploy to it using SQL Server Data Tools (SSDT) and connect to it using SQL Server Management Studio (SSMS). The docs go into detail.

Normally when connecting to SSAS using SSMS we just go straight in using our on-premises AD account with integrated authentication. Because we’re signing in with our Azure AD account we need to provide the account details that we created in Azure. In order to connect we need a new option, Active Directory Password Authentication, which requires a registry entry to enable it (see Connect using SSMS). If you’re building up advanced security models in SSAS then we can obviously create a number of users and use different sign-ins to test the results.

[Screenshot: SSMS connection dialog using Active Directory Password Authentication]

Unless of course your on-premises AD and Azure AD are hooked up via federation, in which case you can go straight in with Active Directory Integrated Authentication without having to sign in.

In a working environment we obviously wouldn’t want to assign individual users. It would be better to create groups, e.g. SSAS Administrators, assign the groups, and manage user access by assigning users to groups in Azure Active Directory.

I hope this helps. Not all BI developers have a handle on Active Directory, especially the ins and outs with Azure. I only know what I know from app programming many moons ago and some hours spent on Pluralsight learning enough to get by and design good security models.