Maven Challenge – Net Promoter Score and Likert Analysis – Airline Survey

Another Maven challenge, another blog post!

However, rather than capturing a summary of a particular strategy, this one covers the development of a set of visuals that are well suited to representing the results of survey data and finding the categories with the largest proportions of positive and negative attitudes.

The Challenge

The overall challenge was to analyse a survey response data set from around 130,000 passengers who have travelled on Maven Airlines to determine what areas were contributing to the satisfaction rate falling below 50%, and what data-backed steps could be taken to make improvements.

The Data

The data contained various personal information on each passenger, as well as details about their journeys and class of travel.

In addition, there were a number of questions which asked passengers to rate particular services on a scale of 1 to 5, with 1 being the worst service and 5 the best (see the below extract from the data dictionary).

This type of survey question is typically known as a Likert scale question, named after Rensis Likert, a famous social scientist. Here a 5-point scale has been used, but you may also encounter 7- or 10-point scales. You may also come across non-numeric versions (e.g. “very likely” to “not likely at all”).

These types of scales allow for a little more nuance in the sentiment surrounding a survey response where the answer may not be a binary yes/no.

Analysis of Likert Scales

I had read about Likert scales and survey data when working on the remote working challenge. Although I didn’t fully use them there, I was now keen to learn more and look at ways they could be visually represented.

One method I had previously come across, in my endless trawling of YouTube tutorials and browsing of other challenges such as Workout Wednesday and Makeover Monday, was to show the results in a horizontal bar chart, highlighting the divergence between positive and negative results.

Below are a few different ways of presenting data using the scales. This ranges from a simple stacked bar chart, to basing it around a neutral zero value, to extracting neutral values, and all the way to a full deconstruction.

Each method has its own particular advantages depending on what you are trying to present, but all are effective at showing the relative distribution of a spectrum of responses.

Net Promoter Score

A child of the Likert scale is the Net Promoter Score (NPS). Strictly speaking, this is used on survey results with scales of 0-10 (see below), and it is calculated by subtracting the % of detractor scores (0-6) from the % of promoter scores (9-10). For example, 40% promoters and 25% detractors gives an NPS of +15.

If you return a positive result, then people are more enthusiastic about that aspect of your brand or company, whereas if it is negative, that could be damaging to your company.

What I wanted to show

Taking this into account, I wished to show a version of the bar chart type analysis of the Likert data shown above, along with the corresponding NPS for each category – basically a tidied up version of the below.

However, in this case we only have a scale of 1-5. Therefore I opted to re-scale the analysis based on the following:

  • 1-2 = Negative
  • 3 = Neutral
  • 4-5 = Positive

Again, if I strictly followed the NPS chart it probably should be 1-3 as negative, 4 as neutral and 5 as positive, but I would argue we are looking for “satisfaction” rather than “evangelist promoters”, so treating 4-5 as positive is good enough for me.
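To make that grouping concrete, here is a minimal DAX sketch of a calculated column that buckets the 1-5 scores. The 'Survey'[Value] table and column names are my assumptions, not the actual dataset fields:

Sentiment Group =
SWITCH (
    TRUE (),
    'Survey'[Value] <= 2, "Negative",  -- scores 1-2
    'Survey'[Value] = 3, "Neutral",
    "Positive"  -- scores 4-5
)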

Research on building the Likert bar chart and NPS

I did some “research” (aka Google and YouTube searching) on how to put together these charts, specifically in Power BI. Although I didn’t search for hours, I didn’t manage to find anything useful as a guide.

Stepping in to the rescue came two invaluable sources of information:

  1. The Big Book of Dashboards (Wexler, Shaffer and Cotgreave)
  2. Andy Kriebel’s Tableau YouTube video

The book provided some great advice on visually what does and doesn’t work in these types of charts, and the video provided an overview of the steps and measures to allow me to build the charts.

Apply the techniques to Power BI

Now it was a matter of translating the textbook advice and Tableau tutorial over to Power BI desktop. This was done in several steps.

1. Power Query Work

First I imported the dataset into Power Query, then created a duplicate of the table for this particular work.

Next, I selected the column headers for all the columns containing the Likert categories. After that, I navigated to the Transform ribbon at the top, selected “Unpivot Columns”, and then “Unpivot Selected Columns”.

This resulted in the below layout. There is a column called “Attributes”, which holds all the Likert categories, and another column called “Values”, which holds the corresponding Likert scale numbers – effectively all the categories and their corresponding values are in two columns.
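For reference, the M code generated by that unpivot step looks something like the below sketch – the kept column names in the list are placeholders rather than the exact dataset fields:

= Table.UnpivotOtherColumns(
    #"Changed Type",
    {"ID", "Gender", "Age", "Class", "Satisfaction"},  // columns to keep as-is (placeholder names)
    "Attribute",  // new column holding the Likert category names
    "Value"       // new column holding the 1-5 scores
)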

While I was in Power Query, I also created a “sentiment score”, which was -1 for “neutral or dissatisfied” and +1 for “satisfied”. I never used this in my final report, but I did use it in some experimental small-multiple visuals, shown below, which effectively displayed the 1-5 rating in each category while mirroring the satisfied and dissatisfied passengers.

I found it showed some interesting patterns, but it took quite a close inspection to glean them, so I discarded it for my final report in favour of the stacked bar chart.

2. Measures

Now that the rejigging of the data was done in Power Query, I moved on to creating the measures that would help me build the below draft visual. These included:

  • A count of the positive scores (4-5)
  • A count of the negative scores (1-2)
  • A count of the neutral scores (3) – done in two parts, as they straddle the zero line
  • % positive
  • % negative
  • % neutral
  • NPS

Counts

The fairly simple count DAX measures were as follows:

Note that the below calculation is multiplied by -1 so that it will appear on the negative side of the y-axis.

Next, as mentioned, the neutral scores straddle the zero line, so the calculation is split into two: one for the positive side, and one for the negative side.
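As the measure screenshots aren’t reproduced here, below is a sketch of what these counts might look like, assuming the unpivoted table is called 'Survey' with the rating in [Value] (names assumed). One common convention, used here, is to halve the neutral count so it sits evenly across zero:

Positive Count = CALCULATE ( COUNTROWS ( 'Survey' ), 'Survey'[Value] >= 4 )

Negative Count = CALCULATE ( COUNTROWS ( 'Survey' ), 'Survey'[Value] <= 2 ) * -1

Neutral Count Positive = CALCULATE ( COUNTROWS ( 'Survey' ), 'Survey'[Value] = 3 ) / 2

Neutral Count Negative = CALCULATE ( COUNTROWS ( 'Survey' ), 'Survey'[Value] = 3 ) / -2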

All Selected Attributes

In order to transform these count measures into % values, I need a denominator. Typically, you could use the ALL() DAX function so that you are measuring a portion of the total of a given column, but as I have created an unpivoted table with multiple categories, it requires something a little more refined.

Instead of ALL(), I opted for ALLEXCEPT() with several columns referenced – see below:
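As a sketch, again with assumed names: ALLEXCEPT() clears filters on the table while keeping the ones on the columns you list, so the total is calculated per attribute (plus any other columns, such as class of travel, that you want slicers to respect):

All Selected Attribute =
CALCULATE (
    COUNTROWS ( 'Survey' ),
    ALLEXCEPT ( 'Survey', 'Survey'[Attribute], 'Survey'[Class] )
)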

Percentage Values

Now this was done, all that was left was to create a few simple DIVIDE() functions to finalise the percentage values – simply dividing each count by the All Selected Attribute measure. Again, remember we need two values for the neutral (positive and negative).
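Continuing the sketch, with the assumed measure names from above:

% Positive = DIVIDE ( [Positive Count], [All Selected Attribute] )

% Negative = DIVIDE ( [Negative Count], [All Selected Attribute] )

% Neutral Positive = DIVIDE ( [Neutral Count Positive], [All Selected Attribute] )

% Neutral Negative = DIVIDE ( [Neutral Count Negative], [All Selected Attribute] )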

NPS

Finally, after all those measures were created, I could come to finalising the NPS. Revisiting the NPS calculation from earlier, I recreated a version using my own measures. (Note that I add my positive to my negative scores, as my negatives are actually negative numbers.)
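In sketch form, using the measures above, that addition looks like this (multiplying by 100 is optional, to land on the familiar -100 to +100 range):

NPS = ( [% Positive] + [% Negative] ) * 100  -- [% Negative] is already negative, so adding it performs the subtraction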

Rank NPS – (Bonus Calculation)

I almost forgot: there is a bonus calculation that I needed in order to sort my Likert chart by the NPS. I used the RANKX function to rank the attributes by NPS in descending order, as shown below. I will show you a little trick on how to apply this in the next few paragraphs.
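A sketch of that ranking measure, removing the attribute filter with ALL() so every attribute is ranked against the others (names assumed as before):

Rank NPS = RANKX ( ALL ( 'Survey'[Attribute] ), [NPS], , DESC )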

3. Building the Visuals

Ok, now onto applying all those measures to something visual.

First I select the stacked bar chart. Then I add Attribute to the X-axis. Following that, I add the four % measures in the order shown below. It looks a little strange, but you add the negative values first, going from the least negative to the most negative, then add the positive values after that.

Finally, drop the Rank NPS into the tooltips.

By adding the Rank NPS to the tooltip, you can select it as a sorting option for your chart. To do this, select the ellipsis (…) at the top right of the chart, then select the sort axis as shown below. This lets you rank both your Likert chart and your NPS chart by the NPS scores, which gives you common y-axis categories across both charts.

The NPS chart is simply put together, adding the Attribute in the Y-axis and NPS Grouped score in the X-axis, as shown below.

Once the charts have been built, it is a matter of applying your preferred colour palettes, fonts and formats in the Formatting pane.

4. Final product

Once I was happy with my general visual content, I brought them into my final dashboard, and applied final touches to axis labels, legends and covering titles shown below.

I then applied this set of visuals twice: once for analysing business class passengers, and again for analysing economy and economy plus passengers. This formed the most detailed analysis in the report, allowing me to drill down from those initial large percentage numbers to the issues that were affecting passenger satisfaction.

Let me know in the comments below, or give feedback on LinkedIn, if this kind of content was of use, or if there were other ways you might have gone about executing this.

Thanks for reading.

Maven Challenge – Superbowl Commercial – My Plan and Strategy

After submitting my Power BI report on LinkedIn as part of the Maven Super Bowl Challenge, I was messaged by a few people asking if I could put together an article or video blog on how I created it. So, here I am!!

Rather than jumping straight into snapshots of the Power BI platform and going through the process of putting the visuals together, I thought I would take a few steps back and go through the plan and strategy I now follow before attempting these types of challenges, as without that you can often create a confused message. I pick up new lessons with every challenge, and find it an excellent form of self-development.

Final Submission

My Key Steps in Planning and Strategizing

For this particular challenge, I broke my process down into the following steps, but they are equally applicable to other work I do:

  1. Read the Brief
  2. Consider the End User
  3. Summarise Scope and Objectives
  4. Review the Raw Data
  5. Consider available metrics
  6. Develop a story or flow
  7. Sketch your layout and structure
  8. Execute, Review and Polish

Step 1 – Read the Brief

The challenge noted the following:

We’ve just added a brand new data set to the Data Playground, containing data from Super Bowl commercials for 10 popular brands this century

and

For this challenge, you’ll be assuming the role of Marketing Analyst at Maven Motors, an up-and-coming US car manufacturer looking to make a splash in the market. They have approved the budget to run a TV spot during the 2022 Super Bowl, but need you to analyze historical data to help guide the creative direction.

Your task is to recommend a data-driven strategy for the Maven Motors Super Bowl spot, and present it in the form of a single page report or dashboard.

They are asking you for two things, namely to “analyze the historical data to help guide the creative direction…” and to “…recommend a data-driven strategy for the Maven Motors Super Bowl spot.”

So, we need to keep these two points in the forefront of our mind as we progress.

Step 2 – Consider the End User

Perhaps the most important step and question to ask yourself – who am I creating this report for and how do I want them to feel when they look at it? This will dictate the style, format and presentation.

For example, the Harry Potter challenge or the Jordan vs LeBron challenge would encourage the use of colours, graphics and specialised visuals to engage the general viewer and add an element of fun and excitement.

My Harry Potter Magic Submission

However, as this challenge is aimed at the decision makers within a large corporation, the style and format should be very different. For this, if I imagined being in their shoes, I would like to see the following type of summary report or dashboard:

  • Structured – easy to read and flows well
  • Professional – the product should align with the standards and values of the company
  • Related to the brief – no need for superfluous information that distracts or wastes time, something I don’t have a lot of!
  • Comprehensive but to the point – only pick and present the key findings
  • Able to invite a decision – use the data and recommendations to enable an easy decision to be made

Step 3 – Summarise Scope and Objectives

Following Project Management 101 techniques, I then documented my scope and objectives. This may seem like a duplication of the above, but is helpful as a reference to always come back to. For this reason, I usually write this out and have it sitting to the side to glance at as I work on the final solution:

Scope Notes

In case you can’t read my writing:

Scope

  • Provide a 1 Page Report
  • Analyze Historical Data
  • Provide Recommendations for a data-driven strategy

Objectives

  • Enable board members, who are “time-poor” people, to have sufficiently succinct information in order to make a key decision on the direction of the marketing strategy
  • Be clear, concise and professional
  • Enable board members to be confident in your analysis (i.e. structure, methodology, analysis all appear credible, therefore we can put a measure of trust in the recommendations).

Step 4 – Review the Raw Data

It’s only now that I actually come to open the data files and look to see what is in there.

I found that, having performed the above steps, when I started looking at the data I was immediately considering whether this particular type of information or column was relevant to my brief or objectives.

I will try to briefly go through my main thoughts when reviewing the data (below is a snapshot for 2000 and 2001), how they related back to the scope, and what I would look at for more detailed analysis.

I would note at this point that this usually takes a few days, as I often like to let it ferment a little while I sip on a flat white or two.

2000-2001 data snapshot

Data Not Considered Relevant

Here’s point number one: NOT ALL DATA WILL BE RELEVANT.

Therefore, it was important to put aside any items that are not particularly useful in relation to the scope of works. Looking at the above table, these are highlighted in light blue.

Individual Brands

You may argue that the brand is important, but I would argue otherwise.

What’s the difference between Coke and Pepsi, or Bud and Bud Light, in the context of this data? I don’t think this data set is going to tell you, especially with the individual sample sizes.

What may give insight is the type of brand (food, cars, alcohol, etc.), which I will address later. So for me, I took the decision to ignore individual brands.

YouTube Links

The links themselves have no data analysis value for me here. Their only use may be to look at the actual ads, or to provide a link in your dashboard or report. Therefore, I ignored these.

Estimated Cost

You may think: Wowwww! How can you ignore cost??? Well, I am not totally ignoring cost, but the total cost provides little insight, as there is the variable of ad length to consider.

The longer the ad, the higher the cost, right?

So, I checked whether there was a correlation with a quick cost/time calculation. Guess what? The $ cost per second is constant within each year. Therefore I ignored the total cost and focussed on the $ cost per second instead.
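The sanity check itself is a one-line custom column in Power Query – a sketch, with the column names being my assumptions for this dataset:

= Table.AddColumn(#"Changed Type", "Cost per Second", each [Estimated Cost] / [Length], type number)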

YouTube Likes

Again, you may wonder why I might be interested in YouTube views, but not likes. Well, this is my personal insight. I watch a lot of YouTube, and I enjoy a lot of what I watch, but I rarely feel the need to click “like” – I would be more likely to subscribe. I have also heard that YouTube itself does not place too much importance on “likes” these days; it is more likely to track views, duration, click-through, etc.

Additionally, the number of views does not underestimate the number of people who have engaged with the video (even if they only watched for a few seconds), but the number of likes will underestimate the number of people who actually liked it, as clicking the like button is not compulsory! Again, I knocked that one off my analysis list.

TV Viewers

This may or may not be another surprise to some. If you take the time to look at the TV views for any given year, you will see they are pretty much constant – see below (I recall there was one outlier). You can read into this that the money you pay to advertise during the Super Bowl gives you direct access to everyone who is watching.

As it is constant, the duration, brand, type, etc. are not directly impacting the number of viewers. I would only take this into account if you had a live feed of figures while the ads were being watched.

So, you guessed it, I kicked TV Viewers to the kerb!

TV Viewers – Year 2000

Data Considered Relevant

OK, ok… so I have now ignored a large part of the dataset. So what exactly am I interested in, and what information relevant to the scope do I think I can get out of it? Well, let’s see now…

Year

An easy one – we want to potentially see how our variables change over time, so we will definitely be keeping Year.

Brand Category

With ten brands, it’s a little difficult to compare. However, if we can reduce this to a smaller set of categories, it may be of benefit. In the back of my mind, I was thinking that as Maven Motors is a motor company, it might be useful to look at the performance of Kia, Hyundai and Toyota as a group rather than as individual companies, as this effectively provides a larger single sample size.

Following that, it made sense to group the others into Alcohol, Snacks and Drinks, Sports and Services, and see if there were any trends among those categories.

By the way, I created a YouTube video showing you how to create those categories.
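If you would rather read it than watch it, the gist is a conditional column along these lines – a partial sketch, which assumes the brand column is called [Brand]; the remaining brands would be mapped to Alcohol, Snacks and Drinks, Sports and Services in the same way:

= Table.AddColumn(#"Changed Type", "Brand Category", each
    if List.Contains({"Kia", "Hyundai", "Toyota"}, [Brand]) then "Cars"
    else if List.Contains({"Budweiser", "Bud Light"}, [Brand]) then "Alcohol"
    else "Other", type text)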

Video Characteristics

It was difficult to glean any real insight into the 7 characteristics without more detailed analysis. Further, it is part of the brief to influence the “creative direction” of the advertisement. Therefore, it was a no-brainer to retain this data.

Again, I created a YouTube video showing you how to “unpivot” these characteristics for more effective analysis.

Length

Further, I saw that there were varying lengths of ads, and there may be a potential trend over time which may allow me to impart some recommendation on what the current trend is – so that was kept in too.

YouTube Views

I retained this as one of my key data points. As Maven Motors were already gaining access to the TV network through having a large marketing budget, I believed that a true test of whether an ad was “creatively successful” (viral), was whether people then took the time to seek it out on YouTube to watch again, or felt the need to share it with friends, family and their social networks.

$/s

As discussed earlier, I saw that the $/s for advertisements was constant for each year, which makes sense. I kept and used this data point as I would be able to see the trend in cost over time and potentially give further assurance to the board on likely final expenditure (even though budgets were already approved).

Number of Ads per Brand per Year

One further data point I noticed was that some brands were having more than one ad per year. The below example shows Budweiser using 13 ads over 4 Super Bowl events. Again, this is something I thought would be interesting to look at further to see if there were additional recommendations I could make on the potential number of ads to run.

Budweiser Ads – 2000 – 2003

Step 5 – Consider Available Metrics

Considering all of the above, I came up with the following list of data points or metrics I wished to examine, present and make data-driven recommendations on:

The number of ads in a campaign year – what was the average number of ads per year per brand category?

Ad Duration – what is considered a standard ad length, and what is the trend over time telling us we should do this year?

$/second – what is the historical trend in ad cost, and can we forecast the range for the coming year?

Average YouTube Views vs Ad Characteristics – analyze each characteristic and see which is more likely to garner additional views.

Step 6 and 7 – Develop a Story or Flow and Sketch Out

The next steps were to capture the entire process above in a single structured sheet that would form my final report. My story needed to:

  • Set the scene (provide context)
  • Lay out my methodology
  • Establish the scope of the data analysed
  • Present the key data and insights
  • Give a summary recommendation

Again, I find it useful to sketch this out on a piece of paper (see below), along with even some potential visuals I have in my head.

I note that for this report I included a significant amount of text. This was on purpose, as one of my key objectives was to enable the board members to have confidence in my analysis. I felt that if I laid out the context, methodology and recommendations in clear, concise language, further backed up by data, this would provide the level of confidence required in such situations.

Report Structure Sketch

Step 8 – Execute, Review and Polish

Once the above is set, it is then a matter of prepping and cleaning any data (categorization, etc.), establishing the measures, then building the visuals into the structure I had envisaged.

One thing I like about Tableau is creating visuals on separate pages, and then constructing a final dashboard. I sometimes replicate that feel in Power BI, where I will trial and build individual visuals on separate tabs before bringing them in. Below are a few samples.

Data Scope and Ad Length Visuals

This gets the concepts down, so I know the type of data I want to show and in what general format.

Options for Ad Characteristics – Phase 1

I will experiment with how things are presented and how well the visual can communicate the point I want to make.

Options for Ad Characteristics – Phase 2

Once I am comfortable with the concept, I will look to “polish” it. What does this mean to me?

Polishing Steps

As I said, once I am happy with the story, flow, structure and type of visuals I am using, I will move on to “polish”, which for me usually involves the following:

  • Selecting an appropriate colour theme/palette and applying consistent colours for headings, text and visuals. There are lots of websites that help generate themes and palettes.
  • Applying consistent fonts and font sizes
  • Removing superfluous information:
    • axis titles (if already obvious)
    • axis values (if using data labels)
    • repeated legends, which can be avoided through consistent colouring
    • gridlines, axis lines, etc., unless really needed
    • any other item that is not conveying information
  • Giving concise visual titles
  • Aligning titles, text boxes and visuals (the eagle eyed will note a few of my visuals are off centre!)
  • Providing consistent spacing

Applying these steps allows me to move from the above visuals to the below final versions, which I believe met my goal of giving clean, concise and professional looking visuals.

If you have made it to the end of this, congratulations!! I apologise that my blogs are maybe not as short and to the point as my dashboard, but I hope this provided some insight into the process I took in developing my challenge submission, and maybe provided you with some tips and tricks for the next one.

Let me know what you think – would you change anything, or do you have a good alternative approach? I would be interested to hear.

Also, if you would like to see a video version of this and maybe some exploration of actually creating the above visuals, leave a comment below, or drop me a message on LinkedIn.

Maven Challenge – Power Query Mega Clean – Taxi Challenge

Summary

This is a walkthrough of the cleaning I undertook on in excess of 25 million lines of data as part of the #maventaxichallenge, the monthly data visualization challenge set up by Maven Analytics. This month the challenge involved looking at detailed NYC taxi data between 2017 and 2020 and developing a usable single-page dashboard to analyze weekly trends.

Details of the challenge can be found here, including the data files provided and the requirements for cleaning and presenting the data. I have also created a video walkthrough of the below if you want to see it in live action.

Plan

My plan was fairly simple but structured:

  1. Write a clear list of the cleaning steps linked back to the data dictionary
  2. Create a Data Sample
  3. Apply the cleaning steps to the sample and verify
  4. Copy out the M Code for each step
  5. Load the full data set and apply the M Code for each step

Step 1 – Cleaning List

So, the following were the steps required, and what that translated into using the data dictionary:

Cleaning Request → Data Dictionary Translation

• Let’s stick to trips that were NOT sent via “store and forward”
→ store_and_fwd_flag = N

• I’m only interested in street-hailed trips paid by card or cash, with a standard rate
→ trip_type = 1, payment_type = 1 or 2, RatecodeID = 1

• We can remove any trips with dates before 2017 or after 2020
→ lpep_pickup_datetime between 01/01/2017 and 31/12/2020; lpep_dropoff_datetime between 01/01/2017 and 31/12/2020

• …along with any trips with pickups or drop-offs in unknown zones
→ PULocationID <> blank/empty; DOLocationID <> blank/empty

• Let’s assume any trips with no recorded passengers had 1 passenger
→ if passenger_count = 0 or null, then replace with 1

• If a pickup date/time is AFTER the drop-off date/time, let’s swap them
→ if lpep_pickup_datetime > lpep_dropoff_datetime then lpep_dropoff_datetime else lpep_pickup_datetime;
if lpep_dropoff_datetime < lpep_pickup_datetime then lpep_pickup_datetime else lpep_dropoff_datetime

• We can remove trips lasting longer than a day, and any trips which show both a distance and fare amount of zero
→ add a duration column (lpep_dropoff_datetime - lpep_pickup_datetime), then filter out values >= 24 hours;
if trip_distance AND fare_amount = 0, then filter out those rows

• If you notice any records where the fare, taxes and surcharges are ALL negative, please make them positive
→ if fare_amount < 0 and mta_tax < 0 and extra < 0 and improvement_surcharge < 0 and congestion_surcharge < 0 then “all negative” else “ok”;
then apply the “trick” replacement to change the values (see below for more detail)

• For any trips that have a fare amount but have a trip distance of 0, calculate the distance this way: (Fare amount - 2.5)/2.5
→ if fare_amount > 0 and trip_distance = 0 then (fare_amount - 2.5) / 2.5 else trip_distance

• For any trips that have a trip distance but have a fare amount of 0, calculate the fare amount this way: 2.5 + (trip distance x 2.5)
→ if trip_distance > 0 and fare_amount = 0 then 2.5 + (trip_distance x 2.5) else fare_amount

Cleaning Requests and Action Steps

Step 2 – Create Data Sample

It would be next to impossible to apply the steps to over 25 million lines of data and then easily verify that all the filters, additions, replacements and modifications had taken place and that you got the results you were looking for.

A much more digestible method is to recreate a sample data set based on the data we were provided, and ensure that at least one example of each cleaning step scenario listed above is included.

To do this, I took the smallest file (2020), and performed a series of deduplications on values across the locations, passenger numbers, etc. so that I was able to get a small sample of approximately 50 varied line items.

Then, in order to recreate some of the cleaning scenarios, I made minor adjustments to some values, e.g. swapping the drop-off and pick-up times. Above is the resulting sample data set.

Step 3 – Apply Cleaning Steps to Sample and Verify

Next, I created a new PowerBI file, uploaded the sample data set and then moved to edit the set in Power Query.

Sample data in Power Query

After performing the usual steps of checking that the data types (text, date, time, numbers, etc.) have been applied, it was then a case of applying each cleaning step and working through them. For example, the first step became: =Table.SelectRows(#"Changed Type", each [store_and_fwd_flag] = "N")
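To give a flavour of the remaining steps, here is a sketch of how a few more of the translations from the table in Step 1 might look in M (the step names are my own, so check them against your query):

// street-hailed trips, paid by card or cash, standard rate
#"Street Hailed" = Table.SelectRows(#"Filtered Rows", each [trip_type] = 1 and ([payment_type] = 1 or [payment_type] = 2) and [RatecodeID] = 1),

// keep pickups and drop-offs within 2017-2020
#"Valid Dates" = Table.SelectRows(#"Street Hailed", each [lpep_pickup_datetime] >= #datetime(2017, 1, 1, 0, 0, 0) and [lpep_dropoff_datetime] < #datetime(2021, 1, 1, 0, 0, 0)),

// replace 0 or null passenger counts with 1, in place
#"Passengers Fixed" = Table.ReplaceValue(#"Valid Dates", each [passenger_count], each if [passenger_count] = 0 or [passenger_count] = null then 1 else [passenger_count], Replacer.ReplaceValue, {"passenger_count"})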

M Code Replace Hack

There is one hack that is really worth highlighting here, which will save you some added columns and processing time in your Power Query tasks, especially in larger data sets. The below gallery shows a snapshot of each step, but here is a brief description:

  • Create a “Dummy Replace” step for the column you wish to replace values on.
    • Select the Transform ribbon, and click on Replace Values (picture 1)
    • Choose two dummy values to replace that would not be values in your column. As an example here, I chose to replace 5000 with 6000 (picture 2), where most values in that column would actually be single digits.
    • Click OK, and you will see the M code in the formula box at the top (picture 3)
  • Create a “Dummy Custom Column” step to allow you to write the conditional statement you want to insert into your Dummy Replace code
    • Select the Add Column ribbon, and click on Custom Column (picture 4)
    • Use IntelliSense to help you write the conditional statement you wish to create (picture 5)
    • Copy the statement, and click cancel. This way you are not creating an additional column.
  • Return to your Dummy Replace step and perform the following:
    • For the first value (5000), amend this value to refer to the column you want to replace values in, and precede it with “each”.
    • In this example, I replace 5000 with each [fare_amount]
    • For the second value (6000), amend this value to refer to the conditional statement you want to apply to the selected column, and again precede it with “each”.
    • In this example, I replace 6000 with each if [negatives_charges] = “all negative” then [fare_amount]*-1 else [fare_amount]
  • Once you hit return, the replacement of values occurs within the column, without the need to create an additional column. This will increase efficiency for any refreshes.

If you are interested, the “normal language” logic behind this step is:

  • For each row value in the column fare_amount, if the corresponding row in the negatives_charges column is equal to “all negative”, then we would like to multiply the fare_amount by -1, thus changing it from a negative to a positive value. Else, if it is not “all negative”, then we just keep the fare_amount value as it is (no change).
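Putting all of that together, the finished step in the formula bar would look something like this (the preceding step name, #"Flagged Negatives", is a placeholder for whatever yours is called):

= Table.ReplaceValue(#"Flagged Negatives", each [fare_amount], each if [negatives_charges] = "all negative" then [fare_amount] * -1 else [fare_amount], Replacer.ReplaceValue, {"fare_amount"})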

Step 4 – Copy out M Code

Once you have gone through each of the steps on your sample set and verified that they have had the desired effect on your line items, you will have the full list of steps under Applied Steps on the right-hand side of the Power Query interface. A good tip is to rename these steps based on your needs (e.g. Change Negative Values or Swap Dates). This will help when you want to copy the steps to use in the full data set.

You can see a list of this information in the Advanced Editor window (see below). This can be copied out and saved for future use.

If you click on each step you will see the corresponding M code just above your data table.

Advanced Editor showing M Code Steps

There is another way that I learned from the Curbal YouTube page, which is pretty powerful. I won’t repeat all the steps here, but here is a link to the tutorial.

Step 5 – Apply to Full Data Set

Now that you have all your steps and code written, tested and verified, it is a pretty straightforward proposition to apply them to your full data set.

An easy way to add steps is to right-click on a step and then select “Insert Step After” (see below). This will then allow you to paste in the M code that you have saved. One tip is to check the step reference: it will refer to the name of the preceding step, so make sure you update that in the first instance. For the following steps, it should be easy to use your copied values from the sample data, provided you use the same names for your steps.

Insert Step

Once you have completed all your steps, you are done – all that’s left is to Close and Apply.

The “trial and error” approach is removed, which means that when Power BI applies the updates to the 25+ million line items, you can be reasonably confident you will not have to revisit your Power Query steps. This is important here, as with such large data sets the updates can sometimes take hours to complete, depending on your computer’s processing capabilities.

You are now free to move on to the next part of the challenge and concentrate on creating your DAX calculations and a nice neat dashboard.

If I have time to finish my dashboard on this one, I will add it on a future post. As always, any comments or queries, please let me know in the comments below!

5 Quick Tips for next level BAN in Tableau

Everyone loves a BAN (Big Aggregate Number). They are the all-important key numbers in your dataset, and they should be jumping off the screen and ingraining themselves in the back of your retinas!

But the standard way of creating them in Tableau can be a bit dull and monochrome.

Below I’ll step through the usual way of creating a set of BANs, followed by 5 quick tips to take them up a level, as used in a recent dashboard I created as part of a Maven challenge.

Standard BAN Creation

Normally, you will have a relatively small set of categories that you wish to show an aggregate value for (the example below being “Age Group”).

We traditionally create this by dragging the category (Age Group) onto Columns, then pulling the calculation into the Text marks area.


Voila! We have a BAN. Not very pretty, but a BAN all the same.

You can adjust the header and value for font type, size and colour. Normally, that is about as far as most people go.

What’s the alternative?

But hey – what if you want your BAN in a single horizontal or vertical line, or you want to colour code based on the category or value?

What can we do to customise our BAN and make it that little more memorable?

5 Simple Tips to go from “Boring BAN” to “Badass BAN”

1. Orientation

It is easy to change from a horizontal to a vertical line by simply dragging the category from Columns to Rows. This allows you to maximise your data real estate depending on how you are structuring your overall visuals.

2. Headers

We can drop a duplicate of the category “Age Group” onto the Text marks card, then right-click on the category header and select remove. This will result in the second image below – still not too pretty, but we are on our way.

You can keep the category above or below the BAN by shifting it up or down on the marks card.

However, my preference is to keep it below, as it keeps the focus on your Big Number!

3. Font

Many people may have different views, but my preference is to keep a single font on a dashboard. Having multiple fonts can become an unwanted distraction and give a clunky look.

4. Size

For the numbers, bigger is better!! Make the size of the numbers much larger than any adjacent text to emphasize the contrast.

To do this, select the Text icon in the marks card, and click on the three little dots on the side.


This will bring up the “edit label” input box. Here you can adjust the size and font attributes (bold, underline, italic).
In my example, I chose 36 for the Aggregate Number and 16 for the underlying category.

5. Colour

Adding some colour can help place emphasis on numbers or categories, and help improve the aesthetics and feel of your visual.

Ctrl-dropping the category onto the Colour mark will allow you to assign distinct colours based on each category, whereas Ctrl-dropping the calculation “CNT(Consumers)” onto the Colour mark will allow you to assign colour based on the value itself.

You can also apply a quick table calculation. Here I opted to show a % of total rather than the straight numbers, which gives a good overall perspective. As shown in the below picture, you can right-click on the aggregate number and select the quick table calculation.


Lastly, in my example, I opted to adjust the colour based on the BAN value itself. As I wanted to draw the eye to the highest value, I used a diverging scale from green (#00aa00) to white, offset at -20%. This enabled my lowest value to still be visible, while keeping the focus squarely on the largest number.

Overall

I was pretty happy with the outcome, and was able to apply the same effect to two sets of BANs. This helped maintain the all-important consistency and look when they were brought into the main dashboard.

What do you think?

As always, if there are any questions or comments, please reach out. I am happy to help where I can, and always open to feedback on alternative methods and learning new tricks from the data fam.

Happy Analysing – DG

Maven Challenge – Mexican Restaurant Scouting

It was that time of the month when Maven Analytics set their data playground challenge, #mavenrestaurantchallenge. This time it used 5 separate CSV data tables detailing consumer and restaurant data from cities across three states in Mexico during 2012.

The brief was to

  • assume the role of a Lead Analyst that scopes out new restaurant opportunities abroad
  • review and analyse the data provided for interesting patterns and trends
  • develop a single page visual that gives potential investors a good understanding of the market to guide them in making investment decisions
  • post your final proposal on LinkedIn for review by Maven and fellow challengers

My Plan

As the final product was going to be a single page (jpg) viewed on LinkedIn, it made sense to keep the visual:

  • Simple and clean with a consistent palette
  • Structured and flowing, telling a progressive story
  • Clear in terms of visibility of graphics and legibility of any text

I landed on asking three simple questions:

  • Who? – who were the key demographic to target in terms of age and smoking/drinking habits
  • Where? – which city or location had the highest ratio of consumers to restaurants
  • What? – what kind of cuisine should a restaurant serve based on popularity and levels of current availability

My Solution

I decided to use Tableau for this challenge to try out some newly acquired skills. The below was my final proposal, posted on LinkedIn as well as on my Tableau Public account.

I will share some tips on how I made a few of the visuals in a follow-up blog. If anyone reading has any queries on any part of the visual or its development, drop me a comment, and I’ll be happy to provide further detail.

Happy Analysing!