My Data Analysis Portfolio

Introduction

From reading the comments of many data viz experts and tutors on platforms such as LinkedIn, one recommendation keeps popping up: any aspiring or practising “visualizer” should maintain a portfolio to showcase their skills and the breadth of their work.

I have taken that advice and put together this portfolio-style blog. Most of the visualizations I create in a professional setting are not something I can share in the public domain, so I am opting to showcase the fun challenges and trials I have taken part in over the last 6-8 months.

Most of these are Maven Analytics challenges, performed mainly in Power BI, although there are a few Tableau examples in there too, along with other examples from Onyx and Dataworld.

I have decided to post the visualizations in reverse chronological order for each section, to (hopefully) show some kind of progress in technique and presentation.

I will keep this “live” as a repository for my public displays.

Maven Challenges

The details for all these challenges can be found here.

2022

Maven Space Challenge – Current Entry

This is the current challenge in August and September 2022, and relates to analysing space mission data from the 1950s to present, and coming up with a visual that captures the awe of space travel. I felt that the data required some supporting context to bring alive the “awe” of travel, and focused on missions which either visited other planets within our solar system, or journeyed beyond our realm.

As such, I experimented with some visuals to try and bring out that story. I took a little inspiration from the Beastie Boys for the title of the visual, with the fonts taken from their album Hello Nasty. Below is my entry, and my LinkedIn submission post.

Maven Telecoms Churn Challenge – Winner

This challenge was finalised in July 2022, and related to a set of customer profile data for a fictional telecoms service provider in California. The challenge brief was to identify high value clients, examine churn risk, and look at what steps could be taken to retain clients.

This analysis involved creating a set of normalised metrics to look at both value and risk, based on the profile characteristics of the clients. I have done a more detailed write up on my approach here.

This entry was the winner out of over 300 entries, with the LinkedIn notification here, and the judging panel video here.

Power BI – Maven Churn Challenge

Maven Airlines Challenge – Finalist

This challenge ran from May through to mid June 2022, and related to a fictional airline which had crossed the line of more than 50% of passengers feeling unsatisfied with their experience of travelling with them.

The challenge was to analyze around 130,000 survey responses which included Likert type data, and investigate key areas of improvement which could help get the airline back on track with their passengers. Here is a link to my LinkedIn post, and a snapshot below of my work performed in Power Query and Power BI.

Power BI

Maven Unicorn Challenge – Joint Winner

This challenge was between April and May 2022, and used a summary dataset looking at the valuation, funding, location and investors involved in global unicorns – the $1bn+ privately owned companies, and not the other fantasy figures!

The brief was to illustrate the global landscape of unicorns, and below is my proposed report, which was posted on LinkedIn as usual.

Maven Remote Work Challenge – Finalist (2nd)

This challenge took place during March 2022. It involved analysing the results of two comprehensive surveys which posed over 100 questions to in excess of 1,500 people on the subject of remote working both during and after the recent COVID pandemic lockdowns in NSW, Australia.

I had to provide advice on a proposed remote working policy in a post-COVID world, and translate the qualitative data from the survey to provide quantitative insights into what effects remote work had on productivity and morale. Below is my final report, which was also posted on LinkedIn. I note that I also used this visual in the March SWD challenge.

Power BI

Maven SuperBowl Challenge – Finalist

The challenge ran from January to February 2022, and involved analysing historical SuperBowl advertising data in order to propose an advertising strategy for an up-and-coming car company. My original posting is on LinkedIn.

Power BI

2021

Maven Magic (Harry Potter) Challenge – Finalist

This challenge ran from December 2021 to January 2022. It involved reviewing the film scripts for the Harry Potter movie series and coming up with a way to visualize the “magic” of the movies.

My submission scored as a Finalist on this challenge (my third in a row), with the critique for my submission recorded here by the team at Maven Analytics. I also created a few of my own videos showing some of the techniques I used in my data prep and analysis.

Power BI

Maven Taxi Challenge – Finalist

This challenge ran from November to December 2021, and involved following a set of criteria on what needed to be presented. This was a real challenge from a data prep point of view, as it encompassed in the region of 27 million line items of journey data.

Because of the complexity of the data load and set of data cleaning steps, I created a video showing my strategy and steps, which has proved to be relatively popular. I made the finals for the second time, which I was very pleased with, and took into account the comments provided by the Maven team.

Power BI

Maven Restaurant Investments Challenge – Finalist

This challenge ran from October to November 2021. It involved assessing customer and restaurant related data in several cities in Mexico to assist investors in selecting a location and type of restaurant that would prove popular.

This was around the time I was experimenting with Tableau, and I decided to use it for this challenge. A big advantage of Tableau is that the public platform allows everyone to visit and fully interact with each visualization or dashboard.

I was pretty chuffed and surprised to make the finals for the first time, and I learned that key questions and summary take-aways are powerful techniques for laying out a presentation.

Tableau

Maven Olympics Challenge – Entrant

This challenge ran around the time of the Tokyo Olympics, and was my first foray into the Maven Challenges, after having taken their Excel courses and a few Power BI courses. Instead of using all the data from both the summer and winter games, I decided to focus only on female participation during the summer games.

Looking back on it now, although I tried to provide some structure and flow, it was quite busy and maybe included too much information. I was quite interested to see the difference in participation between communist and non-communist countries during the cold war.

Looking back, I should have homed in and explored that in more detail. But it was a good introduction, and spurred me on to learn more.

Power BI

Onyx

I have followed Onyx challenges for several months, but finally decided to join in during June 2022, starting with the Forbes Billionaires challenge.

Forbes Billionaires

The base requirement of this challenge was to review the raw Forbes data and provide a summary of the greatest philanthropists, as well as the industries with the most successful billionaires. Here, I focused on those base requirements only, and aimed for the feel of a Forbes-type article, giving a high level synopsis rather than a deep dive. The work was done in Power BI, and the LinkedIn post is here.

storytelling with data

Being an avid listener of the storytelling with data podcast and a reader of the book by Cole Nussbaumer Knaflic, I joined their online community to take part in their monthly challenges. These are more “sprint”-like challenges compared to Maven, and are good for honing particular aspects of visual communication.

February 2022 – Focus and Declutter

This challenge can be found here. The goal was to take a busy and potentially confusing data visual and apply the techniques taught by Cole to declutter the noise surrounding the visual and focus in on the story behind the graph. By removing unnecessary components and using colour appropriately to direct focus, I wanted a viewer to take the following away within 5 seconds of looking at the graph:

  1. Philadelphia is the only city to have a higher crime rate than 1990
  2. Other cities generally have seen reductions in crime rates
  3. New York is the stand out in terms of overall reduction.
Power BI

Dataworld

I had started to look to take part in the Makeover Monday data viz series, but as soon as I started, it ceased to exist! I had thought I would have time to go back and try out the old challenges, but just haven’t had any spare time lately, so there is only one example below.

Workout Wednesday has however taken on the mantle, and I intend to work on some of the Power BI related challenges, which are sometimes a challenge to recreate a Tableau visual.

The datasets are usually quite small compared to the Maven challenges, so in theory they should serve as “quick” practice.

The Dark Web

This challenge involved looking at the price index for various illegally obtained goods on the dark web.

I used this challenge to practise using parameters in Tableau, which I thought were put to relatively good use when I tried it out on Tableau Public, as well as trying a kind of “dark mode” background with light coloured text.

Tableau

Other Miscellaneous Work

Social Media

I created some template Power BI dashboards to present the interactions of a fictional company on LinkedIn and Twitter. I used this mostly as practice for Time Intelligence functions. I created an accompanying video and blog, which allowed people to recreate it, and it has been relatively popular.

Power BI

Weather Data

This was quite a simple dashboard put together in Power BI to look at data collected by a weather balloon travelling over Germany. I used this primarily to practise working with geospatial data as well as grouping scatter charts.

Power BI

Sankey Charts

This was in some way related to my work life, as I used it as a pilot to then present at work.

This is a visual rather than a full dashboard, but I had thought that Sankey Charts would be a good way of visualising the change in risk status before and after mitigating steps were applied.

I also learned how to use particular DAX functions to create the necessary data table structure to utilise the Sankey Chart visual in Power BI. Again, I created an accompanying video and blog, which I still see people interacting with most days.

Where it all began!

This is where it began about 9 months ago in mid 2021. As I was taking the basic Tableau course with Maven Analytics, I began to experiment with some data I picked up. In this instance it was the great debate of Ronaldo vs Messi.

Looking back now, I can see plenty of little touches I could make to improve or other ways I could display this, but it is also a good reminder of the journey taken, mostly with Maven, in the last 9 months!

Tableau

So, if you are looking for inspiration to create your own portfolio, or want to see what improvements you can make to your visuals within a few months, hopefully you can find it here.

Good luck, and keep vizzing!

Maven Challenge – Harry Potter Scripts

Using Power Query to match number of particular spells to characters

When I first saw the new Maven Magic Challenge description, I was drawn to the words “share a…visual that captures the MAGIC”. And so, when I started looking at the data, I had little real interest in the budgets, box office and run times. It was all about the characters, their journey, and the spells they cast… both in the movies and on the movie-goers.

Connecting the Places and Spells to the Characters

So, I started to look at how I could connect the story characters and their dialogues to the magical places around the Potter universe, and the data as it was arranged allowed me to do that without too much hassle.

I then came to look at the data associated with the spells. There was no path in the dataset as provided that allowed me to directly link the spells to the characters, the dialogues and the places. All it would allow me to do was look at them in isolation, which was no good!

The challenge of this data set was officially set – how can I link the spells to the dialogue, and then onto the characters and places, and how can I pull this weaving story into a single page visual???

I came up with two methods to try and achieve this:

My first method would be to count the number of spells cast by a character. The next would be to also identify the type and name of each individual spell cast.

It was time to cast those “spells”…..Powerus Querio!!

Method 1 – Calculating the Number of Spells Cast by a Character

Step 1 – Create a List of Spells

I needed to create a separate list containing only the incantations from the spell file. So, I imported a copy of the “Spells” csv file, selected the Incantation column, then on the Transform tab selected Convert to List (shown below) and muttered “Reducio” under my breath.

Voila! The file was reduced to a single list of the spell incantations, as shown below. I would use this list as a reference to look at instances where the incantations were mentioned in the character dialogue. Keep reading to find out more!

Step 2 – Searching the Dialogue for Incantations

The next step was to search the Dialogue for any of the line items from my newly acquired list. I imported the Dialogue csv, then set to work deploying a combination of List.Count and Splitter.SplitTextByAnyDelimiter. The line of M code I used was:

= Table.AddColumn(#"Changed Type", "No. of Spells - Splitter", each List.Count(Splitter.SplitTextByAnyDelimiter(Spells)([Dialogue])) -1)

This would count the number of times a spell was mentioned in the line of dialogue. Let me explain a little how this works, starting from the inside out.

Splitter.SplitTextByAnyDelimiter(Spells)([Dialogue])

This code allows us to work through the lines of dialogue, and when it meets a spell from our list, it effectively stops and splits that line into a list. This new mini list will contain a separate line for each time a spell is encountered, and will have a final line to dump the text from the dialogue that does not appear in the spell list.

This new mini list does not capture which spell it has encountered, only that it has encountered one. In fact, it returns a blank line each time it finds a spell. Confused? I was a little. The below picture may help visualize it.

List.Count – Wrap Around

The Text Splitter does not do a whole lot on its own. Therefore, the next piece of the puzzle was to wrap it in a List.Count. This counts the number of items in each mini list we have created. However, as the above shows, it would count once for the spell, and once for the remaining text.

The next action would be to add “-1”. The result would be to give us a count of the number of times an incantation from our list had been used in the dialogue. See below example highlighting where a count has been made of an incantation.
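The split-then-count logic above can be sketched in Python (for illustration only; the actual work was done in Power Query M). Splitting a line of dialogue by any of the incantations yields one more piece than the number of matches, so subtracting 1 gives the spell count, just like `List.Count(Splitter.SplitTextByAnyDelimiter(Spells)([Dialogue])) - 1`:

```python
import re

def count_spells(dialogue, spells):
    # Split the dialogue by ANY incantation (longest first, so multi-word
    # spells are matched before any shorter overlapping ones).
    pattern = "|".join(re.escape(s) for s in sorted(spells, key=len, reverse=True))
    pieces = re.split(pattern, dialogue)
    # n matches produce n + 1 pieces, hence the -1 (mirroring the M code).
    return len(pieces) - 1

spells = ["Reducio", "Expelliarmus", "Oculus Reparo"]
print(count_spells("He shouted Expelliarmus! Then Reducio.", spells))  # 2
print(count_spells("No magic here.", spells))                          # 0
```

The spell names and dialogue lines here are made-up samples, not rows from the challenge dataset.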

Step 3 – Tidy Up

The final step in this initial tidy up is simply to filter the column for values greater than zero. You have now identified all the lines of dialogue that contain an incantation, as well as the number of incantations used – sometimes there was more than 1 – see below.

Not Satisfied Yet!!

Method 2 – Extracting the name of the spells cast by each character

Step 1 – Custom Column with List

Create a list of Spells in the same way as Method 1, Step 1 above – Easy as!

Step 2 – Add Spells List to Dialogue Line Items

Add a custom column to your dialogue sheet in Power Query called “Find Spell”, simply enter “= Spells” and hit OK. You will see a new column added containing the word List in each cell. Each list contains the full list of spells.

Step 3 – Expanded List

This next step may not be recommended for very large data sets, like the Maven Taxi Challenge, but it performs fine here.

We will now expand the list. This is done by clicking on the double arrows and selecting “Expand to New Rows” as shown below.

This will then duplicate each line of dialogue to equal the number of spells in our list. Below is an example, where we see the line of dialogue repeated, with each spell listed beside it. The reason for doing this will become clear below.

Step 4 – Find the Spell

Next we can use a Boolean calculation in a custom column to find if our line of dialogue contains the spell that is living in the “Find Spell” column.

I have used Text.Contains([Dialogue],[Find Spell]). This will look at the text in the Dialogue column and check whether it contains the text shown in the corresponding Find Spell column. If it finds it, it will return TRUE, and if not it will return FALSE. See the below example, where if I recall correctly Hermione repairs Harry’s glasses with an Oculus Reparo spell!!
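The expand-and-filter approach can be sketched in Python (illustration only; the sample dialogue and spell names below are made up). Every line of dialogue is paired with every spell, mirroring the expanded list, and only pairs where the dialogue contains the spell survive, mirroring Text.Contains filtered for TRUE:

```python
dialogue_lines = [
    (1, "Oculus Reparo!"),
    (2, "Pass the pumpkin juice."),
]
spells = ["Oculus Reparo", "Expelliarmus"]

# Cross-join dialogue x spells, then keep rows where the spell appears
# in the dialogue text (the Text.Contains([Dialogue],[Find Spell]) step).
matches = [
    (line_id, text, spell)
    for line_id, text in dialogue_lines
    for spell in spells
    if spell in text  # case-sensitive, like Text.Contains by default
]
print(matches)  # [(1, 'Oculus Reparo!', 'Oculus Reparo')]
```

Note the cost: the intermediate list is (lines × spells) rows, which is why this may not be advisable on very large datasets.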


Step 5 – Filter for TRUE values

Once the above step is taken, it is a matter of applying a simple filter on your custom column to only show TRUE values. Looking at the below, I can now see only the lines of dialogue that contain a spell, and now also the name of the actual spell that has been used!! This was what I originally had in mind.

Using this Information

As the Dialogue data sheet contains the movie, chapter, place and character IDs, I can now link the spells to these people and locations!

Next Step

The next step is to now build the visualisation I had in mind to work with this new insight.

Watch this space…..

Expecto Patronum!

Social Media Dashboard – Power BI

Overview

I have created a walkthrough so that you can use just six DAX formulas to turn data extracted from your social media accounts into a metrics dashboard like the one below, whether that be from LinkedIn, Twitter, blog accounts or your own website.

Sample Fictional Data from LinkedIn Corporate Account
LinkedIn Metrics Dashboard developed in Power BI

A YouTube video of this walkthrough is linked here. Some of you may have come here after viewing it – thanks, and welcome! I will get down to what you came here for…

6 Key Metric DAX Formulas

From the video, you will see that I used six key DAX expressions or formulas again and again to create a comprehensive set of metrics for the dashboard. Here they are in order of development.

Totals

Total LinkedIn New Followers = 
SUM(LinkedIn_Data[LinkedIn New Followers])

You can use SUMX in lieu of SUM if you wish here, noting you will need to provide a table and expression in lieu of a column.

Year to Date (YTD)

LinkedIn New Followers YTD = 
TOTALYTD(
    [Total LinkedIn New Followers],
    LinkedIn_Data[Date]
)

Latest Month (MTD)

LinkedIn New Followers This Month = 
CALCULATE(
    [Total LinkedIn New Followers],
    LASTDATE(LinkedIn_Data[Date])
)

Previous Month

LinkedIn Followers Previous Month = 
CALCULATE(
    SUM(LinkedIn_Data[LinkedIn New Followers]),
    PREVIOUSMONTH(LinkedIn_Data[Date])
)

Month Over Month Difference


LinkedIn Followers Diff MoM = 
VAR CurrentS = SUM(LinkedIn_Data[LinkedIn New Followers])
VAR PreviousS = [LinkedIn Followers Previous Month]
VAR Result = CurrentS - PreviousS
RETURN
    Result

Month Over Month % Growth

LinkedIn Followers MoM Growth % = 
DIVIDE(
    [LinkedIn Followers Diff MoM],
    [LinkedIn Followers Previous Month]
)
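As a plain-arithmetic illustration (not part of the Power BI model), the previous-month, difference and growth measures boil down to the following, assuming a simple month-keyed series of made-up follower counts:

```python
# Hypothetical {month: new followers} series for illustration.
monthly = {"2022-05": 120, "2022-06": 150}

current = monthly["2022-06"]
previous = monthly["2022-05"]            # PREVIOUSMONTH equivalent
diff_mom = current - previous            # LinkedIn Followers Diff MoM
# DIVIDE guards against division by zero; mimic that with a check.
growth_pct = diff_mom / previous if previous else None
print(diff_mom, growth_pct)  # 30 0.25
```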

Bonus – Last Date

One simple bonus DAX formula will extract the latest date in your date table. This is useful for title blocks and banners in your reports and dashboards, as it will automatically update when you add your monthly data.

Latest Date = 
LASTDATE(
    LinkedIn_Data[Date]
)

Resulting Data

Once I have developed a set of calculations, I like to test them in a matrix to make sure they have the desired outcome. This is what I have done below. You can then be confident that the cards and visuals you create accurately reflect your data set.

Resulting DAX calculations with verification table showing outcomes

Comments and Feedback

If you have any comments, feedback, or requests, please let me know below or leave a comment on my Youtube channel.

Thanks

Datasets

Maven Challenge – Power Query Mega Clean – Taxi Challenge

Summary

This is a walkthrough of the cleaning I undertook on in excess of 25 million lines of data as part of the #maventaxichallenge, the monthly data visualization challenge set up by Maven Analytics. This month the challenge involved looking at detailed NYC Taxi data between 2017 and 2020 and developing a usable single page dashboard to analyze weekly trends.

Details of the challenge can be found here, including the data files provided and the requirements for cleaning and presenting the data. I have also created a video walkthrough of the below if you want to see it in live action.

Plan

My plan was fairly simple but structured:

  1. Write a clear list of the cleaning steps linked back to the data dictionary
  2. Create a Data Sample
  3. Apply the cleaning steps to the sample and verify
  4. Copy out the M Code for each step
  5. Load the full data set and apply the M Code for each step

Step 1 – Cleaning List

So, the following were the steps required, and what that translated into using the data dictionary:

Cleaning Request → Data Dictionary Translation

  • Let’s stick to trips that were NOT sent via “store and forward” → store_and_fwd_flag = N
  • I’m only interested in street-hailed trips paid by card or cash, with a standard rate → trip_type = 1, payment_type = 1 or 2, RatecodeID = 1
  • We can remove any trips with dates before 2017 or after 2020 → lpep_pickup_datetime between 01/01/2017 and 31/12/2020, and lpep_dropoff_datetime between 01/01/2017 and 31/12/2020
  • …along with any trips with pickups or drop-offs in unknown zones → PULocationID <> blank/empty, and DOLocationID <> blank/empty
  • Let’s assume any trips with no recorded passengers had 1 passenger → if passenger_count = 0 or null, then replace with 1
  • If a pickup date/time is AFTER the drop-off date/time, let’s swap them → if lpep_pickup_datetime > lpep_dropoff_datetime then lpep_dropoff_datetime else lpep_pickup_datetime; and if lpep_dropoff_datetime < lpep_pickup_datetime then lpep_pickup_datetime else lpep_dropoff_datetime
  • We can remove trips lasting longer than a day, and any trips which show both a distance and fare amount of zero → add a column for lpep_dropoff_datetime - lpep_pickup_datetime, then filter out values >= 24 hours; if trip_distance AND fare_amount = 0, then filter out
  • If you notice any records where the fare, taxes and surcharges are ALL negative, please make them positive → if fare_amount < 0 and mta_tax < 0 and extra < 0 and improvement_surcharge < 0 and congestion_surcharge < 0 then “all negative” else “ok”; then apply a “trick” replacement to change the values (see below for more detail)
  • For any trips that have a fare amount but a trip distance of 0, calculate the distance this way: (fare_amount – 2.5) / 2.5 → if fare_amount > 0 and trip_distance = 0 then (fare_amount – 2.5) / 2.5 else trip_distance
  • For any trips that have a trip distance but a fare amount of 0, calculate the fare amount this way: 2.5 + (trip_distance x 2.5) → if trip_distance > 0 and fare_amount = 0 then 2.5 + (trip_distance x 2.5) else fare_amount

Cleaning Requests and Action Steps
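A few of these rules can be sketched in Python as a sanity check of the logic (illustration only; the actual cleaning was done in Power Query M, and the sample trip record below is invented, with field names taken from the data dictionary):

```python
def clean_trip(trip):
    # Swap pickup/drop-off timestamps if they are reversed.
    if trip["lpep_pickup_datetime"] > trip["lpep_dropoff_datetime"]:
        trip["lpep_pickup_datetime"], trip["lpep_dropoff_datetime"] = (
            trip["lpep_dropoff_datetime"], trip["lpep_pickup_datetime"])
    # No recorded passengers (0 or null) -> assume 1 passenger.
    if not trip["passenger_count"]:
        trip["passenger_count"] = 1
    # Impute a missing distance from the fare, or a missing fare from the distance.
    if trip["fare_amount"] > 0 and trip["trip_distance"] == 0:
        trip["trip_distance"] = (trip["fare_amount"] - 2.5) / 2.5
    elif trip["trip_distance"] > 0 and trip["fare_amount"] == 0:
        trip["fare_amount"] = 2.5 + trip["trip_distance"] * 2.5
    return trip

trip = {"lpep_pickup_datetime": "2019-03-02 10:00",
        "lpep_dropoff_datetime": "2019-03-02 09:40",
        "passenger_count": 0, "fare_amount": 10.0, "trip_distance": 0}
print(clean_trip(trip))
```

Running this on the sample record swaps the timestamps, sets the passenger count to 1, and imputes a distance of (10.0 - 2.5) / 2.5 = 3.0.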

Step 2 – Create Data Sample

It would be next to impossible to apply the steps to over 25 million lines of data and then easily verify that all the filters, additions, replacements and modifications had taken place and produced the results you were looking for.

A much more digestible method is to create a sample data set based on the data we were provided, ensuring that at least one example of each cleaning step scenario listed above is included.

To do this, I took the smallest file (2020), and performed a series of deduplications on values across the locations, passenger numbers, etc. so that I was able to get a small sample of approximately 50 varied line items.

Then, in order to recreate some of the cleaning scenarios, I made minor adjustments to some values. e.g. swap the drop off and pick up times. Above is the resulting sample data set.

Step 3 – Apply Cleaning Steps to Sample and Verify

Next, I created a new PowerBI file, uploaded the sample data set and then moved to edit the set in Power Query.

Sample data in Power Query

After performing the usual checks that the correct data types (text, date, time, numbers, etc.) had been applied, it was then a case of applying each cleaning step and working through. For example, the first step became: =Table.SelectRows(#"Changed Type", each [store_and_fwd_flag] = "N")

M Code Replace Hack

There is one hack that is really worth highlighting here, and will save some added columns and processing time in your Power Query tasks, especially in larger data sets. The below gallery shows a snapshot of each step, but here is a brief description:

  • Create a “Dummy Replace” step for the column you wish to replace values on.
    • Select the Transform ribbon, and click on Replace Values (picture 1)
    • Choose two dummy values to replace that would not be values in your column. As an example here, I chose replace 5000 with 6000 (picture 2), where most values would actually be single digit values in that column.
    • Click OK, and you will see the M code in the formula box at the top (picture 3)
  • Create a “Dummy Custom Column” step to allow you to write the conditional statement you want to insert into your Dummy Replace code
    • Select the Add Column ribbon, and click on Custom Column (picture 4)
    • use intellisense to help you write the conditional statement you wish to create (picture 5)
    • Copy the statement, and click cancel. This way you are not creating an additional column.
  • Return to your Dummy Replace step and perform the following:
    • For the first value (5000), amend this value to refer to the column you want to replace values in, and precede it with “each”.
    • In this example, I replace 5000 with each [fare_amount]
    • For the second value (6000), amend this value to refer to the conditional statement you want to apply to the selected column, and again precede it with “each”.
    • In this example, I replace 6000 with each if [negatives_charges] = “all negative” then [fare_amount]*-1 else [fare_amount]
  • Once you hit return, the replacement of values occurs within the column, without the need to create an additional column. This will increase efficiency for any refreshes.

If you are interested, the “normal language” logic behind this step is:

  • For each row value in the column fare_amount, if the corresponding row in the negative_charges column is equal to “all negative”, then we multiply the fare_amount by -1, changing it from a negative to a positive value. Otherwise, we keep the fare_amount value as it is (no change).
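That “normal language” logic is easy to sketch in Python (illustration only; the rows below are made-up values, not the challenge data):

```python
# Conditionally flip fare_amount in place, with no extra column,
# only where the flag marks all charges as negative.
rows = [
    {"fare_amount": -5.0, "negative_charges": "all negative"},
    {"fare_amount": 7.5,  "negative_charges": "ok"},
]
for row in rows:
    if row["negative_charges"] == "all negative":
        row["fare_amount"] *= -1  # negative -> positive
print([r["fare_amount"] for r in rows])  # [5.0, 7.5]
```

The point of the Power Query “dummy replace” hack is the same as this in-place loop: the column is modified directly, so no additional column is materialised on refresh.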

Step 4 – Copy out M Code

Once you have gone through each of the steps on your sample set and verified that it has had the desired effect on your line items, you will have the list of steps under Applied Steps on the right hand side of the Power Query interface. A good tip is to rename these steps based on your needs (e.g. Change Negative Values or Swap Dates). This will help when you want to copy steps to use in the full data set.

You can see a list of this information in the Advanced Editor window (see below). This can be copied and pasted out and saved for future use.

If you click on each step you will see the corresponding M code just above your data table.

Advanced Editor showing M Code Steps

There is another way that I learned from the Curbal YouTube page, which is pretty powerful. I won’t repeat all the steps here, but here is a link to the tutorial.

Step 5 – Apply to Full Data Set

Now that you have all your steps and code written, tested and verified, it is a pretty straightforward proposition to apply them to your full data set.

An easy way to add steps is to right-click on a step and select “Insert Step After” (see below). This will then allow you to paste in the M code that you have saved. One tip: check the step reference – it will refer to the name of the preceding step, so make sure it matches the first time you paste. For subsequent steps, it should be easy to use your copied values from the sample data, provided you use the same names for your steps.

Insert Step

Once you have completed all your steps, you are done – all that’s left is to Apply and Close.

The “trial and error” approach is removed, which means that when Power BI applies the updates to the 25+ million line items, you can be reasonably confident you will not have to revisit your Power Query. This is important here, as with such large data sets the updates can sometimes take hours to complete, depending on your computer’s processing capabilities.

You are now free to move on to the next part of the challenge and concentrate on creating your DAX calculations and a nice neat dashboard.

If I have time to finish my dashboard on this one, I will add it on a future post. As always, any comments or queries, please let me know in the comments below!

5 Quick Tips for next level BAN in Tableau

Everyone loves a BAN (Big Aggregate Number). They are the all-important key numbers in your dataset and should be jumping off the screen and ingraining themselves in the back of your retinas!

But the standard way of creating them in Tableau can be a bit dull and monochrome.

Below I’ll step through the usual way of creating a set of BANs, followed by 5 quick tips to take them up a level, using a recent dashboard I created as part of a Maven challenge.

Standard BAN Creation

Normally, you will have a relatively small set of categories that you wish to show an aggregate value for (“Age Group” in the example below).

We traditionally create this by dragging the category (Age Group) into the columns, then pulling the calculation into the Text marks area.


Voila! We have a BAN. Not very pretty, but a BAN all the same.

You can adjust the header and value for font type, size and colour. Normally, that is about as far as most people go.

What’s the alternative?

But hey – what if you want your BAN in a single horizontal or vertical line, or you want to colour code based on the category or value?

What can we do to customise our BAN and make it that little more memorable?

5 Simple Tips to go from “Boring BAN” to “Badass BAN”

1. Orientation

It is easy to change from a single horizontal to a vertical line by simply dragging the category from columns to rows. This allows you to maximise your data real estate depending on how you are structuring your overall visuals.

2. Headers

We can drop a duplicate category “Age Group” onto the Text marks card. Then right click on the category header and select remove. This will result in the second image below – still not too pretty, but we are on our way.

You can keep the category above or below the BAN by shifting it up or down on the marks card.

However, my preference is to keep it below, as it keeps the focus on your Big Number!

3. Font

Many people may have different views, but my preference is to keep a single font on a dashboard. Having multiple fonts can become an unwanted distraction and give a clunky look.

4. Size

For the numbers, bigger is better!! Make the size of the numbers much larger than any adjacent text to emphasize the contrast.

To do this, select the Text icon in the marks card, and click on the three little dots on the side.


This will bring up the “edit label” input box. Here you can adjust the size and font attributes (bold, underline, italic).
In my example, I chose 36 for the Aggregate Number and 16 for the underlying category.

5. Colour

Adding some colour can help place emphasis on numbers or categories, and help improve the aesthetics and feel of your visual.

Ctrl-dropping the category onto the colour mark allows you to assign distinct colours based on each category, whereas Ctrl-dropping the calculation “CNT(Consumers)” onto the colour mark allows you to assign colour based on the value.

You can also apply a quick table calculation. Here I opted to show a % of total rather than the straight numbers, which gives a good overall perspective. As shown in the below picture, you can right-click on the aggregate number and select the quick table calculation.


Lastly, in my example, I opted to adjust the colour based on the BAN value itself. As I wanted to draw the eye to the highest value, I used a diverging scale from green (#00aa00) to white, offset at -20%. This kept my lowest value just visible, while keeping the focus squarely on the largest number.

Overall

I was pretty happy with the outcome, and was able to apply the same effect to two sets of BAN. This helped maintain the overall important consistency and look when they were brought into the main dashboard.

What do you think?

As always, if there are any questions or comments, please reach out. I am happy to help where I can, and always open to feedback on alternative methods and learning new tricks from the data fam.

Happy Analysing – DG

Maven Challenge – Mexican Restaurant Scouting

It was that time of the month when Maven Analytics set their data playground challenge, #mavenrestaurantchallenge. This time it used 5 separate csv data tables detailing consumer and restaurant data from cities across three states in Mexico during 2012.

The brief was to:

  • assume the role of a Lead Analyst that scopes out new restaurant opportunities abroad
  • review and analyse the data provided for interesting patterns and trends
  • develop a single page visual that gives potential investors a good understanding of the market to guide them in making investment decisions
  • post your final proposal on LinkedIn for review by Maven and fellow challengers

My Plan

As the final product was going to be a single page (jpg) viewed on LinkedIn, it made sense to keep the visual:

  • Simple and clean with a consistent palette
  • Structured and flowing, telling a progressive story
  • Clear in terms of visibility of graphics and legibility of any text

I landed on asking three simple questions:

  • Who? – who were the key demographic to target in terms of age and smoking/drinking habits
  • Where? – which city or location had the highest ratio of consumers to restaurants
  • What? – what kind of cuisine should a restaurant serve based on popularity and levels of current availability

My Solution

I decided to use Tableau for this challenge to try out some newly acquired skills. The below was my final proposal posted on LinkedIn, as well as on my Tableau Public Account

I will share some tips on how I made a few of the visuals in a follow-up blog. If anyone reading has any queries on any part of the visual or its development, drop me a comment, and I’ll be happy to provide further detail.

Happy Analysing!