Maven’s data challenge for April-May 2022 involves the analysis of a set of data related to “Unicorns”.
A few of my work colleagues saw my LinkedIn post related to the announcement of this challenge, and thought I was looking at data related to the horsey-type unicorns, and they were pleasantly amused at the thought.
Needless to say, they now understand what the term means in a business sense (as I had learned what it meant a mere 48 hours beforehand).
Anyway, this is a shorter than usual blog post covering my initial approach to this challenge. It links back to an earlier post I made on LinkedIn about a technique I picked up from Ben Jones in an SWD podcast episode he recorded with Cole Nussbaumer Knaflic.
In it, he described how we can take on the role of a journalist and list out the questions we would like to “ask” the data, drawing out information we can use to build a structured story or article with real depth of meaning.
Step 1 – Looking at the type of data
I did not wish to look directly at the data initially in case it skewed or biased my view on the questions I might be able to ask. This is where a data dictionary comes in very handy. It helps give an overview and context for the subject without giving too much away about the content.
The below is the data dictionary provided with the dataset.
Field – Description
Company – Company name
Valuation – Company valuation in billions (B) of dollars
Date Joined – The date on which the company reached a $1 billion valuation
Industry – Company industry
City – City the company was founded in
Country – Country the company was founded in
Continent – Continent the company was founded in
Year Founded – Year the company was founded
Funding – Total amount raised across all funding rounds, in billions (B) or millions (M) of dollars
Select Investors – Top 4 investing firms or individual investors (some have fewer than 4)
From an initial review, I divided the types of data up into several categories which could help me form a list of questions:
Dates – these can help form time- and trend-related questions
Money – these can form queries on average values, return on investment and growth
Geography – these can help make comparisons at several hierarchical levels
Companies – these can assist in analysis at both high and detailed levels, from both a company and an investor viewpoint
Now that this initial breakdown was complete, I could move on to developing a list of questions I could potentially “ask” the data, which would form the crux of my analysis.
Step 2 – Ask the Questions Already!
OK, so here is a shortlist of questions I made, along with some “subquestions”. I explored and used some of them in my final design, but have left them all listed here in case they help fire the imagination of anyone else on their journey.
Questions Related to Date/Time
Are unicorns a recent phenomenon, and is there any trend in the number of unicorns being created over the past number of years? How does this track the overall economy?
How long does a company typically take to become a unicorn? Is this impacted by the type of industry or country? Is there any apparent trend over time?
Are there certain industries which have become more prevalent in recent years as opposed to say 10 or 20 years ago?
Questions Related to Companies, Money, Funding and Growth
Which industries make up the majority of unicorns and have the highest average valuations?
Which industry or country is seeing the most funding from venture capitalists, and are there any particular industries providing high capital investment return rates?
Which industry or company has experienced the highest or lowest value annual growth rate since joining the unicorn club?
Are investors typically attracted to particular industries, or do they “spread their bets”?
Questions Related to Location
Where in the world (cities, countries or continents) is the prevalence of unicorns highest? Has this changed over time?
Is there any connection between these locations and the type of industry?
Where in the world has the highest value unicorns or those that provide the highest investment return rates?
How do unicorns measure up against GDP or GDP per capita in particular countries? (might need some additional data for this one!)
Are unicorns confined to historic world powers, or are emerging markets and countries taking a lead?
Step 3 – Start your analysis
It would probably not be wise to attempt to answer all these questions; better to focus on a selection that you think will tell a story to “illustrate the current landscape of unicorn companies around the globe”.
I will leave you here with at least an alternative technique for approaching these challenges, and hopefully also a little inspiration or some ideas to start or evaluate your own approach.
Below is my final result of answering at least one or two questions from each section.
There was quite a bit to this challenge, so I am covering it in one blog and one video. This one looks at the strategy and execution of the planned steps I laid out in a LinkedIn post and in the photo below; the separate video looks at the technical aspects of cleaning and manipulating the data, as well as assembling the visuals.
For anyone who has stumbled across this blog accidentally and is not familiar with the challenge, details can be found here.
So let’s begin!
Maven provided the raw data from the remote work survey and some background on its contents; the below is what I took as the brief for the challenge, with the crux purposely highlighted in bold by the Maven team.
For this challenge, your task is to assess the productivity and morale implications of working remotely and outline an ideal policy for the post-pandemic future, presented in the form of a single-page report or dashboard.
Maven Remote Challenge Blog
I used this as the basis for my overall plan for the challenge, which I quickly drafted up using the old school pen and paper method.
Now that was done, it was time to put the plan into motion….
Part A – Considering Outcomes and Goals
When reviewing the brief, I asked myself a few questions that would dictate my outcomes and goals.
Why am I actually creating this report / dash?
Who is it for?
What decision needs to be made as a result of it?
What can I do by way of design to facilitate that decision process?
Apart from wanting to compete in the Maven challenge, I am creating this report to summarize a large amount of survey data and analyze how remote work impacts productivity and morale among workers.
In terms of “Who?” and “What decision?”, I considered my key stakeholders to be the senior management at a generic company, and anticipated that they would require a high-level assessment summarising the positive and negative effects of remote working on productivity mainly, but also on morale, and how this shaped how employees felt about remote work. This would in turn help the company decide what plans to enact in a post-COVID world.
After mulling over these questions and answers, it was fairly straightforward in my mind that my overall outcomes would include:
Outlining the basis for an ideal future remote working policy for a company
Considering any implications (positive or negative) for worker morale and wellbeing
Measuring how productivity was impacted during remote working
Summarizing the above in a high-level explanatory report
These aspects, as well as the fact the requirement was for a single page static report/dash, meant that firstly, I would be analyzing at a high level rather than a granular level. The only reason to go to a granular level would be to analyze a specific industry or demographic, which I did not consider part of my brief (rightly or wrongly).
Secondly, I would be looking to produce an explanatory set of visuals rather than an exploratory set.
What’s the difference?
Exploratory vs Explanatory
The storytelling with data blog will define this more eloquently than me, but basically an exploratory report would allow the user to manipulate data through filters and slicers and search for their own answers to questions they may have. For explanatory reports, someone has already done the “exploration” on the data, and is presenting what are hopefully the key points with supporting data and visuals.
Part B – Metrics and Comparisons
Now that a general approach had been defined, I looked at potential comparisons and metrics that could be used to determine company policy. I needed to keep in mind that if a company is going to base its policy on these metrics, they need to be robust enough to allow a decision to be made.
You may think it strange that I haven’t mentioned looking at the data yet: how would I know what metrics or comparisons I would be able to make?
While this is true, I believe it is good to have an idea of what the most appropriate metrics could be in order to fulfil your brief, even before you look at the data in any kind of detail.
Productivity
I know I have two data sets, one for 2020 and one for 2021. The obvious approach would be to measure productivity in each year separately and then do a year-over-year (YoY) comparison. If the data is available, I could also look at a further comparison with “normal working” productivity. This will tell me whether remote working has had a positive or negative effect, and whether that effect was sustained between 2020 and 2021.
Morale
Similar to productivity, I would look at changes in any issues that may affect morale or working conditions between “normal work” and the periods of remote work in 2020 and 2021. Again, if possible, I would like to see how the stats change over time.
General Opinion
I am guessing that the survey will ask people’s opinions of working remotely, and whether they are generally in favour of it and whether they would like to retain it as a benefit long term. If so, I could measure any change over time, or the ratio of people in favour versus those not in favour, and this could also feed into the overall future policy to be developed.
Part C – Reviewing the Data
With my ultimate goals set and an idea of the type of metrics I felt would fulfil the brief, it was time to look at the data itself.
This was a little daunting at first. It wasn’t the fact that each survey had around 1,500 respondents; rather, it was that the 2020 survey had 73 questions and the 2021 survey had 109 questions. Add to that, many of the questions were quite lengthy and served as the column headings, which made it quite difficult to know exactly what you were looking at!
Quick solution
My quick solution for reviewing? Copy and transpose the header row from each file into a new table. This makes it much easier to read, and easier to see how many of the questions are actually grouped together.
It also allowed me to add a few columns to each table aligned to the metrics. As I reviewed each year’s questions, I could tick whether I thought particular questions might be related to assessing morale, productivity or an overall policy. Having this list also came in very handy as a reference when the data was eventually loaded into Power BI.
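For anyone who wants to replicate the transpose trick in Power Query, a minimal sketch might look like the below. The file path and the “Question” column name are my own illustrative choices, not the names from the challenge files.

let
    // Load one survey file (path is illustrative)
    Source = Csv.Document(File.Contents("C:\data\survey_2021.csv"), [Delimiter = ",", Encoding = 65001]),
    Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // The column names are the survey questions; turn them into a one-column table
    Questions = Table.FromList(Table.ColumnNames(Promoted), Splitter.SplitByNothing(), {"Question"}),
    // Add an index so each question can be traced back to its original column position
    Indexed = Table.AddIndexColumn(Questions, "Column No", 1, 1)
in
    Indexed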
As an example of the transposed table, below is a sample of a few questions from the 2020 and 2021 files that would touch on productivity.
Once I had reviewed and listed out all the questions I thought were applicable to my goals, I was happy to move on to the next step and start some provisional review and analysis of the survey results.
Part D – Pull in Data and Develop Plan
There were two distinct csv files for the surveys, which I loaded into Power Query and Power BI. The respondents in each survey are not linked, so there was no immediate need to relate the files in a model.
On reviewing the questions and answers for each data set, I narrowed my focus to a limited set that I could build a plan around. This allowed me to create a storyboard and then further detail each aspect – all of which is laid out below.
Storyboard
I usually try to sketch out an outline plan or storyboard once I have reviewed the data and have a general idea of what I want to show and how I want it structured and flowing. There are usually a few changes along the way, as you will see in the final product. However, you will note here that I wanted a main headline to capture productivity, morale and the policy, and then to expand on each aspect in separate sections.
Productivity – Self Assessment
I wanted to calculate the productivity ratings in 2020 and 2021 and compare them. Both surveys asked a similar question about how employees rated their own productivity compared to working in the workplace, so I could use it as an effective comparison.
The answers were spread across a large number of % values expressing degrees of more, less or the same productivity. I decided to combine and reduce these to 5 categories, as shown below in my sandbox draft version. I also wanted to show the % change for each category to emphasize the increase in productivity between the two years, also shown below. Text could then be added to further summarize and explain the point.
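As a rough sketch of that consolidation in Power Query, assuming the answers have first been converted into a numeric column I have called [ProductivityChange] (percentage points versus the workplace), a conditional column does the job; the band boundaries here are purely illustrative.

// Collapse the many % answers into 5 bands (query and column names are assumptions)
Banded = Table.AddColumn(Survey, "Productivity Band", each
    if [ProductivityChange] >= 30 then "Much more productive"
    else if [ProductivityChange] > 0 then "Somewhat more productive"
    else if [ProductivityChange] = 0 then "About the same"
    else if [ProductivityChange] > -30 then "Somewhat less productive"
    else "Much less productive")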
Productivity – Management Assessment
As the above were self-assessments, I wanted to develop the next level of narrative and to see if managers saw the same impacts on productivity. I was able to find a survey question directed to managers, to see how they rated the productivity of their own employees.
Productivity – Occupation
To complete my narrative on productivity, I wanted to go slightly more granular. The results so far showed generally improved productivity, but for balance I wanted to see whether particular occupations were more likely to experience lower productivity. This could allow management to request further analysis if their companies employed many people in occupations with lower rates of productivity.
Morale – Barriers
There were no questions in the surveys which explicitly dealt with morale; however, I noted that the 2021 survey had a set of questions on potential barriers to remote working, asking whether people’s experience of each barrier had improved, worsened or stayed largely the same compared to normal working. These barriers included issues such as:
motivation
isolation
health
collaboration
poor management
I took these as being related to morale. I consolidated these questions using an unpivot technique in Power Query (sketched below), and consolidated the ratings into three simple categories: “worsened”, “same/no barrier” and “improved”. This allowed me to quickly see how each issue affected employees in 2021.
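A minimal sketch of that unpivot-and-consolidate step, assuming the 2021 file is loaded as a query called Survey2021 with a “Respondent ID” column (in reality the column names are the full question texts):

let
    Source = Survey2021,
    // Keep the respondent ID fixed and unpivot every barrier question into attribute/value pairs
    Unpivoted = Table.UnpivotOtherColumns(Source, {"Respondent ID"}, "Barrier", "Response"),
    // Collapse the detailed answers into three simple ratings
    Rated = Table.AddColumn(Unpivoted, "Rating", each
        if Text.Contains([Response], "wors") then "Worsened"
        else if Text.Contains([Response], "improv") then "Improved"
        else "Same/no barrier")
in
    Rated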
To supplement this, I noticed there were additional standalone questions related to isolation, health and wellbeing. My plan was to bring these all together in an extended visual with a supporting narrative (see the draft result below). It was also an opportunity to try out the Gestalt principle of enclosure, boxing around some of the bar chart graphics in an attempt to visually connect them.
Sandbox sample graphs for morale
Morale – Working Times
Having seen from the data that people were more active and feeling better, I thought the next level would be to examine the typical working day and see what impact remote work was having on people’s general activities.
There was a set of questions in 2021 about the hours people were assigning to daily tasks. I noted there were similar questions in 2020, but these contained significant arithmetic errors, which meant I could not make an equivalent comparison, so I discarded them.
I again performed an unpivot of columns in Power Query to consolidate the data, and found a significant average drop in time spent commuting (expected) and an increase in personal and family time, which you could potentially relate to increased wellbeing and morale. This was in spite of people actually working a slightly longer workday on average.
Policy – Employee Expectations
Finally, I thought the policy section should provide the story arc that brings the productivity and morale factors together. I came across a question that I thought could knit it together: how much time employees wished to work remotely in the future.
Again, the answers were categorised into % values at intervals of 10, with some text answers as well. I decided to consolidate these into day values to give a simpler, easier-to-read view. This would show management that there was a significant appetite among employees for remote work, but also that a large number of people wanted very minimal or no remote work. I highlighted this in red to purposely draw attention and show there was a potential need for flexibility.
Policy – Retention and Attraction
Lastly, to supplement the above with closely related data, there were questions asking managers whether they believed offering remote working would improve employee retention and attract new talent. Instead of using charts or graphs for these supplements, I decided to just use aggregated numbers and supporting text.
Stitch it all together
Once I had completed all my visuals in separate sandbox tabs, I copied them into a combined report/dash, which resembled the sketch I had shown earlier as my initial storyboard.
My initial draft was complete, and I added some summary headings to each section to highlight the key take-aways from each set of charts.
I was reasonably happy with the draft, but I wasn’t quite finished just yet.
Part E – Review Outcomes and Goals
I felt that individual components of my goals had been achieved by each of the visuals I had included, as well as how they were grouped.
However, as part of my review, I took a tip from storytelling with data and Maven’s Thinking Like an Analyst course, and used the opportunity to get feedback from a member of the Maven community who is also a subscriber to this blog (Marjolein Opsteegh).
After working with the data on and off for a few days, the intention was to step back and get a set of fresh eyes to briefly study the collection of visuals, to see whether it was readily understood, flowed naturally and captured the brief.
Marjolein was generous enough to take the time to give some constructive feedback and criticism, which is a real learning process I actually enjoy. There were some general layout and formatting tips, but the crucial comment was to bring the overall outcomes and policy to the fore, and put them top and centre of the dash, rather than at the bottom which I had in my draft.
Part F – Fine Tune and Submit
After the review, I rearranged the running order of the dashboard, which made the summary and overall policy the first thing you see. I then followed it by productivity and morale, with some slight formatting on the headers.
Overall, I was quite pleased with the result, and was happy to apply some of the techniques I had been reading about in Cole Nussbaumer Knaflic’s storytelling with data book, a highly recommended read.
One change I might make, after having had the chance to look at it for a few days, would be to increase the overall size of the visual and provide a little more white space, giving some of the visuals more breathing room from each other.
[UPDATE: As noted in the intro, I opted to support this blog with a walkthrough video of creating each of the visuals, especially those that used the unpivot and consolidation functions in Power Query.] For further work on Maven Challenges, stay tuned on LinkedIn, follow my Twitter account (which is notified whenever I write a blog), or alternatively just follow this blog.
Any feedback or queries, please drop them in the comments below – Thanks!
From reading the comments of many data viz experts and tutors on platforms such as LinkedIn, one recommendation that keeps popping up is that any aspiring or practising “visualizer” should maintain a portfolio to showcase the skills and breadth of work they may have.
I have taken that advice and put together this portfolio-style blog. Most of the visualizations I create professionally are not something I can share in the public domain, so I am opting to showcase the fun challenges and trials I have taken part in over the last 6-8 months.
These mostly comprise the Maven Analytics challenges, performed mainly in Power BI, although there are a few Tableau examples in there too. There are also other examples from Onyx and Dataworld.
I have decided to post the visualizations in reverse chronological order for each section, to (hopefully) show some kind of progress in technique and presentation.
I will keep this “live” as a repository for my public displays.
Maven Challenges
The details for all these challenges can be found here.
2022
Maven Space Challenge – Current Entry
This is the current challenge in August and September 2022, and relates to analysing space mission data from the 1950s to present, and coming up with a visual that captures the awe of space travel. I felt that the data required some supporting context to bring alive the “awe” of travel, and focused on missions which either visited other planets within our solar system, or journeyed beyond our realm.
As such, I experimented with some visuals to try and bring out that story. I took a little inspiration from the Beastie Boys for the title of the visual, with the fonts taken from their album Hello Nasty. Below is my entry, and my LinkedIn submission post.
Maven Telecoms Churn Challenge – Winner
This challenge was finalised in July 2022, and related to a set of customer profile data for a fictional telecoms service provider in California. The challenge brief was to identify high-value clients, examine churn risk, and look at what steps could be taken to retain clients.
This analysis involved creating a set of normalised metrics to look at both value and risk, based on the profile characteristics of the clients. I have done a more detailed write up on my approach here.
Maven Airlines Challenge
This challenge ran from May through to mid June 2022, and was in relation to a fictional airline which had crossed the line of having more than 50% of passengers feeling unsatisfied with their experience of travelling with them.
The challenge was to analyze around 130,000 survey responses which included Likert type data, and investigate key areas of improvement which could help get the airline back on track with their passengers. Here is a link to my LinkedIn post, and a snapshot below of my work performed in Power Query and Power BI.
Power BI
Maven Unicorn Challenge – Joint Winner
This challenge was between April and May 2022, and used a summary dataset looking at the valuation, funding, location and investors involved in global unicorns – the $1bn+ privately owned companies, and not the other fantasy figures!
The brief was to illustrate the global landscape of unicorns, and below is my proposed report, which was posted on LinkedIn as usual.
Maven Remote Work Challenge – Finalist (2nd)
This challenge took place during March 2022. It involved analysing the results of two comprehensive surveys which posed over 100 questions to in excess of 1,500 people on the subject of remote working both during and after the recent COVID pandemic lockdowns in NSW, Australia.
I had to provide advice on a proposed remote working policy in a post-COVID world, and translate the qualitative data from the survey to provide quantitative insights into what effects remote work had on productivity and morale. Below is my final report, which was also posted on LinkedIn. I note that I also used this visual in the March SWD challenge.
Power BI
Maven SuperBowl Challenge – Finalist
The challenge ran from January to February 2022, and involved analysing historical Super Bowl advertising data in order to propose an advertising strategy for an up-and-coming car company. My original posting is on LinkedIn.
Power BI
2021
Maven Magic (Harry Potter) Challenge – Finalist
This challenge ran from December 2021 to January 2022. It involved reviewing the film scripts for the Harry Potter movie series and coming up with a way to visualize the “magic” of the movies.
My submission scored as a Finalist (my third in a row), with the critique recorded here by the team at Maven Analytics. I also created a few of my own videos showing some of the techniques I used in my data prep and analysis.
Power BI
Maven Taxi Challenge – Finalist
This challenge ran from November to December 2021, and involved following a set of criteria on what needed to be presented. It was a real challenge from a data prep point of view, as it encompassed in the region of 27 million line items of journey data.
Because of the complexity of the data load and set of data cleaning steps, I created a video showing my strategy and steps, which has proved to be relatively popular. I made the finals for the second time, which I was very pleased with, and took into account the comments provided by the Maven team.
Power BI
Maven Restaurant Investments Challenge – Finalist
This challenge ran from October to November 2021. It involved assessing customer and restaurant related data in several cities in Mexico to assist investors in selecting a location and type of restaurant that would prove popular.
This was around the time I was experimenting with Tableau, and I decided to use it for this challenge. A good advantage of Tableau is that the public platform allows everyone to visit and fully interact with each visualization or dashboard.
I was pretty chuffed and surprised to make the finals for the first time, and I learned that key questions and summary take-aways are powerful techniques for laying out a presentation.
Tableau
Maven Olympics Challenge – Entrant
This challenge ran around the time of the Tokyo Olympics, and was my first foray into the Maven Challenges, after having taken their Excel courses and a few Power BI courses. Instead of using all the data from both the summer and winter games, I decided to focus solely on female participation in the summer games.
Looking back now, although I tried to provide some structure and flow, it was quite busy and maybe included too much information. I was quite interested in the difference in participation between communist and non-communist countries during the Cold War, and I should have honed in and explored that in more detail. But it was a good introduction, and it spurred me on to learn more.
Power BI
Onyx
I have followed the Onyx challenges for several months, but finally decided to join in June 2022, starting with the Forbes Billionaires challenge.
Forbes Billionaires
The base requirement of this challenge was to review the raw Forbes data and provide a summary of the greatest philanthropists, as well as the industries with the most successful billionaires. Here, I focused on those base requirements only, and aimed for the feel of a Forbes-style article, giving a high-level synopsis rather than a deep dive. The work was done in Power BI, and the LinkedIn post is here.
Storytelling with Data
Being an avid listener of the storytelling with data podcast and a reader of the book by Cole Nussbaumer Knaflic, I joined their online community to take part in their monthly challenges. These are more sprint-like challenges compared to Maven, and are good for honing particular aspects of visual communication.
February 2022 – Focus and Declutter
This challenge can be found here. The goal was to take a busy and potentially confusing data visual and apply the techniques taught by Cole to declutter the noise surrounding the visual and focus in on the story behind the graph. By removing unnecessary components and using colour appropriately to direct focus, I wanted a viewer to be able to take the following away within 5 seconds of looking at the graph:
Philadelphia is the only city to have a higher crime rate than in 1990
Other cities generally have seen reductions in crime rates
New York is the stand out in terms of overall reduction.
Power BI
Dataworld
I had started looking to take part in the Makeover Monday data viz series, but as soon as I did, it ceased to exist! I had thought I would have time to go back and try out the old challenges, but just haven’t had any spare time lately, so there is only one example below.
Workout Wednesday has however taken on the mantle, and I intend to work on some of the Power BI related challenges, which sometimes ask you to recreate a Tableau visual.
The datasets are usually quite small compared to the Maven challenges, so in theory they should make for “quick” practice.
The Dark Web
This challenge involved looking at the price index for various illegally obtained goods on the dark web.
I used this challenge to practise using parameters in Tableau, which I think were put to relatively good use when tried out in Tableau Public, as well as to try a kind of “dark mode” background with light coloured text.
Tableau
Other Miscellaneous Work
Social Media
I created some template Power BI dashboards to present the interactions of a fictional company on LinkedIn and Twitter. I used this mostly as practice for Time Intelligence functions. I created an accompanying video and blog, which allowed people to recreate it, and it has proved relatively popular.
Power BI
Weather Data
This was quite a simple dashboard put together in Power BI to look at data collected by a weather balloon travelling over Germany. I used this primarily to practise working with geospatial data as well as grouping scatter charts.
Power BI
Sankey Charts
This was in some ways related to my work life, as I used it as a pilot for something I later presented at work.
This is a visual rather than a full dashboard; I thought Sankey charts would be a good way of visualising the change in risk status before and after mitigating steps were applied.
I also learned how to use particular DAX functions to create the necessary data table structure to utilise the Sankey Chart visual in Power BI. Again, I created an accompanying video and blog, which I still see people interacting with most days.
Where it all began!
This is where it began about 9 months ago in mid 2021. As I was taking the basic Tableau course with Maven Analytics, I began to experiment with some data I picked up. In this instance it was the great debate of Ronaldo vs Messi.
Looking back now, I can see plenty of little touches I could make to improve it, or other ways I could display this, but it is also a good reminder of the journey taken, mostly with Maven, over the last 9 months!
Tableau
So, if anyone is looking for inspiration to create their own portfolio, or to see what improvements they can make to their visuals inside a few months, hopefully you can find it here.
Rather than jumping straight into snapshots of the Power BI platform and the process of putting the visuals together, I thought I would take a few steps back and go through the plan and strategy I now follow before attempting these types of challenges; without that, you can often create a confused message. I pick up new lessons with every challenge, and find it an excellent form of self-development.
Final Submission
My Key Steps in Planning and Strategizing
For this particular challenge, I broke my process down into the following steps, but they are equally applicable to other work I do:
Read the Brief
Consider the End User
Summarise Scope and Objectives
Review the Raw Data
Consider available metrics
Develop a story or flow
Sketch your layout and structure
Execute, Review and Polish
Step 1 – Read the Brief
The challenge noted the following:
We’ve just added a brand new data set to the Data Playground, containing data from Super Bowl commercials for 10 popular brands this century
and
For this challenge, you’ll be assuming the role of Marketing Analyst at Maven Motors, an up-and-coming US car manufacturer looking to make a splash in the market. They have approved the budget to run a TV spot during the 2022 Super Bowl, but need you to analyze historical data to help guide the creative direction.
Your task is to recommend a data-driven strategy for the Maven Motors Super Bowl spot, and present it in the form of a single page report or dashboard.
They are asking you for two things, namely to “analyze the historical data to help guide the creative direction…” and to “….recommend a data-driven strategy for the Maven Motors Super Bowl spot.”
So, we need to keep these two points in the forefront of our mind as we progress.
Step 2 – Consider the End User
Perhaps the most important step and question to ask yourself: who am I creating this report for, and how do I want them to feel when they look at it? This will dictate the style, format and presentation.
For example, the Harry Potter challenge or the Jordan vs LeBron challenge would encourage the use of colours, graphics and specialised visuals to engage the general viewer and add an element of fun and excitement.
My Harry Potter Magic Submission
However, as this challenge is aimed at the decision makers within a large corporation, the style and format should be very different. For this, if I imagined being in their shoes, I would like to see the following type of summary report or dashboard:
Structured – easy to read and flows well
Professional – the product should align with the standards and values of the company
Related to the brief – no need for superfluous information that distracts or wastes time, something I don’t have a lot of!
Comprehensive but to the point – only pick and present the key findings
Able to invite a decision – use the data and recommendations to enable an easy decision to be made
Step 3 – Summarise Scope and Objectives
Following Project Management 101 techniques, I then documented my scope and objectives. This may seem like a duplication of the above, but is helpful as a reference to always come back to. For this reason, I usually write this out and have it sitting to the side to glance at as I work on the final solution:
Scope Notes
In case you can’t read my writing:
Scope
Provide a 1 Page Report
Analyze Historical Data
Provide Recommendations for a data-driven strategy
Objectives
Enable board members, who are “time-poor” people, to have sufficiently succinct information in order to make a key decision on the direction of the marketing strategy
Be clear, concise and professional
Enable board members to be confident in your analysis (i.e. structure, methodology, analysis all appear credible, therefore we can put a measure of trust in the recommendations).
Step 4 – Review the Raw Data
It’s only now that I actually come to open the data files and look to see what is in there.
I found that, having performed the above steps, when I started looking at the data I was immediately considering whether each particular type of information or column was relevant to my brief or objectives.
I will briefly go through my main thoughts when reviewing the data (snapshot below for 2000 and 2001), how they related back to the scope, and what I would look at for more detailed analysis.
I would note at this point that this usually takes a few days, as I often like to let it ferment a little while I sip on a flat white or two.
2000-2001 data snapshot
Data Not Considered Relevant
Here’s point number one – NOT ALL DATA WILL BE RELEVANT.
Therefore, it was important to put aside any items that are not particularly useful in relation to the scope of works. Looking at the above table, these are highlighted in light blue.
Individual Brands
You may argue that the brand is important, but I would argue otherwise.
What’s the difference between Coke and Pepsi, or Bud and Bud Light, in the context of this data? I don’t think this data set is going to tell you, especially with the individual sample sizes involved.
What may give insight is the type of brand (food, cars, alcohol, etc.), which I will address later. So for me, the decision was to ignore individual brands.
Ad Links
The links themselves have no data analysis value for me here. Their only use may be to look at the actual ads, or provide a link in your dashboard or report. Therefore, I ignored these.
Estimated Cost
You may think: Wowwww! How can you ignore cost??? Well, I am not totally ignoring cost, but the total cost provides little insight, as there is the variable of ad length to consider.
The longer the ad, the higher the cost, right?
So, I checked if there is a correlation with a quick cost/time calculation. Guess what? The $ cost per second is constant for each year. Therefore I ignored the total cost, and focussed on the $ cost per second instead.
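For what it’s worth, the check itself is only a couple of lines in Power Query; the “Estimated Cost” and “Length” column names are my assumption of the dataset’s fields.

// Add a $ per second column (query and column names are illustrative)
CostPerSecond = Table.AddColumn(Ads, "$/s", each [Estimated Cost] / [Length], type number),
// Group by year; if Min and Max agree within a year, $/s is constant for that year
PerYear = Table.Group(CostPerSecond, {"Year"}, {
    {"Min $/s", each List.Min([#"$/s"])},
    {"Max $/s", each List.Max([#"$/s"])}
})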
YouTube Likes
Again, you may wonder why I might be interested in YouTube views but not likes. Well, this is my personal insight. I watch a lot of YouTube, and I enjoy a lot of what I watch, but I rarely feel the need to click “like”; I would be more likely to subscribe. I have heard that YouTube itself does not place much importance on “likes” these days either; it is more likely to track views, duration, click-through, etc.
Additionally, the number of views does not underestimate the number of people who have engaged with the video (even if they only watched for a few seconds), but the number of likes will underestimate the number of people who actually liked it, as clicking the like button is not compulsory! Again, I knocked that one off my analysis list.
TV Viewers
This may or may not be another surprise to some. If you take the time to look at the TV views for any given year, you will see they are pretty much constant – see below (I recall there was one outlier). You can read into this that the money you pay to advertise during the Super Bowl gives you direct access to everyone who is watching.
As it is constant, the duration, brand, type, etc. are not directly impacting the number of viewers. I would only take this into account if you had a live feed of figures while the ads were being watched.
So, you guessed it, I kicked TV Viewers to the kerb!
TV Viewers – Year 2000
Data Considered Relevant
OK, ok….. so I have pretty much now ignored a large part of the dataset. So what exactly am I actually interested in, and what information relevant to the scope do I think I can get out of it? Well, let’s see now….
Year
An easy one – we want to potentially see how our variables change over time, so we will definitely be keeping Year
Brand Category
With ten brands, it’s a little difficult to compare. However, if we can reduce this to a smaller set of categories, it may be of benefit. In the back of my mind, I was thinking that as Maven Motors is a motor company, it might be useful to look at the performance of Kia, Hyundai and Toyota as a group rather than as individual companies, as this effectively provides a larger single sample size.
Following that, it made sense to group the others into Alcohol, Snacks and Drinks, Sports and Services, and see if there were any trends among those categories.
By the way, I created a YouTube video showing you how to create those categories.
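For a flavour of it, a sketch of that conditional column in Power Query might look like the below; the brand-to-category mapping is illustrative rather than the exact one I used.

Categorised = Table.AddColumn(Ads, "Brand Category", each
    if List.Contains({"Kia", "Hyundai", "Toyota"}, [Brand]) then "Cars"
    else if List.Contains({"Budweiser", "Bud Light"}, [Brand]) then "Alcohol"
    else if List.Contains({"Doritos", "Pepsi"}, [Brand]) then "Snacks and Drinks"
    else if [Brand] = "NFL" then "Sports"
    else "Services")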
Video Characteristics
It was difficult to glean any real insight into the 7 categories without more detailed analysis. Further, it is part of the brief to influence the “creative direction” of the advertisement. Therefore, it was a no-brainer to retain this data.
Again, I created a YouTube video showing you how to “unpivot” these characteristics for more effective analysis.
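As a rough sketch, and assuming the characteristics are true/false columns with names such as “Funny” or “Celebrity” (my illustrative names), the unpivot might look like this:

// Keep the identifying columns fixed and unpivot the characteristic flags
Unpivoted = Table.UnpivotOtherColumns(Ads, {"Year", "Brand", "Youtube Views"}, "Characteristic", "Present"),
// Keep only the rows where an ad actually displays the characteristic
HasCharacteristic = Table.SelectRows(Unpivoted, each [Present] = true)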
Length
Further, I saw that ads came in varying lengths, and there might be a trend over time that would allow me to make a recommendation based on the current trend – so that was kept in too.
YouTube Views
I retained this as one of my key data points. As Maven Motors were already gaining access to the TV network through having a large marketing budget, I believed that a true test of whether an ad was “creatively successful” (viral), was whether people then took the time to seek it out on YouTube to watch again, or felt the need to share it with friends, family and their social networks.
$/s
As discussed earlier, I saw that the $/s for advertisements was constant for each year, which makes sense. I kept and used this data point as I would be able to see the trend in cost over time and potentially give further assurance to the board on likely final expenditure (even though budgets were already approved).
Number of Ads per Brand per Year
One further data point I noticed was that some brands were having more than one ad per year. The below example shows Budweiser using 13 ads over 4 Super Bowl events. Again, this is something I thought would be interesting to look at further to see if there were additional recommendations I could make on the potential number of ads to run.
Budweiser Ads – 2000 – 2003
Step 5 – Consider Available Metrics
Considering all of the above, I came up with the following list of data points or metrics I wished to examine and present and make data-driven recommendations on:
The number of ads in a campaign year – what was the average number of ads per year per brand category?
Ad duration – what is considered a standard ad length, and what is the trend over time telling us we should do this year?
$/second – what is the historical trend in ad cost, and can we forecast the range for the coming year?
Average YouTube views vs ad characteristics – analyze each characteristic and see which is more likely to garner additional views (see the sketch below).
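Building on the unpivoted characteristics sketch above, the views-per-characteristic metric could then be derived along these lines (column names remain assumptions):

ViewsByCharacteristic = Table.Group(HasCharacteristic, {"Characteristic"}, {
    {"Avg YouTube Views", each List.Average([#"Youtube Views"])},
    {"Number of Ads", each Table.RowCount(_)}
})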
Step 6 and 7 – Develop a Story or Flow and Sketch Out
The next steps were to capture the entire process above in a single structured sheet that would form my final report. My story needed to:
Set the scene (provide context),
Lay out my methodology,
Establish the scope of the data analysed
Present the key data and insights
Give a summary recommendation
Again, I find it useful to sketch this out on a piece of paper (see below), along with even some potential visuals I have in my head.
I note that for this report I included a significant amount of text. This was on purpose, as one of my key objectives was to enable the board members to have confidence in my analysis. I felt that if I laid out the context, methodology and recommendations in clear, concise language, further backed up by data, this would provide the level of confidence required in such situations.
Report Structure Sketch
Step 8 – Execute, Review and Polish
Once the above is set, it is a matter of prepping and cleaning the data (categorization, etc.), establishing the measures, and then building the visuals into the structure I had envisaged.
One thing I like about Tableau is creating visuals on separate pages, and then constructing a final dashboard. I sometimes replicate that feel in Power BI, where I will trial and build individual visuals on separate tabs before bringing them in. Below are a few samples.
Data Scope and Ad Length Visuals
This gets the concepts down, so I know the type of data I want to show and in what general format.
Options for Ad Characteristics – Phase 1
I will experiment with how things are presented and how well the visual can communicate the point I want to make.
Options for Ad Characteristics – Phase 2
Once I am comfortable with the concept, I will look to “polish” it. What does this mean to me?
Polishing Steps
As I said, once I am happy with the story, flow, structure and type of visuals I am using, I will move on to “polish”, which for me usually involves the following:
Selecting an appropriate colour theme/palette and applying consistent colours for headings, text and visuals. There are lots of websites that help generate themes and palettes.
Applying consistent fonts and font sizes
Removing superfluous information:
axis titles (if already obvious)
axis values (if using data labels)
repeated legends, which can be replaced with consistent colouring
gridlines, axis lines, etc., unless really needed
any other item that is not conveying information
Giving concise visual titles
Aligning titles, text boxes and visuals (the eagle-eyed will note a few of my visuals are off centre!)
Providing consistent spacings
Applying these steps allows me to move from the above visuals to the below final versions, which I believe met my goal of giving clean, concise and professional looking visuals.
If you have made it to the end of this, congratulations!! I apologise that my blogs are maybe not as short and to the point as my dashboard, but I hope this provided some insight into the process I took in developing my challenge submission, and maybe provided you with some tips and tricks for the next one.
Let me know what you think – would you change anything, or do you have a good alternative approach? I would be interested to hear.
Also, if you would like to see a video version of this, and maybe some exploration of actually creating the above visuals, leave a comment below or drop me a message on LinkedIn.
This is a walkthrough of the cleaning I undertook on in excess of 25 million lines of data as part of the #maventaxichallenge, the monthly data visualization challenge set up by Maven Analytics. This month the challenge involved looking at detailed NYC taxi data between 2017 and 2020 and developing a usable single-page dashboard to analyze weekly trends.
Details of the challenge can be found here, including the data files provided and the requirements for cleaning and presenting the data. I have also created a walkthrough of the below on a video link if you want to see it in live action.
Plan
My plan was fairly simple but structured:
Write a clear list of the cleaning steps linked back to the data dictionary
Create a Data Sample
Apply the cleaning steps to the sample and verify
Copy out the M Code for each step
Load the full data set and apply the M Code for each step
Step 1 – Cleaning List
So, the following were the steps required, and what that translated into using the data dictionary:
Cleaning Request → Data Dictionary Translation

Let’s stick to trips that were NOT sent via “store and forward”
→ store_and_fwd_flag = N

I’m only interested in street-hailed trips paid by card or cash, with a standard rate
→ trip_type = 1 (street-hail), payment_type = 1 or 2 (card or cash), RatecodeID = 1 (standard rate)

Let’s assume any trips with no recorded passengers had 1 passenger
→ if passenger_count = 0 or null, then replace with 1

If a pickup date/time is AFTER the drop-off date/time, let’s swap them
→ if lpep_pickup_datetime > lpep_dropoff_datetime then lpep_dropoff_datetime else lpep_pickup_datetime
→ if lpep_dropoff_datetime < lpep_pickup_datetime then lpep_pickup_datetime else lpep_dropoff_datetime

We can remove trips lasting longer than a day, and any trips which show both a distance and fare amount of zero
→ add a column lpep_dropoff_datetime – lpep_pickup_datetime, then filter out values >= 24 hours
→ if trip_distance = 0 AND fare_amount = 0 then filter out the row

If you notice any records where the fare, taxes and surcharges are ALL negative, please make them positive
→ if fare_amount < 0 and mta_tax < 0 and extra < 0 and improvement_surcharge < 0 and congestion_surcharge < 0 then “all negative” else “ok”
→ then apply the “trick” replacement to flip the values (see below for more detail)

For any trips that have a fare amount but have a trip distance of 0, calculate the distance this way: (fare_amount – 2.5)/2.5
→ if fare_amount > 0 and trip_distance = 0 then (fare_amount – 2.5)/2.5 else trip_distance

For any trips that have a trip distance but have a fare amount of 0, calculate the fare amount this way: 2.5 + (trip_distance x 2.5)
→ if trip_distance > 0 and fare_amount = 0 then 2.5 + (trip_distance x 2.5) else fare_amount

Cleaning Requests and Action Steps
Step 2 – Create Data Sample
It would be next to impossible to apply the steps to over 25 million lines of data and then easily verify that all the filters, additions, replacements and modifications had taken place and produced the results you were looking for.
A much more digestible method is to create a sample data set based on the data we were provided, ensuring that at least one example of each cleaning-step scenario listed above is included.
To do this, I took the smallest file (2020), and performed a series of deduplications on values across the locations, passenger numbers, etc. so that I was able to get a small sample of approximately 50 varied line items.
Then, in order to recreate some of the cleaning scenarios, I made minor adjustments to some values, e.g. swapping the drop-off and pick-up times. Above is the resulting sample data set.
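For illustration, the deduplication itself can be done with Table.Distinct over a handful of key columns; the columns chosen here are my own picks.

// Keep the first row for each distinct combination of the key columns,
// giving a small but varied set of line items to test the cleaning steps on
Sample = Table.Distinct(Trips2020, {"PULocationID", "passenger_count", "payment_type"})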
Step 3 – Apply Cleaning Steps to Sample and Verify
Next, I created a new Power BI file, uploaded the sample data set and then moved to edit the set in Power Query.
Sample data in Power Query
After performing the usual checks that the data types (text, date, time, numbers, etc.) had been applied, it was then a case of applying each cleaning step and working through the list. For example, the first step became:
= Table.SelectRows(#"Changed Type", each [store_and_fwd_flag] = "N")
M Code Replace Hack
There is one hack that is really worth highlighting here, as it will save some added columns and processing time in your Power Query tasks, especially on larger data sets. The below gallery shows a snapshot of each step, but here is a brief description:
Create a “Dummy Replace” step for the column you wish to replace values in.
Select the Transform ribbon, and click on Replace Values (picture 1)
Choose two dummy values to replace that would not be values in your column. As an example here, I chose to replace 5000 with 6000 (picture 2), where most values in that column would actually be single digits.
Click OK, and you will see the M code in the formula box at the top (picture 3)
Create a “Dummy Custom Column” step to allow you to write the conditional statement you want to insert into your Dummy Replace code
Select the Add Column ribbon, and click on Custom Column (picture 4)
Use IntelliSense to help you write the conditional statement you wish to create (picture 5)
Copy the statement, and click cancel. This way you are not creating an additional column.
Return to your Dummy Replace step and perform the following:
For the first value (5000), amend this value to refer to the column you want to replace values in, and precede it with “each”.
In this example, I replace 5000 with each [fare_amount]
For the second value (6000), amend this value to refer to the conditional statement you want to apply to the selected column, and again precede it with “each”.
In this example, I replace 6000 with each if [negatives_charges] = “all negative” then [fare_amount]*-1 else [fare_amount]
Once you hit return, the replacement of values occurs within the column, without the need to create an additional column. This will increase efficiency for any refreshes.
If you are interested, the “normal language” logic behind this step is:
For each row value in the column fare_amount, if the corresponding row in the negatives_charges column is equal to “all negative”, then multiply the fare_amount by -1, changing it from a negative to a positive value. Otherwise, keep the fare_amount value as it is (no change).
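Putting the whole hack together, the finished step looks something like this in the Advanced Editor (the step name “Flagged Negatives” is my own rename for the preceding conditional-column step):

#"Fix Negative Fares" = Table.ReplaceValue(
    #"Flagged Negatives",
    // the value to be replaced, evaluated per row
    each [fare_amount],
    // the replacement, evaluated per row
    each if [negatives_charges] = "all negative" then [fare_amount] * -1 else [fare_amount],
    Replacer.ReplaceValue,
    {"fare_amount"}
)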
Step 4 – Copy out M Code
Once you have gone through each of the steps on your sample set and verified that it has had the desired effect on your line items, you will have the list of steps under Applied Steps on the right-hand side of the Power Query interface. A good tip is to rename these steps based on what they do (e.g. Change Negative Values or Swap Dates). This will help when you want to copy the steps for use in the full data set.
You can see all of this in the Advanced Editor window (see below), from which it can be copied and pasted out and saved for future use.
If you click on each step you will see the corresponding M code just above your data table.
Advanced Editor showing M Code Steps
There is another way that I learned from the Curbal YouTube channel, which is pretty powerful. I won’t repeat all the steps here, but here is a link to the tutorial.
Step 5 – Apply to Full Data Set
Now that you have all your steps and code written, tested and verified, it is a pretty straightforward proposition to apply them to your full data set.
An easy way to add steps is to right-click on a step and select “Insert Step After” (see below). This will then allow you to paste in the M code you saved. One tip is to check the step reference: it will refer to the name of the preceding step, so make sure that name matches in the first instance. For the following steps, it should be easy to use your copied values from the sample data, provided you use the same names for your steps.
Insert Step
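As a quick illustration of the step-reference point: pasted steps chain by name, so the first argument of each pasted step must match the name of the step above it. The step names below are my own renames.

// "Remove Long Trips" picks up from the step named "Swap Dates"
#"Remove Long Trips" = Table.SelectRows(#"Swap Dates", each [trip_duration] < #duration(1, 0, 0, 0))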
Once you have completed all your steps, you are done – all that’s left is to hit Close & Apply.
The “trial and error” approach is removed, which means that when Power BI applies the steps to the 25+ million line items, you can be reasonably confident you will not have to revisit your Power Query. This is important here, as with such large data sets the updates can sometimes take hours to complete, depending on your computer’s processing capabilities.
You are then free to move on to the next part of the challenge and concentrate on creating your DAX calculations and a nice neat dashboard.
If I have time to finish my dashboard on this one, I will add it on a future post. As always, any comments or queries, please let me know in the comments below!