/ Article

Why Data is the Hottest Commodity in Business Today

May 26, 2020 · 12 min read

Jumping into the world of data science has been a big leap for me over the past year. I’ve felt like the humble tortoise watching the CrossFit-trained hare sprint past me. And with each passing lap, the hare is growing and advancing in form and function. But from the vantage point of my steady pace, I’ve noticed a fundamental flaw in data science practices today — a lack of user-centred design. This article is a meaty 2,400 words, outlining some basics, a few insights from leaders in our network and my perspective on how human-centred principles can improve organizations’ efforts to derive meaning from data. So grab a cup of coffee (or a beer), sit back, and enjoy this long read.

The term “Big Data” has been bounced around for some time now. I first heard it at an eConsultancy conference in New York in 2013. What’s interesting is that the challenges the speaker debated then are still present today. As she professed, data is a bunch of meaningless numbers until someone makes sense of them. Fast forward to 2020, and Big Data’s relevance, accessibility and governance are still hotly debated in every organization. Factor in artificial intelligence, Cambridge Analytica and Power BI, and everyone’s experience and understanding of what Big Data is and how it can be used are different.

In 2009, Hal Varian, Google’s chief economist and UC Berkeley professor of information sciences, business, and economics, predicted the importance of adapting to technology’s influence on and reconfiguration of different industries:

The ability to take data — to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it — that’s going to be a hugely important skill in the next decades.

Big Data, and the field of data science, is emerging as the hottest commodity in modern business enterprises. Data Scientist is a trendy job title, but there are no universally accepted standards, methods, programs or requirements that govern the role. We’re seeing data products and services sprout up, and major players such as Google, Amazon and Microsoft are investing heavily in building capabilities that enterprise organizations can use for a fee.

How is data science being used?

From our vantage point, we’ve recently watched our clients quickly ramp up their understanding and use of data science within the enterprise. Every company is producing massive amounts of data in every facet of its organization.

Theoretically, with all this data, organizations should be able to learn more about their own work to drive efficiency and optimize business processes. Analyzing sales and market data should lead to improved foresight in deal-making, strengthening negotiating positions and positively affecting margin. We’ve watched data analysis of procurement spend show immediate returns through better supplier management and spend-channel optimization.

But how do organizations make sense of their data in order to learn from it? This is where the accessibility of third-party software, consultants and data scientists comes into play. There are a whole bunch of very smart people with diverse backgrounds in math, computer science, accounting and business management consulting using their skills to find meaning in data.

What are some of the problems?

But overwhelmingly, we’re hearing that there is no standard approach, no Allen key for taking your organization’s data and turning it into gold. Here are a few of the issues.

  1. First, going back to the ambiguity of the data scientist role, there is no advanced university program pumping out ready-made data scientists, although such programs and courses are emerging to meet demand. It’s hard to find people who can blend the cross-functional skills necessary to be good data scientists. Tougher yet, try finding these people locally.
  2. Second, organizations are at very different stages of transformation. Data is siloed and, in some cases, inaccessible due to the age and diversity of business-critical systems that are the backbones of most organizations. The maturity of your company’s data analysis (more about this in a minute) affects your ability to derive meaning from it. The old saying, “garbage in, garbage out,” perfectly describes the problem many organizations face in the early stages of business transformation.
  3. Third, interpretation is a seemingly impossible activity to align on. Anyone with access to Excel, Power BI or other charting tools has the ability to plug in numbers and derive meaning. Add the typical breadth and depth of departments and teams, and insight comes to depend on the person inputting the data and their point of view within the organization. Centralizing data is hard enough; establishing trust in a single source for interpretation is very difficult. How does an organization establish data integrity and security through centralization while also allowing for personalization and customization? If you know, tell me; I would love to learn for a project we’re working on.

These challenges come directly from leaders in various organizations I’ve spoken with and from research I’ve done over the past couple of years. Moving beyond the challenges (I’ve only mentioned a few; there are many more) and into practical advice, I have some insight to share.

Data maturity

A common theme I’ve picked up on is the concept of data maturity, and it happens to be where most organizations start their business transformation journeys. Getting to a set of core or master data is a massive undertaking — but it’s where most successful transformations begin. Once the core data is established, organizations can begin to adapt. This goes for any organization, big or small, as we’ve seen in our own transformation over the past decade.

Data maturity follows these three stages:


  1. Data acquisition - Sourcing data is step one. Finding out where data exists and identifying the business-critical systems your organization uses should be your first activity. In our case, back in 2012, it was all over the place. We had a buggy, proprietary in-house time-tracking system, Excel-based accounting and sticky-note sales data. Modernizing these tools was critical to our company’s maturity (grab a beer with Tony for the full story), but the transition moved us to data-driven management that has scaled over the years.

    Once a comprehensive list of data sources is documented, obtaining the data is next. This part is what makes data acquisition so tricky. Over the years, systems were added by IT, Marketing, Sales, Operations and so on, and few organizations had the foresight 10 to 20 years ago to set up systems with data portability in mind. On top of that, the emergence of SaaS and web tools means that some companies might not even own the data they produce. Ask any IT leader today: this is something they take very seriously when procuring software.

  2. Aggregation - Funneling all of that acquired data into a single source is next. Moving everything into a data warehouse or data lake is common practice, and every major tech company, from Google and Amazon to IBM and Microsoft, offers services for data aggregation. This is the stage where we find many zu clients. We’re exploring Google BigQuery ourselves and starting to automate the transfer of data from the various systems we use; a rough sketch of what one of those automated loads might look like follows this list. I’ll let our team comment further on that process in a future article.

    This step is important, and becoming a best practice, because every organization should own its data. Getting data out of the systems that run your business and into a single source is critical for security, portability and flexibility. It’s also the step that leads to interpretation and visualization—how meaning can be derived from your data.

  3. Interpretation - The third stage of data maturity is interpretation, where business value increases exponentially. Having clean data in a trusted, central source is the foundation for developing meaning. It opens the door to leveraging artificial intelligence for predictive and even prescriptive modelling. It allows you to create dashboards and high-touch visualizations that can align broad groups of stakeholders around key metrics and themes; a second sketch after this list shows what a first shared query might look like. It’s the area most of us read about, but few are able to achieve it without going through the first two stages of data maturity.
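As promised above, here’s a minimal sketch of the kind of automated load we’re starting to experiment with for stage two, using the google-cloud-bigquery Python client. The project, bucket path and table names are hypothetical placeholders, and a real pipeline would add scheduling, monitoring and schema management on top:

```python
# A minimal sketch of an automated BigQuery load (stage two: aggregation).
# All project, bucket and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="acme-analytics")  # hypothetical project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,                 # skip the CSV header row
    autodetect=True,                     # let BigQuery infer the schema
    write_disposition="WRITE_TRUNCATE",  # replace the previous snapshot
)

# Each source system drops a daily export into Cloud Storage; this job
# pulls one of those files into a central warehouse table.
load_job = client.load_table_from_uri(
    "gs://acme-exports/crm/sales_latest.csv",  # hypothetical export path
    "acme-analytics.warehouse.crm_sales",      # hypothetical destination
    job_config=job_config,
)
load_job.result()  # block until the load completes; raises on failure

table = client.get_table("acme-analytics.warehouse.crm_sales")
print(f"Loaded {table.num_rows} rows into {table.full_table_id}")
```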
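And a hedged illustration of stage three: once data lands in that central table, interpretation can start from one shared, reproducible query instead of competing spreadsheet extracts. The table and column names (order_date, region, amount) are assumptions carried over from the sketch above:

```python
# A minimal sketch of a first interpretation step: quarter-over-quarter
# revenue by region, one shared starting point for a question like
# "why did our sales drop last quarter?" Table and columns are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="acme-analytics")

sql = """
    SELECT
      DATE_TRUNC(order_date, QUARTER) AS quarter,
      region,
      SUM(amount) AS revenue
    FROM `acme-analytics.warehouse.crm_sales`
    GROUP BY quarter, region
    ORDER BY quarter, region
"""

for row in client.query(sql).result():  # run the query and iterate rows
    print(row["quarter"], row["region"], row["revenue"])
```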

Now, let’s assume your organization has achieved a high level of data maturity. A lot of companies have; pat yourself on the back if you’re one of them. Most consumer-driven organizations and tech companies are at this stage. But challenges in interpreting data properly and finding actionable insight still emerge.

This is where I believe User-Centred Design and Design Thinking methods come into play.

How can human-centred design help?

As I’ve noted, data analysis projects begin by cultivating swaths of data and combing through them to derive meaning. This is where the data scientist is paid big bucks to find patterns and make sense of numbers. “Why did our sales drop last quarter?” Surely the answer must be present somewhere in all the sales figures, customer data and market data we aggregate every day. Well, yes and no.

While researching, I came across this quote that perfectly encapsulates the challenge of using Big Data to answer business questions:

As the available quantity, quality and variety of data has increased, effectively eliminating the need to start a research process with the limitations of the data, the approach to framing the data science project has not evolved.

With so much data at our disposal, how do we answer critical business questions? The answer lies in how we frame the problem. This is where Design Thinking methods provide an advantage. One such method, the Google Design Sprint, gathers the right people, problems, insights and materials to understand, identify, ideate, prototype and test solutions in one week. It’s designed to be quick, hence the term ‘Sprint’. To run an effective sprint, though, a significant amount of time should be dedicated to problem framing. You must slow down to move fast.

Data science would benefit from applying the same logic. Properly framing the problem to ensure you’re asking the right question is a critical element that is largely overlooked. But how do we reframe the problem?

Problem framing


Problem framing is the process of asking business questions through a human/discovery lens. Data science projects often begin within the closed system of data that already exists, so they may not take into account the limitations of the existing data sets. So back to our question: “why did our sales drop last quarter?” It’s a valid business question, but it may not be the right one. If the question is business-critical, teams should have the latitude to explore a number of potential answers, but they should begin with problem framing.

While at the Google Sprint Conference in October, Tricia Wang of Sudden Compass provided this example of reframing business questions as human/discovery questions:

[Table: examples of business questions reframed as human/discovery questions]

The goal is to align stakeholders around the right question(s) to answer. With the right question, Design Thinking methods can be applied to find answers and arrive at conclusions. As our main priority in reaching data maturity is to derive meaning, we need to combine Big Data with Thick Data.

Tricia Wang promotes the idea that Thick Data brings businesses and customers closer together to achieve a greater understanding of problems. The Big Data approach to answering our sales question would highlight market trends, customer purchase behaviour and leading and lagging product SKUs, while the Thick Data approach would swing the other way. Who are our customers/users? What are their motivations, frustrations, needs, wants and pain points with our current sales experience? What questions can we ask our customers to either validate or invalidate assumptions in our sales data? Thick Data is used to develop understanding and empathy.

Tricia compared the two like this:

[Table: Big Data vs. Thick Data compared]

Design Thinking provides us with the vehicle to cultivate Thick Data, combine it with Big Data and answer our business questions by advancing possible solutions. Our spin on the classic Design Thinking framework goes like this:


  1. Do Some Research - Customer/stakeholder surveys, interviews, testing the current state
  2. Identify Insights - Group sharing, affinity mapping, How Might We statements
  3. Come Up With Some Ideas - Rapid ideation, grouping, aligning
  4. Try Ideas Out - Prototype with sketches, low-fidelity objects, workflows
  5. Get Some Feedback - Test prototypes with users/customers/stakeholders

Through this lens, our question about declining sales will be answered with customers. Applying Design Thinking to data projects enables us to validate or invalidate Big Data assumptions, come up with possible solutions to problems and get feedback quickly. These solutions may be new products, a reimagined sales experience, new marketing materials or key messages. And placing customers at the centre of this process enables leadership to make insight-driven decisions: decisions that can be made confidently, validated, tested and adjusted in a much more rapid sequence than waiting to analyze quarterly earnings. The idea here is that all data, including customer feedback, should be used in a tangible way to arrive at conclusions.

Conclusion

Data is such a hot topic right now, and I’ve only scratched the surface. In writing this novel of an article, I’ve been able to reflect on how we at zu can tie human-centred design principles to the process of using data to derive meaning. Our journey is just beginning, and it’s exciting. I/we will continue to learn by engaging with others and hearing about their journeys. So please, if you have any insights from your own experiences, transformations and challenges, share them with me. And if you’re still with me, congratulations: you must have bought Howard Berg’s speed-reading guide in the ’90s.

/ Author

zu Crew

On behalf of the team