The hardest parts of data science

Contrary to common belief, the hardest part of data science isn’t building an accurate model or obtaining good, clean data. It is much harder to define feasible problems and come up with reasonable ways of measuring solutions. This post discusses some examples of these issues and how they can be addressed.

The not-so-hard parts

Before discussing the hardest parts of data science, it’s worth quickly addressing the two main contenders: model fitting and data collection/cleaning.

Model fitting is seen by some as particularly hard, or as real data science. This belief is fuelled in part by the success of Kaggle, which calls itself the home of data science. Most Kaggle competitions are focused on model fitting: participants are given a well-defined problem, a dataset, and a measure to optimise, and they compete to produce the most accurate model. Coupling Kaggle’s excellent marketing with its competition setup leads many people to believe that data science is all about fitting models. In reality, building reasonably accurate models is not that hard, because many model-building phases can easily be automated. Indeed, many companies offer model fitting as a service (e.g., Microsoft, Amazon, and Google). Even Ben Hamner, CTO of Kaggle, has said that he is “surprised at the number of ‘black box machine learning in the cloud’ services emerging: model fitting is easy. Problem definition and data collection are not.”

Ben Hamner tweet on black box ML in the cloud
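To make this concrete, here’s a minimal sketch of how little code the model-fitting step can take once the problem, dataset, and metric are already fixed. It uses scikit-learn’s bundled breast cancer dataset and accuracy, both chosen purely for illustration:

```python
# Once the problem, data, and metric are fixed, fitting several competitive
# models is a short loop. The dataset and metric here are stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "random forest": RandomForestClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")
```

The hard questions (why accuracy? why this dataset? what decision will the model drive?) are all settled before the first line of this code is written.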

Data collection/cleaning is the essential part that everyone loves to hate. DJ Patil (US Chief Data Scientist) is quoted as saying that “the hardest part of data science is getting good, clean data. Cleaning data is often 80% of the work.” While I agree that collecting and cleaning data can be a lot of work, I don’t think of this part as particularly hard. It’s definitely important and may require careful planning, but in many cases it just isn’t very challenging. In addition, it is often the case that the data is already given, or is collected using previously developed methods.

Problem definition is hard

There are many reasons why problem definition can be hard. It is sometimes due to stakeholders who don’t know what they want, and expect data scientists to solve all their data problems (either real or imagined). This type of situation is summarised by the following Dilbert strip. It is best handled by cleverly managing stakeholder expectations, while steering them towards better-defined problems.

Dilbert big data

Well-defined problems are great, for the obvious reason that they can actually be addressed. Examples of such problems include:

  • Build a model to predict the sales of a marketing campaign
  • Create a system that runs campaigns that automatically adapt to customer feedback
  • Identify key objects in images
  • Improve click-through rates on search engine results, ads, or any other element
  • Detect whale calls from underwater recordings to prevent collisions

Often, it can be hard to get to the stage where the problem is agreed on, because this requires dealing with people who only have a fuzzy idea of what can be done with data science. Dilbertian situations aside, these people often have real problems that they care about, so exploring the core issues with them is time well-spent.

Solution measurement is often harder than problem definition

Many problems that actually matter have solutions that are really hard to measure. For example, improving the well-being of the population (e.g., a company’s customers or a country’s citizens) is an overarching problem that arises in many situations. However, this problem gives rise to the hard question of how well-being can be measured and aggregated. The following paragraphs discuss issues that occur in solution measurement, often making it the hardest part of data science.

Ideally, we would always be able to run randomised controlled trials to measure treatment effects. However, the reality is that experimental data is often censored, there are many constraints on running experiments (ethics, practicality, budget, etc.), and confounding factors may make it impossible to identify the true causal impact of interventions. These issues seriously influence many aspects of our lives. I’ve written a post on how they manifest themselves in research on the connection between nutrition and our health. Here, I’ll discuss two other major examples: the health effects of smoking, and anthropogenic climate change.

While smoking and anthropogenic climate change may seem unrelated, they actually have a lot in common. In both cases it is hard (or impossible) to perform experiments to determine causality, and in both cases this fact has been used to mislead the public by parties with commercial and ideological interests. In the case of smoking, due to ethical reasons, one can’t perform an experiment where a random control group is forced not to smoke, while a treatment group is forced to smoke. Further, since it can take many years for smoking-caused diseases to develop, it’d take a long time to obtain the results of such an experiment. Tobacco companies have exploited this fact for years, claiming that there may be some genetic factor that causes both smoking and a higher susceptibility to smoking-related diseases. Fortunately, we live in a world where these claims have been widely discredited, and it is now clear to most people that smoking is harmful.

However, similar doubt-casting techniques are used by polluters and their supporters in the debate on anthropogenic climate change. While no serious climate scientist doubts the fact that human activities are causing climate change, this can’t be proved through experimentation on another Earth. In both cases, the answers should be clear when looking at the evidence and the mechanisms at play without an ideological bias. It doesn’t take a scientist to figure out that pumping your lungs full of smoke on a regular basis is likely to be harmful, as is pumping the atmosphere full of greenhouse gases that have been sequestered for millions of years. However, as said by Upton Sinclair, “it is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

Assuming that we have addressed the issues raised so far, there is the matter of choosing a measure or metric of success. How do we know that our solution works well? A common approach is to choose a single metric to focus on, such as increasing conversion rates. However, all metrics have their flaws, and there are quite a few problems with selecting metrics and maintaining them over time.

First, focusing on a single metric can be harmful, because no metric is perfect. A classic example of this issue is the focus on growing the economy, as measured by gross domestic product (GDP). The article What is up with the GDP? by Frank Shostak summarises some of the problems with GDP:

The GDP framework cannot tell us whether final goods and services that were produced during a particular period of time are a reflection of real wealth expansion, or a reflection of capital consumption.

For instance, if a government embarks on the building of a pyramid, which adds absolutely nothing to the well-being of individuals, the GDP framework will regard this as economic growth. In reality, however, the building of the pyramid will divert real funding from wealth-generating activities, thereby stifling the production of wealth.

[…]

The whole idea of GDP gives the impression that there is such a thing as the national output. In the real world, however, wealth is produced by someone and belongs to somebody. In other words, goods and services are not produced in totality and supervised by one supreme leader. This in turn means that the entire concept of GDP is devoid of any basis in reality. It is an empty concept.

Shostak’s criticism comes from a right-wing viewpoint – his argument is that the GDP is used as an excuse for unnecessary government intervention in the market. However, the focus on GDP growth is also heavily criticised by the left because it ignores environmental effects and inequalities in the distribution of wealth. It is a bit odd that GDP growth is still considered a worthwhile goal by many people, given that it can easily be skewed by a few powerful individuals who choose to build unnecessary pyramids (though perhaps this is the real reason why the GDP persists – wealthy individuals have an interest in keeping it this way).

Even if we decide to use multiple metrics to evaluate our solution, our troubles aren’t over yet. Using multiple metrics often means that there are trade-offs between the different metrics. For example, with the precision and recall measures that are commonly used to evaluate the performance of search engines, it is rare to be able to increase both precision and recall at the same time. Precision is the percentage of relevant items out of those that have been returned, while recall is the percentage of relevant items that have been returned out of the overall number of relevant items. Hence, it is easy to artificially increase recall to 100% by always returning all the items in the database, but this would mean settling for near-zero precision. Similarly, one can increase precision by always returning a single item that the algorithm is very confident about, but this means that recall would suffer. Ultimately, the best balance between precision and recall depends on the application.
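Here is a small sketch of that trade-off, using made-up relevance labels for a hypothetical ten-item database; the numbers are purely illustrative:

```python
# Precision: share of returned items that are relevant.
# Recall: share of relevant items that are returned.
def precision_recall(returned, relevant):
    true_positives = len(returned & relevant)
    precision = true_positives / len(returned) if returned else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

all_items = set(range(10))   # the whole (toy) database
relevant = {0, 3, 7}         # items that are actually relevant

# Return everything: perfect recall, poor precision.
print(precision_recall(all_items, relevant))  # (0.3, 1.0)

# Return only one item we're very confident about: perfect precision, poor recall.
print(precision_recall({3}, relevant))        # (1.0, 0.33...)
```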

Another issue with choosing metrics is the impossibility of reliably evaluating our choices. This is summarised well by Scott Berkun in his book The Year Without Pants:

All metrics create temptations. Even with great intentions and smart minds, data runs you faster and faster into a stupid self-destructive circle. Data can’t decide things for you. It can help you see things more clearly if captured carefully, but that’s not the same as deciding. Just as there is an advice paradox, there is a data paradox: no matter how much data you have, you still depend on your intuition for deciding how to interpret and then apply the data.

Put another way, there is no good KPI for measuring KPIs. There are no good metrics for evaluating metrics (or for evaluating metrics for evaluating metrics for evaluating metrics, and on it goes).

OK, so we’ve picked some flawed measures that we can’t really evaluate, and we’ve accepted the imperfections of the evaluation process. Are we done yet? No. There’s still the small matter of Goodhart’s Law, which states that “when a measure becomes a target, it ceases to be a good measure.” This happens because people tend to manipulate results and game the system (not necessarily maliciously) in order to hit measured goals. However, even without manipulation and gaming, we often deal with moving targets. Just because the measure we’ve chosen is suitable today doesn’t mean it will still be relevant in a few months or years. For example, in the 1990s, the number of page views was a good measure of interaction with websites, but nowadays it is a pretty weak measure because many websites are single-page applications. Reality changes, and so should our problems, solutions, measures, and goals.

Embracing ambiguity and uncertainty

Personally, I find the complexities of measurement and problem definition quite interesting. However, many people aren’t that interested in this stuff – they just want working solutions and simple stories. As demonstrated by the examples throughout this article, over-simplification of complicated matters is a pervasive issue that goes beyond what’s commonly considered “data science”. This is why storytelling is seen as a key skill that data scientists should possess. I believe it’s also important to maintain one’s integrity and not just make up stories that people would buy, but it’d be naive to assume that this never happens. Either way, good data scientists embrace uncertainty and ambiguity, but can still tell a simple story if needed.

Note: The ideas in this post were first presented at The Sydney Data Science Breakfast Meetup Group. The slides for that talk are available here.

Public comments are closed, but I love hearing from readers. Feel free to contact me with your thoughts.

Excellent excellent common sense article which seems to be very uncommon nowadays in a hype-filled world. Thanks for reminding us that accurate problem description should trump everything else!

Thank you for a great article. Yes, well defined problems and well defined performance evaluation are keys to designing any data driven model.

I also found that sometimes we have the question we want to pursue, but getting to an answer is not straightforward. For instance, I’m trying to find affinity between food ingredients using only data analytics. One may think that this problem is trivial. In fact, to find this, one has to totally rethink how to represent the data (having ingredients in a table or a dataset produced nothing). Yes, finding affinity between 2 ingredients is trivial, but when the number grows, one has to change the setting. In my case, I had to think of ingredients as part of a complete network, where the network is a recipe. It was then, and only then, that I was able to find affinity between many ingredients.

Yes, many good points here, thanks for this. There is even another difficulty apart from problem definition and solution measurement: the semantics of the data itself. Are the definitions real (referring to other concepts) or nominal (“a cheeseburger is a burger with cheese”)? Scope and context can easily be lost, and can only be put back by a human being taking a decision; no amount of empirical modelling can re-discover this. Also, the precision and accuracy of the data may be unknown and/or insufficient to solve the problem posed. If you have detected these issues, sometimes you can re-formulate the problem, but typically it’s not clear from the column headings alone (if you even have these). Even worse, the definitions may be incoherent or nonsensical: e.g., in classical econometric modelling the definition of a rational agent entails that the agent have knowledge of the future!
Thanks Andrew! I agree that often what you can do is very limited by the data. I’ve also encountered cases where I had to infer meaning from cryptic column names. In many cases the small arbitrary decisions that we make along the way can have a major influence on the final results!

Is this article a Poe? The amount of muddled priors throughout it is disturbing. The word “sophistry” keeps leaping to mind. E.g.:

> For instance, if a government embarks on the building of a pyramid […]

s/“a government”/Paris/g
s/pyramid/“Eiffel Tower”/g

How persuasive is Shostak now? Let’s just ignore that, generally speaking, pyramids were for the glorification of a ruling class which elicited no guidance from its populace.

> […] no matter how much data you have, you still depend on your intuition for deciding how to interpret and then apply the data.

No, I draw conclusions from the data, ponder the evidence, challenge the null hypothesis & take a reasoned position based on best available evidence, accordingly. Mere superstitious nonsense has no part of it. Has Berkun ever heard of the tautology fallacy? Hume’s Fork? ISO/ANSI? I do so detest the implied anti-intellectualism of that block quote.

Thank you for your comment, Arthur.

I don’t think that replacing “a government” with Paris and pyramid with “Eiffel Tower” makes Shostak’s argument any less convincing. The point still stands that the GDP measure is flawed because it can easily be manipulated through activities that don’t contribute positively to society. Do you think that GDP growth is worth focusing on?

As to the Berkun quote, I suggest you read more of his writings before dismissing it. He is anything but superstitious or anti-intellectual. Drawing conclusions from data, pondering the evidence, challenging the null hypothesis, and taking a reasoned position based on the best available evidence doesn’t contradict depending on your intuition for deciding how to interpret and then apply the data. Intuition is often based on experience and data that hasn’t been formally captured. For example, in Bayesian modelling you’re free to use your intuition when setting priors, but the weight of these priors becomes smaller as evidence accumulates, as the sketch below illustrates.
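Here is a minimal, made-up sketch of that last point (a Beta-Binomial model of some conversion rate, with all numbers invented for the example): the prior encodes the intuition, and its influence shrinks as observations accumulate.

```python
# A Beta(30, 70) prior encodes a strong intuition that the rate is around 30%,
# but the posterior mean is pulled towards whatever the data says as the
# number of observations grows. All numbers here are made up.
prior_alpha, prior_beta = 30, 70

def posterior_mean(successes, trials):
    # Beta-Binomial conjugacy: posterior mean = (a + k) / (a + b + n)
    return (prior_alpha + successes) / (prior_alpha + prior_beta + trials)

true_rate = 0.10  # what the (hypothetical) data actually reflects
for trials in (0, 10, 100, 1000, 10000):
    successes = round(true_rate * trials)
    print(f"after {trials:>5} observations: posterior mean = "
          f"{posterior_mean(successes, trials):.3f}")
```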

Thank you Yanir! I found this post extremely enlightening. I was starting to question to what extent the value of a data scientist would be affected by the emergence of automated data cleaning and model building tools. It makes far more sense that the true value of the role is in the experimental design as opposed to applying algorithms.