

How Leaders Can Get Better At Using Data To Make Decisions

Started by Shahriar Tasjid, September 27, 2018, 08:58:26 AM


Shahriar Tasjid

Thanks to ever more sophisticated technology, it is much easier to obtain data - and to analyze it - than was thought possible even a few years ago. As a result, there is an understandable feeling that decision-making in organizations today is more scientific and soundly based than was the case in the days when hunches, gut feel and experience guided much activity. However, leaders must not be seduced into thinking that basing decisions on data necessarily makes them beyond reproach.

Indeed, in an article in the Fall 2018 issue of Rotman Management, the magazine of the Rotman School of Management at the University of Toronto, Megan MacGarvie and Kristina McElheran argue that "in some instances, data and analytics actually make matters worse." This is because, even with impressively large sets of data and the latest, most effective analytical tools, executives can still fall into various traps, particularly if they take shortcuts in reasoning in an effort to overcome information overload.

In the article, which was previously published in the HBR Guide to Data Analytics Basics for Managers, MacGarvie, associate professor in the markets, public policy and law group at Boston University's Questrom School of Business, and McElheran, assistant professor of strategic management at the Rotman School of Management and a digital fellow at the MIT Initiative on the Digital Economy, point out that academics in a variety of disciplines widely acknowledge that people do not carefully process every piece of information in every decision. Instead, they say, we rely on heuristics: simplified procedures that help us make decisions under uncertainty, or when there is insufficient money or time for extensive analysis. The result is that we think we are making sound decisions when in fact we are making systematic mistakes. On top of this, even with access to data, human brains are prone to certain biases that distort choices, often without our being aware of them.

MacGarvie and McElheran identify "three main cognitive traps that regularly bias decision-making, even when informed by the best data" and offer suggestions for avoiding them.

1. The Confirmation Trap. This is what happens when we pay more attention to findings that align with our existing beliefs and ignore other facts and patterns in the data. Confirmation bias becomes much harder to avoid when individuals are under pressure from bosses and colleagues to produce data that supports a pre-existing view of the world, say MacGarvie and McElheran. Their advice for dealing with the issue? Don't avoid information that does not fit your or your boss's beliefs. Instead, embrace it: specify in advance the data and analytical approaches that will be used in the decision-making, to reduce the temptation to "cherry-pick" findings; actively seek out findings that disprove your beliefs; do not automatically dismiss findings that fall below your starting point for statistical or practical significance; assign several independent teams to analyze the data separately and, if they do not reach similar conclusions, concentrate on the points where they diverge to see whether the differences are due to error, inconsistent methods or bias; and treat your findings as predictions and test them.
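To make the pre-specification idea concrete, here is a minimal sketch in Python of what committing to an analysis plan before looking at results might look like. Everything in it (the metric name, the data, the helper function) is hypothetical, and it assumes SciPy is available; the point is only that the test and the threshold are fixed up front, so the result gets reported whichever way it comes out.

```python
# A minimal sketch of pre-specifying an analysis plan to curb cherry-picking.
# Metric name, data and helper are hypothetical; assumes SciPy is installed.
from dataclasses import dataclass

from scipy import stats


@dataclass(frozen=True)
class AnalysisPlan:
    metric: str   # what will be measured
    test: str     # which statistical test will be run
    alpha: float  # significance threshold, fixed before seeing any results


# Committed to before any data inspection, ideally checked into version control.
PLAN = AnalysisPlan(metric="conversion_rate", test="two-sided t-test", alpha=0.05)


def evaluate(control: list[float], treatment: list[float]) -> dict:
    """Run exactly the pre-registered test and report the result either way."""
    _, p_value = stats.ttest_ind(control, treatment)
    return {
        "metric": PLAN.metric,
        "p_value": round(float(p_value), 4),
        "significant": p_value < PLAN.alpha,  # reported even when it disconfirms
    }


print(evaluate([0.10, 0.12, 0.11, 0.09], [0.13, 0.14, 0.12, 0.15]))
```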

2. The Overconfidence Trap. Senior decision-makers are especially prone to this, say MacGarvie and McElheran, for the simple reason that they tend to assume they were promoted on the strength of past successes, which themselves rested on decisions they made. But overconfidence can also reinforce many other pitfalls of data interpretation. "It can prevent us from questioning our methods, our motivation and the way we communicate our findings to others; and it also makes it easy to under-invest in data analysis in the first place," they write. This can be a particularly difficult problem to crack because, while overconfidence can dissuade us from spending enough time or money on acquiring more information or doing extra analysis, simply acquiring more information can make matters worse by adding to the sense of confidence. As MacGarvie and McElheran say, moving from data to insights "requires quality inputs, skill and sound processes." They stress the importance of processes, offering several procedural tips for escaping this trap. Among them: describe your ideal experiment and then compare it with your actual data to see where the data might be lacking; make devil's advocacy a formal part of the process; keep track of predictions and systematically compare them with what actually happens in order to test their accuracy; and build such checks into the decision-making routine so that persistent biases do not creep back in.
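The prediction-tracking tip needs very little tooling to put into practice. The sketch below (the events, forecasts and outcomes are invented for illustration, not from the article) keeps a simple prediction log and scores it with the Brier score, a standard measure of probabilistic forecast accuracy; a routine like this turns overconfidence into a number you can track rather than a feeling.

```python
# A minimal sketch of a prediction log scored with the Brier score.
# The events, forecasts and outcomes below are invented for illustration.
predictions = [
    # forecast: stated probability the event happens; outcome: 1 if it did, 0 if not
    {"event": "Q3 churn exceeds 5%",       "forecast": 0.8, "outcome": 1},
    {"event": "New feature lifts signups", "forecast": 0.9, "outcome": 0},
    {"event": "Supplier misses deadline",  "forecast": 0.3, "outcome": 0},
]

# Brier score: mean squared error of probabilistic forecasts; lower is better.
# 0.0 is a perfect record, while always answering 0.5 scores 0.25.
brier = sum((p["forecast"] - p["outcome"]) ** 2 for p in predictions) / len(predictions)
print(f"Brier score over {len(predictions)} predictions: {brier:.3f}")
```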

3. The Over-Fitting Trap. This is what happens when a statistical model describes "random noise" rather than the underlying relationship an organization is seeking to capture. As MacGarvie and McElheran put it, "When your model yields surprising or counterintuitive predictions, you may have made an exciting new discovery - or it may be the result of 'over-fitting.'" They quote Nate Silver, the statistician who found fame by correctly predicting the winner in all 50 states in the 2012 U.S. presidential election, as describing this concept as "the most important scientific problem you've never heard of." The problem is that over-fit models look as though they do a very good job of explaining the nuances of the past but struggle to predict the future. To overcome this bias, the authors suggest randomly dividing the data into a training set, on which the model is estimated, and a validation set, used to test the accuracy of the model's predictions; as with the confirmation trap, avoiding cherry-picking of data; looking for relationships that measure important effects tied to clear and logical hypotheses before examining nuances; checking whether a different story can be constructed from the same data; and being wary of the common tendency to see patterns in random data.
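The training/validation split the authors recommend is easy to demonstrate on synthetic data. In the sketch below (which assumes NumPy and scikit-learn are installed), a simple model and a deliberately over-flexible one are fit to the same noisy linear signal; the flexible model scores better on the data it was trained on but worse on the held-out validation set, which is the tell-tale signature of over-fitting.

```python
# A minimal sketch of using a train/validation split to catch over-fitting.
# Synthetic data; assumes NumPy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = 0.5 * X.ravel() + rng.normal(scale=1.0, size=100)  # linear signal plus noise

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for degree in (1, 15):  # a simple model vs. one flexible enough to fit the noise
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree={degree:2d}  train R^2={model.score(X_train, y_train):.2f}"
          f"  validation R^2={model.score(X_val, y_val):.2f}")
# The over-fit model wins on the training set but loses on the validation set.
```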

Source: www.forbes.com