by Bill Franks, Thomas H. Davenport, Robert Morison, Dec 10, 2018



Each year, the International Institute for Analytics takes time to focus on the latest analytics trends and the most pressing analytics challenges currently facing organizations. We gather the basis for our predictions from our day-to-day work supporting and advising analytics leaders and programs. Our insights arise from the breadth of expertise and cross-industry perspectives we encounter every day from our clients, partners, and members of the IIA expert network.

This is our 9th annual look forward into the upcoming year, and the annual Predictions and Priorities research brief and its associated webinar have become some of IIA’s most popular content. As in past years, we have augmented our predictions with specific priorities for leaders to focus on as they address each prediction. Each priority provides specific guidance on how to best prepare for, and adapt to, its corresponding prediction.

IIA’s Predictions and Priorities for 2019


The ethical use of analytics has always been a concern. This is especially true in areas like healthcare and finance, where strict guidelines are in place for how analytics should be used for purposes such as issuing credit. Detailed data gathering on consumers, increasingly sophisticated algorithms, and the rise of AI all mean that ethics is coming to the forefront.

While it is absolutely true that unethical activities have been undertaken with analytics in the past, a human being was required to guide the analysis in a bad direction. With the rise of opaque artificial intelligence (AI) processes, analytical models can go down an unethical path without human guidance. Indeed, it may not even be possible for humans to discern that an algorithm is doing something unethical when making a loan decision. A variable or feature that is a proxy for race, for example, may not be identifiable in a model.

Examples already abound of ethical problems arising from AI. Whether an AI model is accused of being racist, sexist, or just generally unpleasant, there is clearly a need for attention to be paid to ethics. The ethical issues that the analytics and AI communities must address are broad and varied. For example, with the rise of realistic chatbots, we must decide whether humans deserve to be alerted when they are talking to a bot instead of a person. In 2019, ethics in analytics will finally start getting the attention it deserves.


There is a distinct lack of sufficient legal, regulatory, and social frameworks related to analytics and AI ethics today. Therefore, it is incumbent upon organizations to be proactive in ensuring that their analytics and AI models are being built and utilized in an ethical way. Analytics and data science teams should take the lead in making sure that ethics is addressed for every analytics process.

A useful guideline might be to mimic the ethical advisory boards (sometimes called “Institutional Review Boards”) utilized for university or government research. Any project involving humans or animals must be approved by a review board that validates that the proposed project is ethical and that subjects are treated appropriately. Analytics organizations should similarly set up an ethics review board to vet analytics and AI projects.

In addition to up-front analysis of the input data and analytical plan itself, there should also be back-end validation of the ethics of analytics and AI. This includes validation that a model hasn’t learned to use data in an inappropriate way over time. It also includes validation that users aren’t applying the results of an analysis in ways or contexts to which the results should not be applied.
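One widely used back-end check of this kind is the Population Stability Index (PSI), which compares the distribution of a model's inputs or scores at training time against what the model sees in production, catching silent drift in how the model uses data. The bucket shares below are invented for illustration.

```python
import math

def psi(expected_pct, actual_pct):
    """Population Stability Index between two bucketed distributions
    (each a list of bucket shares summing to 1). Common rule of thumb:
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_pct, actual_pct))

# Hypothetical score-bucket shares at training time vs. in production.
train_dist = [0.25, 0.25, 0.25, 0.25]
live_dist = [0.10, 0.20, 0.30, 0.40]

print(round(psi(train_dist, live_dist), 3))  # ~0.228: moderate shift, worth review
```

A scheduled job running a check like this against every production model is a lightweight way to operationalize the back-end validation described above.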

Ideally, organizations will develop written ethical guidelines that also outline processes for validation and enforcement. As highly sensitive data proliferates, and more and more models are built that impact people’s lives every day, it is critical to make ethics a central focus in 2019.


At IIA’s spring symposium and Analytics Leadership Consortium events in Santa Clara, we heard perspectives from several venture capital (VC) firms about where they are looking to invest within the analytics space. Beyond the unsurprising uptick in AI investments, all of the VCs referenced a new focus on data. As outlined in an IIA blog post, many VCs think that unique data can differentiate a firm as much as, or more than, analytics today.

Less than a decade ago, many companies had substantially the same data. For example, all retailers had the same point-of-sale data and all phone companies had the same call detail records. As a result, differentiation came from doing better analytics against that common data. Also, access to the algorithms and processing power for those analytics was difficult and expensive. Today, per IIA’s 2018 Predictions & Priorities report, algorithms are ubiquitous and processing is cheap.

The point the VCs made is that replicating even complex analytics against a given set of data is achievable with some investment. On the other hand, unique data can be much more difficult and costly to replicate. As a result, it can require more investment to replicate a data source than to replicate analytics on top of that data source. Hence the focus on data.

One example the VCs mentioned involves training data for self-driving vehicles. Vendors are spending millions to create and sell robust training data. Once contracts are in place, new competitors will need to create data from scratch and then displace incumbent providers. Such training data is profitable and tough to compete against.


Given the increased value being placed upon novel data, organizations should strive in 2019 to take stock of the data they’re creating or capturing today. In addition, organizations should investigate what other data sources they might collect that would provide a competitive advantage. Firms should keep employing novel analytics, of course, but they should also try to develop additional unique data.

Once that unique data is identified, consider ways to monetize it. Recent partnerships, investments, and acquisitions have been driven by the desire to gain access to a unique data source. The value of novel data isn’t just in how it can help an organization; it can also be translated into real dollars.

Most importantly, recognize that it is very hard in today’s world to stay far ahead purely by “out-analyzing” the same data that the competition is also analyzing. The ideal scenario, of course, is to combine novel data with novel analytics that make use of that data. Such efforts amplify value and competitive advantage beyond what data or analytics can provide individually. Organizations that figure out how to do this best will enjoy a double advantage.


As happens with any technology, the power of AI will certainly be used for evil as well as good. As discussed in a recent IIA blog, the ability to produce totally convincing but fake video, audio, and signatures is already here. These capabilities have already fueled an arms race in fraud and cybersecurity, and that race will only accelerate in 2019.

Many people are unaware that hackers and pornography providers actually drove a lot of Internet innovation. Why? Because they wanted to find ways to push the technology to its limits. Similarly, fraudsters are driving innovations in fraud and cyberattacks, which in turn lead to innovations in defenses against those attacks.

While humans may soon be unable to tell that a video is fake, an AI process might still be able to. For example, fake videos may have fewer eye blinks than real videos. Edges in fake pictures may be a little too smooth compared to real pictures. For every new innovation by fraudsters, a new defensive innovation will follow.
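The blink-rate cue, which early deepfake-detection research documented, can be illustrated with a toy heuristic: humans typically blink roughly 15–20 times per minute, so a clip with far fewer detected blinks is worth a closer look. The function, threshold, and timestamps below are purely hypothetical; production detectors are trained models, not hand-set rules.

```python
def blink_rate_suspicious(blink_timestamps, duration_s, min_blinks_per_min=8):
    """Flag a clip whose detected blink rate falls well below the human
    baseline of ~15-20 blinks per minute. Illustrative heuristic only."""
    blinks_per_min = len(blink_timestamps) / (duration_s / 60)
    return blinks_per_min < min_blinks_per_min

# A 60-second clip with only 2 detected blinks would be flagged.
print(blink_rate_suspicious([12.4, 41.0], 60))
```

As the arms-race framing suggests, any single hand-picked cue like this eventually gets engineered around, which is why defenses must keep evolving too.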

It is entirely possible that we’ll soon have a political scandal that destroys a career or campaign in which it is later proven that the damning video or audio was actually a fake. There will be numerous cases of money being stolen after a person’s voice has confirmed his or her identity over the phone. This arms race cannot be ignored as it is being waged around us. We need to keep up with both sides and adopt defensive innovations as fast as possible.


Large organizations already have dedicated security teams watching for network compromises or breaches. Moving forward, it will be impossible to have effective cybersecurity without including sophisticated AI processes to identify and flag problems. It is also wise to use analytics and AI to constantly test one’s own networks and physical security measures to ensure that they are solid.

On the flip side, organizations should consider how dangerous newly discovered attack techniques might be before widely releasing their details. Work should be done to identify weaknesses and potential attack vectors, and to mitigate them as much as possible, before making the weaknesses public. It’s also important not to adopt AI-based cybersecurity measures before they are mature, as immature approaches often generate many false positives.

The value of traditional security measures like signatures is moving close to zero as we enter 2019. The only path forward in the world of deep AI fakes is to fight in the arms race. We can expect security firms to lead this charge, but companies will also have to step up to the fight.


Talent continues to be a major issue in the world of analytics. As AI continues to increase in importance, it will be necessary to scale up a team to drive it. Luckily, organizations can accelerate AI adoption if they take advantage of what IIA has called Analytic Athleticism.

Analytic Athleticism describes the fact that the same underlying skills that drive success for one type of analytics also drive it for others. In other words, the underlying mindset and skillset required for success is very similar across different types of analytics. This also holds true for AI—particularly the machine learning components of it.

The overall process for utilizing most forms of AI is very similar to the process used for other analytic disciplines. Certainly, AI is far closer to other analytics disciplines than it is to any other organizational skill or activity. As a result, organizations are starting to realize that AI is best placed within their analytics organizations.

This same trend has played out in the past. Companies at various points in time may have had distinct big data, forecasting, geospatial, or text analytics teams. Over time these teams have typically been merged together under a parent analytics umbrella. While certain team members may focus on one specific analytical or AI discipline, the organizational structure reflects the similarities. In organizations where AI wasn’t placed with analytics in the first place, the integration of AI into the analytics organization is starting to happen already and will accelerate through 2019.


Not everyone will recognize the logical fit of AI with the broader analytics organization. As a result, analytic leaders should be proactive in making the case for the consolidation of AI within the analytics organization. They should also drive the integration by adopting AI methods at an early stage.

By taking ownership of AI, analytics leaders will help ensure the technology’s successful adoption while also positioning themselves and their organizations well for the future. The required skills, value propositions, and business outcomes are similar enough between AI and traditional analytics to make a merger seem logical to most stakeholders.

One key benefit that an analytics organization brings to the table is the existence of relationships with key business stakeholders. These relationships will be even more crucial for AI since so few business people fully understand it today. It doesn’t make sense to have a distinct AI team building those relationships from scratch. The knowledge of how existing analytics processes work—from strategic analysis to organization-wide deployment—will also be of great benefit to AI initiatives.

Finally, it is undeniable that AI is currently the shiny new object everyone wants to touch. From a political perspective, there is probably no better way to raise the profile of the analytics organization and to get support for additional investment than to lead the AI charge. Politics shouldn’t be a driver of taking ownership of AI, but the increased influence of analytics/AI groups is simply a side benefit of doing the right thing for the right reasons. Just be careful to keep the hype and expectations in check!


With all of the analytics being done today, companies are successfully deploying much of what is created, correct? Wrong! According to the Rexer Data Science Survey, barely 10 to 15% of companies “almost always” deploy results and another 50% only deploy “often.” That leaves 35% to 40% of companies that only occasionally or rarely successfully deploy analytical models. We have encountered some organizations that say their successful deployment rates are less than 10%.

Of course, there is no economic value to an analytical model that isn’t deployed. Given rising investment in analytics, deployment rates must increase or the investments won’t be sustainable. There are clearly too many “science experiments” taking place within analytics groups today. Especially in the AI space, many companies are doing pilots, but very few companies have truly deployed AI at enterprise scale. To be fair, this is partly due to the immature state of AI technology. However, some AI is mature enough to put into production applications, and many traditional analytical models are also being left undeployed.

There are both technological and political barriers that inhibit the deployment of analytics processes. These must be worked through. The year 2019 will see companies increasing pressure on analytics organizations to deploy more analytics and drive more value.

The historical technological barriers to deployment, which often involve integration with existing systems and processes, are largely solvable today. The political and policy barriers are much harder to resolve. This puts a premium on the ability of analytics leaders to work successfully with business leaders to bring about organizational change. Those that can’t will fall behind competitors that are successfully deploying more analytical models and driving more value. The risks of not succeeding in this regard are de-emphasis and de-funding of analytics within the company.


There is no time like 2019 to make deployment of analytics a true enterprise priority. This will involve not just the analytics team, but the broader organization as well. With historical barriers to overcome and a pattern of non-deployment to reverse, purposeful and meaningful action must be taken.

One powerful way to push deployments forward is to set up measures and related compensation structures that will reward those who enable deployments and penalize those who do not. Regardless of the initiative, there is no better way to get people to push through political and bureaucratic barriers than to tie doing so to their paychecks.

It also makes sense to charter a task force to examine the historical barriers to deployment and what the root causes of those barriers really are. The history of past model development and deployment efforts can serve as your research database in this regard. Once patterns are identified, they can be addressed proactively. Put a plan in place to mitigate or remove those barriers. This will require commitment and coordination among the business, IT, and analytics teams to push through the effort.

Organizations that figure out how to enable faster, more efficient deployments of analytics processes will create a virtuous circle of more deployments driving more value. This, in turn, will support more investment.


The tools available to create and execute analytics have evolved massively in recent years. Activities that used to require a lot of custom coding can now be done with just clicks, drags, and drops. Whether it be visual workflows, visualization tools, search-based analytics, or automated modeling tools, creating and executing analytics has never been easier.

In today’s world, people without a deep and formal education in math or statistics can now make use of sophisticated analytics. It’s no longer as important to understand the detailed math behind a logistic regression, for example, as to understand when a logistic regression should be used (which machines can even recommend) and how to interpret its results. Citizen data scientists and business analysts can handle this and are starting to do a lot of it.
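The "interpretation over math" point can be sketched briefly: once a tool has fit a logistic regression, the practical skill is reading its coefficients as odds ratios and turning log-odds into probabilities. The loan-default feature names and coefficient values below are invented for illustration.

```python
import math

# Hypothetical coefficients an automated tool might report for a
# loan-default model. Each coefficient is a change in log-odds,
# so exp(coefficient) is an odds ratio.
coefficients = {"intercept": -3.0, "debt_to_income": 1.2, "years_employed": -0.4}

def predict_default_prob(debt_to_income, years_employed):
    """Convert the model's log-odds into a default probability."""
    log_odds = (coefficients["intercept"]
                + coefficients["debt_to_income"] * debt_to_income
                + coefficients["years_employed"] * years_employed)
    return 1 / (1 + math.exp(-log_odds))

# exp(1.2) ~ 3.32: each one-unit rise in debt-to-income multiplies the
# odds of default by roughly 3.3x, holding employment tenure fixed.
print(round(math.exp(coefficients["debt_to_income"]), 2))
print(round(predict_default_prob(2.0, 5.0), 3))
```

Reading results this way, rather than deriving the estimator, is exactly the level at which citizen data scientists can operate effectively.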

By making use of easy-to-use graphical or search-based interfaces, citizen data scientists are able to create analytics that go well beyond simple spreadsheet formulas. Some are able to generate complex machine learning models with the aid of automated machine learning tools. These types of tools lower the level of coding and technical knowledge required and enable a much broader spectrum of employees to both develop and drive value from analytics and machine learning.

The biggest challenge with this trend is to ensure that guardrails are put in place so that citizen data scientists can increase their productivity while minimizing the risk that an inappropriate process is created due to gaps in their technical knowledge. Some automated analytics and machine learning tools incorporate guardrails, while others rely on the expertise of the user.


Most IT departments have resigned themselves to the fact that people will find a way to access the data they need and to use their tools of choice. When these abilities are fully blocked, talent flees an organization. As a result, many IT teams now focus on monitoring and facilitating access rather than blocking it.

Analytics organizations must come to a similar acceptance that citizen data scientists and business analysts are here to stay. These people will find ways to push the limits of the analytics that they can produce. Fighting the trend is a losing battle, so focus instead on embracing it and making it safe.

Rather than trying to ban, block, or minimize these users, focus on safely enabling them. Put guardrails in place that keep citizen data scientists within bounds and out of trouble. Focus on stopping them from overextending themselves.

This includes creating processes for experts to double check the work of citizen data scientists before a model is broadly used and adopted. It also includes curating the data citizen data scientists have access to so that it is as clean and ready to use as possible. Last, provide formal support mechanisms so that the “citizens” have a way to reach out for help when they need it.

Helping citizen data scientists safely reach their full potential will help analytics drive more value for an organization. A big benefit for the analytics organization is that these users can start to handle many of the common, more basic analytic requests on their own. This will free up the analytics professionals to focus on the bigger, more complex, more valuable problems. That’s where many of them want to focus anyway. This trend also enables analytical professionals and data scientists to spend more time focusing on business understanding and organizational change, as many of the technical tasks they have historically performed are taken over—at least in part—by machines.

About the Authors

Bill Franks

Bill Franks is IIA’s Chief Analytics Officer, where he provides perspective on trends in the analytics and big data space and helps clients understand how IIA can support their efforts and improve analytics performance. His focus is on translating complex analytics into terms that business users can understand and working with organizations to implement their analytics effectively. His work has spanned many industries for companies ranging from Fortune 100 companies to small non-profits.

Franks is the author of the book Taming The Big Data Tidal Wave (John Wiley & Sons, Inc., April, 2012). In the book, he applies his two decades of experience working with clients on large-scale analytics initiatives to outline what it takes to succeed in today’s world of big data and analytics. Franks’ second book The Analytics Revolution (John Wiley & Sons, Inc., September, 2014) lays out how to move beyond using analytics to find important insights in data (both big and small) and into operationalizing those insights at scale to truly impact a business. He is an active speaker who has presented at dozens of events in recent years. His blog, Analytics Matters, addresses the transformation required to make analytics a core component of business decisions.

Franks earned a Bachelor’s degree in Applied Statistics from Virginia Tech and a Master’s degree in Applied Statistics from North Carolina State University. More information is available at www.bill-franks.com.

Tom Davenport

Tom Davenport is the co-founder of IIA. He is the President’s Distinguished Professor of IT and Management at Babson College, and a research fellow at the MIT Center for Digital Business. Tom’s “Competing on Analytics” idea was named by Harvard Business Review as one of the twelve most important management ideas of the past decade and the related article was named one of the ten ‘must read’ articles in HBR’s 75-year history. His most recent book, co-authored with Julia Kirby, is Only Humans Need Apply: Winners and Losers in the Age of Smart Machines.

Robert Morison

Robert Morison serves as IIA’s Lead Faculty member. An accomplished business researcher, writer, discussion leader, and management consultant, he has been leading breakthrough research at the intersection of business, technology, and human asset management for more than 20 years. He is co-author of Analytics At Work: Smarter Decisions, Better Results (Harvard Business Press, 2010), Workforce Crisis: How to Beat the Coming Shortage of Skills And Talent (Harvard Business Press, 2006), and three Harvard Business Review articles, one of which received a McKinsey Award as best article of 2004. He holds an A.B. from Dartmouth College and an M.A. from Boston University.