Why Blockchain And Analytics Don’t Mix Well
By Bill Franks, Nov 09, 2017
The concept of a blockchain has become quite a phenomenon in recent times. It has quickly risen from a relatively obscure idea known mostly within small circles to one being discussed as having the potential to change some of the fundamentals of the world’s economic systems.
I don’t claim to be a blockchain expert, but given that it is a new paradigm for generating and storing data, my mind has naturally drifted toward thinking about how the mechanics and performance of analyzing data within a blockchain environment would be different from how we analyze data within other platforms. My initial thoughts point to some significant challenges.
A System That Isn’t Built For Analytics Isn’t Optimized For Analytics
Let’s start with a historical perspective by examining the early days of data warehousing and 3rd normal form data. Storing data in 3rd normal form has a range of benefits, particularly when it comes to storing massive amounts of data at an enterprise scale. For example, it minimizes data duplication and, therefore, storage costs. However, for building models and executing deeper analytics, we need to denormalize such data, so 3rd normal form adds overhead to our analytic processing. The overhead of starting with 3rd normal form data brings benefits, such as increased data integrity, but the mechanics of preparing the data for any given analysis are more involved.
I see the same theme playing out with blockchain data. Blockchain has a number of advantages because the data is validated by a range of unrelated servers and can’t be changed once logged. But, as with 3rd normal form data, blockchain technology is not aimed first and foremost at enabling complex analytics. Rather, it is aimed at ensuring secure and accurate recording of transactions. It is natural, therefore, that the ability to perform deeper analytics is a secondary consideration today (at best!) to the ability to validate and log transactions.
Some Potential Blockchain “Gotchas” To Think About
First, within a blockchain, the history of each transaction is stored. The history of a given chain of events can therefore be recreated by traversing the entire chain. So, for example, I can go back and determine who held a specific Bitcoin at a specific point in time. The problem is that I must recursively scan the history of that coin to get the answer. This isn’t too bad when I need to validate a single historical record. But what if I want to do some analytics across millions of people? To do that, I must traverse the records for all of the holdings of each person to get back to the point in time of interest. In other words, just getting the balances for each person as of one year ago requires quite a bit of work beyond simply querying a table. This is a time and processing intensive task in the aggregate, and it will constrain what type of analytics can be done within a given timeframe or cost.
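To make the cost concrete, here is a minimal sketch of that replay logic. The `Tx` record and `balances_at` function are hypothetical simplifications, not the real Bitcoin data model; the point is that there is no stored “balance as of last year,” so we must walk the whole history up to the cutoff:

```python
from dataclasses import dataclass

# Hypothetical, simplified transaction record for illustration only --
# an append-only, time-ordered log standing in for a blockchain.
@dataclass
class Tx:
    timestamp: int
    sender: str
    receiver: str
    amount: float

def balances_at(chain, cutoff):
    """Replay every transaction up to `cutoff` to reconstruct balances.

    No balance snapshot is stored anywhere, so this is O(n) in the
    number of transactions -- for every point-in-time question."""
    balances = {}
    for tx in chain:
        if tx.timestamp > cutoff:
            break  # the log is append-only, hence time-ordered
        balances[tx.sender] = balances.get(tx.sender, 0.0) - tx.amount
        balances[tx.receiver] = balances.get(tx.receiver, 0.0) + tx.amount
    return balances

chain = [
    Tx(1, "mint", "alice", 50.0),
    Tx(2, "alice", "bob", 20.0),
    Tx(3, "bob", "carol", 5.0),
]
print(balances_at(chain, cutoff=2))  # balances as of time 2
```

Asking this question for millions of holders means millions of such replays, which is exactly the overhead a conventional balance table would avoid.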
Next, inherent within blockchain is the idea that the entire history of the chain is stored everywhere. This is terrific, but it also means that the size of the data stored will blow up very quickly. As opposed to 3rd normal form data, which minimizes data redundancy, blockchain actually maximizes redundancy since every single node in the system hosts a complete copy of the data. This keeps the chain secure. However, as the data grows, so must the capacity of the individual nodes. At some point, the data will outgrow the capacity of the servers.
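The aggregate footprint of that redundancy is easy to underestimate. The figures below are purely illustrative assumptions, not measurements of any real chain, but they show how full replication multiplies storage:

```python
# Illustrative, assumed numbers only -- not actual figures for any chain.
chain_size_gb = 200      # assumed size of the full ledger
node_count = 10_000      # assumed number of full nodes

# Full replication: every node stores the entire chain, so total
# storage scales with nodes * chain size, whereas a normalized
# warehouse stores each fact roughly once.
total_storage_tb = chain_size_gb * node_count / 1000
print(total_storage_tb)  # 2000.0 TB for a 200 GB ledger
```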
As blockchain data grows, analytics will become an issue even before the data outgrows the servers. The servers are configured to validate transactions, update the blockchain, and store the results. They are not configured to allow complex analysis across a range of time and people. Performance for analytic logic will be highly spotty at best.
Last, the entire point of a blockchain is the historical record. In other environments, we have the ability to build additional views or summaries of data to support our analytics. I don’t believe there is any mechanism for this built into blockchain today. And, per the prior points, even if it existed it likely wouldn’t be very good. What this means is that to support analytics, we’ll need to build separate environments that have the rollups and analytic views we require. And, these views will need to be updated periodically to reflect the latest data.
One big plus of blockchain related to the building of analytic views is that data can’t be edited or deleted. So, a summary table built once on historical data will stay accurate. There is no need to process anything but the latest data when updating our analytic views, unless we require metrics, such as a median across the entire history, that force us to process through all the data again to get accurate information.
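The incremental refresh idea above can be sketched as follows. Everything here is hypothetical: blockchain platforms don’t provide such rollups, so this logic would live in the separate analytics environment described earlier. Because the ledger is append-only, only records newer than the last refresh need to be folded in:

```python
# Sketch of an incremental rollup over append-only ledger data.
# The ledger is a list of (timestamp, day, amount) tuples -- an
# assumed, simplified stand-in for exported blockchain records.

def refresh_daily_totals(summary, ledger, last_processed):
    """Fold only newly appended records into per-day totals.

    Immutability guarantees old records never change, so anything
    at or below the watermark is already correct in the summary."""
    max_seen = last_processed
    for ts, day, amount in ledger:
        if ts <= last_processed:
            continue  # already reflected in the summary
        summary[day] = summary.get(day, 0.0) + amount
        max_seen = max(max_seen, ts)
    return summary, max_seen

ledger = [(1, "2017-11-01", 10.0), (2, "2017-11-01", 5.0), (3, "2017-11-02", 7.0)]
summary, watermark = refresh_daily_totals({}, ledger, last_processed=0)
# Later, only records beyond the watermark get processed.
ledger.append((4, "2017-11-02", 3.0))
summary, watermark = refresh_daily_totals(summary, ledger, watermark)
print(summary)  # {'2017-11-01': 15.0, '2017-11-02': 10.0}
```

A metric like a running total works with this pattern; a full-history median would not, and would force a rescan of all the data on every refresh.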
Bringing Analytics To Blockchain
Over time, more focus will certainly be given to how to effectively use blockchain data for analytics. In the meantime, expect the prior issues and more like them to make the analysis of blockchain data more cumbersome and less efficient than we’d like. As with any new tool or technology, leave plenty of extra time to deal with the “gotchas” that will certainly accompany early efforts at doing analytics against blockchain data.
As I said before, I’m not a blockchain expert and I’m simply providing some initial thoughts I have on its implications for analytics. I welcome comments to help refine the thinking for me and other readers.
Originally published by the International Institute for Analytics
About the author
Bill Franks is IIA’s Chief Analytics Officer, where he provides perspective on trends in the analytics and big data space and helps clients understand how IIA can support their efforts and improve analytics performance. His focus is on translating complex analytics into terms that business users can understand and working with organizations to implement their analytics effectively. His work has spanned many industries for companies ranging from Fortune 100 companies to small non-profits.
Franks is the author of the book Taming The Big Data Tidal Wave (John Wiley & Sons, Inc., April, 2012). In the book, he applies his two decades of experience working with clients on large-scale analytics initiatives to outline what it takes to succeed in today’s world of big data and analytics. Franks’ second book The Analytics Revolution (John Wiley & Sons, Inc., September, 2014) lays out how to move beyond using analytics to find important insights in data (both big and small) and into operationalizing those insights at scale to truly impact a business. He is an active speaker who has presented at dozens of events in recent years. His blog, Analytics Matters, addresses the transformation required to make analytics a core component of business decisions.
Franks earned a Bachelor’s degree in Applied Statistics from Virginia Tech and a Master’s degree in Applied Statistics from North Carolina State University. More information is available at www.bill-franks.com.