
Blockchain and AI (Artificial Intelligence)

Posted Nov 04, 2017 | 14,819 Views


In the last few years, AI (artificial intelligence) researchers have cracked problems they had worked on for decades, from Go to human-level speech recognition. A major driver is AI's ability to gather mountains of data and learn from them, cutting error rates and raising the bar for success.


In simple words, big data has transformed AI, to an almost unreasonable degree.


Blockchain technology might transform AI too, in its own particular ways. Some applications of blockchains to AI are mundane, such as audit trails on AI models. Others seem almost unreasonable, such as AI DAOs: AI that can own itself. All of these are opportunities, and this article explores them.

Blockchains as Blue Ocean Databases

Before we get to applications, it is worth first reviewing how blockchains differ from traditional big-data distributed databases such as MongoDB.


You can think of blockchains as "blue ocean" databases: rather than fighting the sharks in the bloody red ocean of an existing market, they dive into the blue ocean of uncontested market space. Famous blue ocean examples include the Wii in video game consoles (it sacrificed raw performance but introduced a completely new mode of interaction) and Yellow Tail in wine (it dropped the pretentious specs aimed at wine connoisseurs, making wine accessible to beer drinkers).


By traditional database standards, early blockchains like Bitcoin are poor: low throughput, small capacity, high latency, weak query support, and so on. But in blue-ocean thinking, that is fine, because blockchains introduced three new features: decentralized/shared control, immutability/audit trails, and native assets/exchanges.


People inspired by Bitcoin have been happy to overlook the traditional database-centric shortcomings, because these new benefits have the potential to transform industries and society in entirely new ways.


These three new "blockchain" database characteristics are also potentially attractive for AI applications. However, most real-world AI runs on large volumes of data, such as training on large datasets or high-throughput stream processing.


Therefore, applications of blockchain to AI need blockchain technology with big-data scalability and querying. Emerging technologies like BigchainDB and its public network IPDB aim to deliver exactly that. With them, you no longer have to give up the benefits of traditional big-data databases to get the benefits of blockchains.

Blockchains for AI - An Introduction

To unlock the potential of blockchain tech for AI applications, let's start from the three blockchain benefits.

These benefits lead to the following opportunities for AI practitioners:

Decentralized / Shared Control:

Encourages data sharing, which leads to:

(1) More data, and therefore better models.

(2) Qualitatively new data, and therefore qualitatively new models.

(3) Shared control of AI training data & models.

Native Assets / Exchanges:


(4) Training/testing data & models become intellectual property (IP) assets, which in turn enables decentralized data & model exchanges. It also gives you better control over the upstream usage of your data.

Immutability / Audit Trail:


(5) Provenance on training/testing data & models, which enhances the trustworthiness of both. Data wants reputation too.


(6) Taken together, AI with blockchains unlocks the possibility of AI DAOs (Decentralized Autonomous Organizations): AIs that accumulate wealth and that you can't shut off. Software-as-a-service on steroids.
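To make the immutability/audit-trail idea in (5) concrete, here is a minimal sketch of recording training-data provenance as a hash chain, using only the Python standard library. The record fields (`dataset_id`, `fingerprint`) are illustrative assumptions, not a standard format; a real deployment would anchor such hashes on a blockchain like BigchainDB/IPDB rather than an in-memory list.

```python
import hashlib
import json

def record_hash(record):
    """Deterministic SHA-256 over a canonical JSON encoding of a record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_entry(trail, dataset_id, fingerprint):
    """Append a provenance entry chained to the previous entry's hash."""
    prev = record_hash(trail[-1]) if trail else "0" * 64
    trail.append({"dataset_id": dataset_id,
                  "fingerprint": fingerprint,
                  "prev_hash": prev})

def verify_trail(trail):
    """Recompute every link; tampering with any entry breaks the chain."""
    prev = "0" * 64
    for entry in trail:
        if entry["prev_hash"] != prev:
            return False
        prev = record_hash(entry)
    return True

# Record two versions of a training set, then verify the trail.
trail = []
append_entry(trail, "corpus-v1", hashlib.sha256(b"raw training bytes").hexdigest())
append_entry(trail, "corpus-v2", hashlib.sha256(b"cleaned training bytes").hexdigest())
print(verify_trail(trail))  # True
```

Editing any earlier entry (say, its `dataset_id`) changes that entry's hash and makes `verify_trail` return `False`; a blockchain provides the same tamper-evidence, but enforced by a decentralized network rather than by whoever holds the list.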


There are probably many more ways blockchains can help AI. Vice versa, there are numerous ways AI can help blockchains, including mining blockchain data (e.g. the Silk Road investigation). That's a topic for another discussion.


Many of these opportunities stem from AI's special relationship with data, so it's worth exploring that relationship first. After that, we'll look at the applications of blockchains for AI in more detail.

AI & Data

This section discusses how much of present-day AI leans on copious amounts of data for its impressive results. (This is not always the case, but it is a common theme and worth describing.)

“Ancient” History of AI & Data


AI research in the 90s typically followed this approach:

  • Take a fixed dataset (typically small).
  • Propose an algorithm that improves performance; for instance, a new kernel for a support vector machine classifier that improves area under the curve (AUC).
  • Publish the algorithm at a conference or in a journal. A ten percent relative improvement was about the "minimum publishable unit", as long as the algorithm itself was interesting enough. If you got a 2x-10x improvement, you were in best-paper territory, especially if the algorithm was really fancy.
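As an illustration of that workflow, the sketch below trains a support vector machine with a hypothetical custom kernel and compares its AUC against a standard RBF baseline, assuming scikit-learn is available. The kernel itself is made up for demonstration and carries no claim of actually improving AUC.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def custom_kernel(X, Y, gamma=0.5, p=1.5):
    """A made-up RBF variant: exp(-gamma * ||x - y||^p), exponent p tunable."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.exp(-gamma * d ** p)

# A small synthetic dataset standing in for the era's "fixed dataset".
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, kernel in [("rbf baseline", "rbf"), ("custom kernel", custom_kernel)]:
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.decision_function(X_te))
    print(f"{name}: AUC = {auc:.3f}")
```

The whole 90s game was squeezing a few extra points of AUC out of the model half of this loop while the data half stayed fixed.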


This sounds academic because most AI work at the time was still in academia, though real-world applications existed too. In our experience, it was like this in many subfields of AI, such as neural networks, fuzzy systems, and evolutionary computation, and even in slightly less AI-ish techniques such as nonlinear programming or convex optimization.

Towards Modern AI & Data

In 2001, the world changed: Microsoft researchers Banko and Brill released a paper with remarkable results. They first described how most prior work in natural language processing had used small datasets of less than a million words. On those datasets, old, boring, least-fancy algorithms such as Naive Bayes and Perceptron had error rates around 25%, while fancier, newer memory-based algorithms achieved 19% error. Those are the four leftmost data points on Banko and Brill's plot.


So far, no surprises. But then Banko and Brill showed something remarkable: as you add more data, not just a bit more but orders of magnitude more, and keep the algorithms the same, error rates keep dropping dramatically. By the time the datasets were three orders of magnitude larger, error had fallen below 5%. In many domains, the difference between 18% error and 5% error is decisive, because only the latter is good enough for a real-world application.
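A rough way to reproduce the Banko-and-Brill effect on synthetic data: hold the algorithm fixed (here Gaussian Naive Bayes via scikit-learn, an assumed stand-in for their NLP setup) and grow only the training set. The exact numbers are illustrative; the point is that test error falls as data grows, with no change to the model.

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# One large synthetic pool; the test set stays fixed while training grows.
X, y = make_classification(n_samples=30000, n_features=20, n_informative=5,
                           flip_y=0.05, random_state=0)
X_test, y_test = X[:5000], y[:5000]
X_pool, y_pool = X[5000:], y[5000:]

errors = {}
for n in (100, 1000, 10000):
    clf = GaussianNB().fit(X_pool[:n], y_pool[:n])  # same algorithm every time
    errors[n] = 1.0 - clf.score(X_test, y_test)
    print(f"n={n:>6}: test error = {errors[n]:.3f}")
```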

Modern AI & Data

Banko and Brill were not alone. In 2007, Google researchers Halevy, Norvig, and Pereira published a paper showing how data could be "unreasonably effective" across many AI domains.


For more, see Halevy, Norvig, and Pereira, 2007.


It hit the AI field like an atom bomb.

Applancer is an open platform for discussion on all things Blockchain, Cryptocurrency, and ICO news. As such, the opinions expressed in this article are the author's own and do not necessarily reflect the views of Applancer.


