Manifesting Responsible AI: Blockchain Meets Artificial Intelligence

Elon Musk warned senators in a private bipartisan gathering on Capitol Hill, held in mid-September, that artificial intelligence (AI) poses a “civilizational risk” to governments and societies, and called for a U.S. “referee” for AI, according to a senator in the room.

“It’s important for us to have a referee,” Musk told reporters, and added that a regulator would “ensure that companies take actions that are safe and in the interest of the general public.”

Musk made his remarks at a closed-door summit, dubbed the AI Insight Forum, hosted by Senators Chuck Schumer, Mike Rounds, Todd Young, and Martin Heinrich. It featured Big Tech titans including Mark Zuckerberg, Bill Gates, Sundar Pichai, Jensen Huang, Arvind Krishna and Sam Altman. These executives, competitors who agree on little else, do agree that AI should be regulated.

“We got some consensus on some things…I asked everyone in the room does government need to play a role in regulating AI, and every single person raised their hand, even though they had diverse views,” said Schumer. “So that gives us a message here that we have to try to act, as difficult as the process might be.”

That same week, the Senate Subcommittee on Privacy, Technology, and the Law held a hearing called “Oversight of AI: Legislating on Artificial Intelligence,” inviting testimony from Microsoft’s president Brad Smith, Nvidia’s chief scientist William Dally, and Woodrow Hartzog, a professor of law at Boston University School of Law. Both Microsoft and Nvidia commended the Senate on its work to create a legal framework that would require “high-risk” AI to be certified by an oversight board, while drawing a distinction between advanced AI and less capable systems.


Hartzog argued for a rigorous, mandatory regulatory framework, urging Congress to steer clear of half-measures and industry-led approaches that lack enforceable liability and other important regulatory mechanisms. Advocacy groups agree with Hartzog. “Big tech has shown us what ‘self-regulation’ looks like, and it looks a lot like their own self-interest,” said Bianca Recto, Accountable Tech’s communications director.

Legislators seem to agree with this sentiment. “Make no mistake, there will be regulation,” said Senator Richard Blumenthal. “The only question is how soon and what.”

The U.S. is lagging on AI regulations. Senator Rounds cautioned it would take time for Congress to act. “Are we ready to go out and write legislation? Absolutely not,” he said. “We’re not there.” The hearing came just days after the leaders of the subcommittee unveiled their one-page legislative framework for regulating AI.

By contrast, the EU has been working for a few years on an EU AI Act. In June, it started talks with EU countries in the Council on the final form of the law. The aim is to reach agreement by the end of this year, with enactment to follow soon after.

The Biden administration secured a second round of voluntary commitments from eight additional AI companies to manage the risks posed by AI and help advance the development of safe, secure, and trustworthy AI. The first set of commitments came from seven leading AI companies including Google, Amazon, Meta, and Microsoft. These voluntary commitments underscore three principles that are fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing Responsible AI.

Regulation is coming, but the question is how we can obtain safe, secure, and trustworthy AI. This is where blockchain technology can assist in the implementation and manifestation of Responsible AI. Here’s how it could work:

Blockchain Meets AI

Blockchain is conceivably everything that AI is not: transparent, traceable, trustworthy, and tamper-free. It could help offset the opaqueness of AI’s black-box solutions and establish the safety, security and trustworthiness needed for AI applications. More specifically, blockchain can assist in the following areas, among others:

Verifying Data and Ensuring Reliable Sources of Information

AI is data-driven. It heavily relies on data for learning and making accurate predictions. The quality and trustworthiness of the data can greatly impact the outcomes – the outputs are only as good as the data fed into the algorithms. AI systems fed with unreliable or biased data can generate flawed insights, leading to undesirable consequences.

By integrating blockchain technology into the data pipeline, AI systems can leverage blockchain’s immutability and transparency to verify the authenticity and integrity of their data, ensuring that algorithms have a reliable source of information to learn from and make informed decisions.
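As a rough sketch of the idea, the snippet below fingerprints a dataset and registers the fingerprint in a write-once registry; the `SimulatedLedger` class is a plain Python stand-in for a real blockchain, and all names here are illustrative assumptions, not an actual product API. A training pipeline would verify its data against the registered fingerprint before use.

```python
import hashlib
import json

def dataset_fingerprint(records):
    """Compute a deterministic SHA-256 fingerprint of a dataset."""
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

class SimulatedLedger:
    """Stand-in for an on-chain registry: entries can be added but never changed."""
    def __init__(self):
        self._entries = {}

    def register(self, dataset_id, fingerprint):
        if dataset_id in self._entries:
            raise ValueError("fingerprint already registered (immutable)")
        self._entries[dataset_id] = fingerprint

    def verify(self, dataset_id, records):
        """True only if the records match the fingerprint registered earlier."""
        return self._entries.get(dataset_id) == dataset_fingerprint(records)

records = [{"text": "example", "label": 1}]
ledger = SimulatedLedger()
ledger.register("train-v1", dataset_fingerprint(records))
assert ledger.verify("train-v1", records)                    # untampered data passes
assert not ledger.verify("train-v1", records + [{"x": 0}])   # tampered data fails
```

The point of anchoring the fingerprint on a blockchain rather than in a database is that no single party – including the model developer – can quietly rewrite it later.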

Enhancing Data Privacy and Security

Robust privacy and security measures are critical for all emerging technologies, and especially so given AI’s reliance on extensive personal and sensitive data. Blockchain’s cryptographic algorithms and decentralized architecture ensure that sensitive information remains encrypted and accessible only to authorized parties, reducing the risk of data breaches and safeguarding data privacy.

Ensuring Transparency and Accountability – Immutable Audit Trails

Blockchain’s immutable nature allows for the creation of transparent audit trails, where every transaction or data interaction is recorded and timestamped. This enables stakeholders to trace back and verify the decisions made by AI systems, promoting accountability and instilling trust in the technology.
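A minimal sketch of such an audit trail is a hash chain: each recorded decision includes the hash of the previous entry, so altering any past record breaks every link after it. This is the core mechanism blockchains use; the class below is an illustrative simplification, not a production design.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained log of AI decisions (blockchain-style)."""
    def __init__(self):
        self.entries = []

    def record(self, payload):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
        body = json.dumps(entry, sort_keys=True).encode("utf-8")
        entry["hash"] = hashlib.sha256(body).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute every link; any tampering with history breaks the chain."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()
            if digest != entry["hash"] or entry["prev_hash"] != prev_hash:
                return False
            prev_hash = entry["hash"]
        return True

trail = AuditTrail()
trail.record({"model": "credit-v2", "input_id": 17, "decision": "approve"})
trail.record({"model": "credit-v2", "input_id": 18, "decision": "deny"})
assert trail.verify()
trail.entries[0]["payload"]["decision"] = "deny"   # attempt to rewrite history
assert not trail.verify()                          # chain detects the tampering
```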

Restraining AI Bias and Manipulation

Biased data or algorithms can perpetuate and amplify societal biases, leading to unfair outcomes and discrimination. Blockchain can act as a countermeasure by ensuring transparency in the data used for training AI models. By recording and validating the sources and characteristics of the data on a blockchain, biases can be detected and addressed more effectively.
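For example, recording a dataset’s composition alongside its on-chain fingerprint lets auditors spot skew before a model is trained on it. The sketch below summarizes group membership and flags dominance above a threshold; the functions and the 75% cutoff are illustrative assumptions, not a standard.

```python
from collections import Counter

def record_characteristics(records, group_key):
    """Summarize dataset composition so it can be logged (e.g. on-chain)
    alongside the dataset's fingerprint."""
    return dict(Counter(r[group_key] for r in records))

def flag_imbalance(characteristics, threshold=0.75):
    """Flag a dataset whose largest group exceeds the given share."""
    total = sum(characteristics.values())
    return max(characteristics.values()) / total > threshold

data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
stats = record_characteristics(data, "group")
assert stats == {"A": 90, "B": 10}
assert flag_imbalance(stats)                      # 90% from one group: flagged
assert not flag_imbalance({"A": 50, "B": 50})     # balanced data passes
```

Because the recorded characteristics would live on an immutable ledger, a vendor could not later claim its training data was more balanced than it actually was.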

A Real-World Use Case Implementation

Blockchain has already been utilized for AI data and model governance to ensure Responsible AI. In February, FICO received a patent for “Blockchain for Data and Model Governance,” officially registering a process it has been using for years to ensure Responsible AI practices. FICO uses an Ethereum-based ledger to track end-to-end provenance “of the development, operationalization, and monitoring of machine learning models in an immutable manner.” Notably, the terms “AI” and “machine learning” are often used interchangeably.

Using blockchain technology enables auditability and furthers model and corporate trust, Scott Zoldi, chief analytics officer of FICO, wrote in an AI publication earlier this year. AI tools need to be well-understood, and they need to be fair, equitable and transparent for a just future, Zoldi said, adding, “And that’s where I think blockchain technology will find a marriage potentially with AI.”

Deepfakes: Verifying and Authenticating the Sources

At the subcommittee hearing, several senators brought up the issue of disinformation ahead of the election. Blumenthal said Congress was facing a huge dilemma as deepfakes become more sophisticated and harder to distinguish from authentic images, audio, or videos.

“We need to do something about it, we can’t delude ourselves by thinking with a false sense of comfort that we’ve solved the problem if we don’t provide effective enforcement,” Blumenthal continued.

A suitable solution would be for the generating algorithm to issue each AI creation with a watermark, immediately signifying the creation as non-authentic and generated by AI. Better yet, this should be a cryptographic stamp verified and authenticated with blockchain technology, a solution I discussed in an article published in April.

The stamp would identify that the image or video is computer-generated, making it transparent that this is a deepfake video or image. The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing when using trusted software. Dutch company Revel.ai and Truepic, a California company, have been exploring such solutions.
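In simplified form, such a stamp binds provenance metadata (e.g. “AI-generated”) to the exact bytes of the file, so any edit invalidates it. The sketch below uses an HMAC as a stand-in; real content-credential systems use asymmetric digital signatures so anyone can verify without holding the signing key, and the key and function names here are illustrative assumptions.

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # real systems use asymmetric keys, not a shared secret

def seal(content: bytes, metadata: str) -> str:
    """Bind provenance metadata to the content bytes with a keyed hash."""
    message = metadata.encode("utf-8") + b"\x00" + content
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()

def verify_seal(content: bytes, metadata: str, stamp: str) -> bool:
    """A trusted viewer recomputes the seal; tampering invalidates it."""
    return hmac.compare_digest(seal(content, metadata), stamp)

image = b"\x89PNG...fake-image-bytes"
stamp = seal(image, "AI-generated")
assert verify_seal(image, "AI-generated", stamp)           # credentials intact
assert not verify_seal(image + b"!", "AI-generated", stamp)  # edited file fails
assert not verify_seal(image, "human-made", stamp)         # relabeling fails
```

Anchoring such stamps (or hashes of them) on a blockchain would additionally give verifiers a tamper-proof, publicly checkable record of when and by whom the content was sealed.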

It seems that the Biden administration supports such a solution. In mid-September, when announcing the tech companies’ voluntary commitments for responsible AI, it said: “The companies commit to developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system. This action enables creativity and productivity with AI to flourish but reduces the dangers of fraud and deception.”

Protecting Creators’ Copyrights: How Blockchain Can Assist

Generative AI tools such as ChatGPT, Bard, Midjourney and the like enable content creation – images, text, audio, or video. AI is data-driven – the algorithms that enable content creation are trained on actual content that has been created by authors, artists, journalists, and all other creative vocations. This raises the question of copyright protection and remuneration of the content creators.

Several lawsuits have been filed by creators. For example, comedian and author Sarah Silverman and other authors are suing OpenAI and Meta for copyright infringement; Getty Images has filed a case against Stability AI, alleging that the company copied 12 million images to train its AI model “without permission … or compensation.”

Legislators around the world are trying to understand how creators’ rights can be protected. The U.S. Copyright Office is launching a consultation on AI and copyright enforcement; the EU AI Act requires that generative AI, like ChatGPT, comply with transparency requirements, such as “publishing summaries of copyrighted data used for training;” and six deputies from the French National Assembly unveiled a bill which aims to frame the development of AI in the context of copyright law.

The question is how these companies can comply with transparency requirements in the context of copyright. As described above, blockchain technology could enable the traceability and transparency, as well as the authenticity and accountability, of both the data and the algorithms.

AI regulation is coming. When blockchain meets AI, we can achieve Responsible AI that is safe, secure, and trustworthy. It may even secure us from the “civilizational risk” Elon Musk warned us about.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.