Super Protocol: Self-Sovereign AI on NVIDIA Confidential Computing

Confidential and Self-Sovereign AI: A New Approach to AI Development

The problem being solved is best illustrated by personal AI agents. These services help users with many tasks, from writing emails to preparing taxes and reviewing medical records. Needless to say, the data being processed is sensitive and personal.

In a centralized system, this data is processed in the clouds of AI service providers, which are generally not transparent. Once a user’s data leaves their device, they lose control over it: the data could be used for training, leaked, sold, or otherwise misused, and there is no way to track it from that point on.

This problem of trust has impeded the growth of the AI industry, especially for startups and AI developers who do not yet have the reputation or track record to back up their honest intent. A confidential and self-sovereign AI cloud provides a solution for customers who must secure their data and ensure data sovereignty.

Solving the Self-Sovereign AI Cloud Need

Super Protocol has built an eponymous AI cloud and marketplace based on the principles of confidentiality, decentralization, and self-sovereignty. In the Super Protocol cloud, confidential computing (CC) technology protects data during execution, while blockchain-based decentralized networks provide orchestration, transparency, and verifiability of all processes.

NVIDIA Confidential Computing uses hardware-based trusted execution environments (TEEs) on CPUs and NVIDIA GPUs to protect data in use, rendering it invisible and inaccessible to malicious actors and even the owners of the host machines.
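
Before sending sensitive data to a confidential workload, a client can verify a TEE attestation report proving that the workload runs inside genuine CC hardware with the expected code. The sketch below is purely illustrative: the endpoint path, report fields, and expected measurement are assumptions, not Super Protocol’s or NVIDIA’s actual attestation API.

```python
# Illustrative client-side attestation check. Endpoint path, report
# fields, and the expected measurement are hypothetical placeholders,
# not a real Super Protocol or NVIDIA API.
import json
import urllib.request

ENCLAVE_URL = "https://agent.example.com"   # hypothetical service endpoint
EXPECTED_MEASUREMENT = "sha256:..."         # digest of the approved workload image

def fetch_attestation_report(url: str) -> dict:
    """Request the enclave's signed attestation report (illustrative)."""
    with urllib.request.urlopen(f"{url}/attestation") as resp:
        return json.load(resp)

def verify_report(report: dict) -> bool:
    """Compare the reported workload measurement with the expected value.
    A production verifier would also validate the hardware vendor's
    certificate chain and the report signature."""
    return report.get("measurement") == EXPECTED_MEASUREMENT

if __name__ == "__main__":
    report = fetch_attestation_report(ENCLAVE_URL)
    if verify_report(report):
        print("Attestation passed: safe to send encrypted data to the enclave.")
    else:
        raise SystemExit("Attestation failed: do not send sensitive data.")
```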

Use Case: Fine-Tuning and Deploying an AI Agent-as-a-Service in Super Protocol

Here’s a practical use case: An AI developer wants to launch a commercial AI agent service by leasing a pretrained base model from the Super Protocol AI Marketplace and fine-tuning a new layer for a specific purpose that involves processing the end users’ private and sensitive data.

The pretrained model is proprietary and may not be downloaded, only leased on certain conditions set by its owner. Fine-tuning may include various methods such as knowledge distillation, low-rank adaptation (LoRA), retrieval-augmented generation (RAG), and other approaches that don’t change the structure or weights of the base model.
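
As a simplified illustration of why the leased model stays untouched, LoRA-style fine-tuning trains only a small set of adapter weights while the base model’s weights remain frozen. The sketch below assumes the Hugging Face transformers and peft libraries and a placeholder model identifier; it is not Super Protocol’s actual fine-tuning pipeline.

```python
# Illustrative LoRA fine-tuning sketch using Hugging Face transformers + peft.
# The model name is a placeholder; the leased base model's weights stay
# frozen while only the small adapter layers are trained.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "base-model-id"   # hypothetical identifier of the leased model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA attaches low-rank adapter matrices to selected attention projections.
lora_config = LoraConfig(
    r=8,                      # adapter rank
    lora_alpha=16,            # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
# ... train on the private dataset inside the confidential environment,
# then save just the adapter: model.save_pretrained("lora-adapter/")
```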

Uploading and Publishing

As a prerequisite, the owner of the base model uploaded their pretrained model to their account in a decentralized file storage (DFS) system and published an offer (an open listing for the model) on the Super Protocol AI Marketplace (steps 1-3 in Figure 2). This enables the model to be leased on preset conditions, which in this use case are payments for each hour of usage.
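
To make the lease terms concrete, the sketch below shows how an hourly-priced offer could be described as structured metadata. The field names and the publish_offer helper are hypothetical illustrations, not the actual Super Protocol AI Marketplace API.

```python
# Hypothetical sketch of an hourly-priced marketplace offer description.
# Field names and the publish_offer helper are illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelOffer:
    name: str
    storage_uri: str          # location of the encrypted model in DFS
    price_per_hour: float     # lease price, charged per hour of usage
    download_allowed: bool    # proprietary model: lease only, no download

def publish_offer(offer: ModelOffer) -> str:
    """Placeholder for submitting the offer listing to the marketplace."""
    payload = json.dumps(asdict(offer), indent=2)
    print("Publishing offer:\n", payload)
    return "offer-id-placeholder"

offer = ModelOffer(
    name="pretrained-base-model",
    storage_uri="dfs://provider-account/base-model.enc",
    price_per_hour=2.50,
    download_allowed=False,
)
publish_offer(offer)
```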

Now, you, as the AI developer, securely upload datasets to your account in a DFS system (steps 4-5 in Figure 2). These are private datasets to be used to fine-tune the base model.
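
Datasets are encrypted on the developer’s side before they ever reach the DFS. Below is a minimal sketch of client-side encryption with AES-256-GCM using the Python cryptography package; the file names are placeholders, and Super Protocol’s own tooling may use different primitives and key-management flows.

```python
# Minimal client-side encryption sketch (AES-256-GCM) before DFS upload.
# File paths are placeholders; Super Protocol's own tooling may differ.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # keep this key private
nonce = os.urandom(12)                      # unique nonce per encryption

with open("dataset.jsonl", "rb") as f:      # placeholder dataset file
    plaintext = f.read()

ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

with open("dataset.jsonl.enc", "wb") as f:
    f.write(nonce + ciphertext)             # store nonce alongside ciphertext

# The encrypted file is then uploaded to the DFS account; the key is
# released only to the attested confidential environment that runs the job.
```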

Results for the AI Agent-as-a-Service Use Case

Fine-tuning and deploying an AI agent-as-a-service in Super Protocol produces the following results:

* The developer adds new capabilities to the base model by training a new layer and launches a confidential AI agent as a commercial service.
* The base model owner gets paid for each hour of usage of their pretrained model.
* Providers of the CC resources are compensated for the use of their machines on an hourly basis.
* End users receive web access to a useful AI agent with convenient payment options and confidence that their sensitive data will not be leaked or used for model training.
* The Super Protocol cloud ensures fault tolerance and decentralization of the deployed AI services.

Security, Transparency, and Verifiability

Super Protocol achieves security and transparency through process integrity and the authenticity of components, which may be verified by independent security researchers:

* Blockchain and smart-contract transparency
* Content verification by the trusted loader (see the sketch after this list)
* TCB verification
* Open-source verification
* AI engine open-source verification
* E2E encryption
* TEE attestation
* Distributed secrets
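
As one example of how independent verification can work in practice, an artifact’s integrity can be checked by hashing it locally and comparing the digest against the value recorded on the blockchain. The sketch below is conceptual: the on-chain lookup is a hypothetical placeholder, not Super Protocol’s trusted-loader implementation.

```python
# Conceptual content-verification sketch: hash a downloaded artifact and
# compare it with the digest recorded on-chain. The on-chain lookup is a
# hypothetical placeholder, not the actual trusted loader.
import hashlib

def sha256_of_file(path: str) -> str:
    """Stream the file through SHA-256 to compute its digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def onchain_digest(artifact_id: str) -> str:
    """Placeholder for reading the published digest from a registry contract."""
    raise NotImplementedError("query the registry smart contract here")

def verify_artifact(path: str, artifact_id: str) -> bool:
    """True only if the local file matches the digest published on-chain."""
    return sha256_of_file(path) == onchain_digest(artifact_id)
```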

Conclusion

Historically, most AI models have been open-source and available for anyone to take and reuse freely. However, the emerging trend is that models and datasets are becoming increasingly proprietary.

CC and self-sovereign AI provide an opportunity for you to protect and commercialize your work, and they further incentivize you to provide AI services that are secure, transparent, and verifiable. This is especially important in the face of increasing government scrutiny of the AI industry.

FAQs

Q: What is confidential computing?
A: Confidential computing is a technology that protects data during execution, making it invisible and inaccessible to malicious actors and even the owners of the host machines.

Q: What is self-sovereign AI?
A: Self-sovereign AI is an approach to AI development, training, and inference where the user’s data is decentralized, private, and controlled by the users themselves.

Q: How does Super Protocol ensure security and transparency?
A: Super Protocol achieves security and transparency through process integrity and the authenticity of components, which may be verified by independent security researchers.

Q: What is the benefit of using Super Protocol?
A: Super Protocol provides a solution for customers who must secure their data and ensure data sovereignty. It allows them to protect and commercialize their work and incentivizes them to provide AI services that are secure, transparent, and verifiable.
