IOSG OFR 13th Panel Recap: "GPU Symphony: Decentralized Compute Power"

IOSG
9 min read · Sep 29, 2024


Brief Intro:

Caladan: One of the pioneering market-making and trading firms in digital assets, Caladan has been generating exceptional returns since 2017.

Gensyn: A blockchain-based marketplace protocol connecting developers (anyone who wants to train a machine learning model) with solvers (anyone who is able to train one), by tapping into the long tail of idle, machine-learning-capable compute around the world, such as smaller data centers, personal gaming computers, M1 and M2 Macs, and eventually even smartphones.

Hyperbolic: Currently building a GPU marketplace that offers an intuitive and automated interface for anyone to rent and supply GPUs in seconds.

Akash: Akash is an open network that lets users buy and sell computing resources securely and efficiently. Purpose-built for public utility.

RISC Zero: RISC Zero is a zero-knowledge verifiable general computing platform based on zk-STARKs and the RISC-V instruction set architecture.

Render Network: The world's first decentralized GPU rendering platform, designed to supercharge creative workflows. It harnesses idle GPU power worldwide, providing creators with near-limitless rendering capacity at a fraction of the usual cost.

Question 1: What are the killer features of decentralized compute compared to centralized compute?

Greg: Greg pointed to the vast, untapped potential of distributed computing across data centers, homes, and everyday devices like gaming consoles. The challenge is making this decentralized supply genuinely useful for tasks like distributed training and inference. Progress is being made, but unlocking a truly permissionless and trustless system still depends on making it practically viable. He cited the U.S. Air Force building a supercomputer out of PlayStations as an example of how underutilized devices, such as gaming consoles, can contribute significant computational power.

Jasper: Jasper echoed Greg's point about how aggregating resources and allowing permissionless access to compute could reduce costs and increase accessibility: instead of needing approval from centralized data centers, more people could benefit from computing resources. Looking ahead, in a world where AGI (Artificial General Intelligence) emerges, a permissionless network with a verification layer would be essential to ensure transparency and to verify each step in the AI pipeline, from data collection to inference. Such a network could be supported by players like Hyperbolic and Akash.

Question 2: Decentralized compute has a lot of advantages over centralized compute, but how do we address the question of security and privacy? In what ways are decentralized solutions like yours tackling these problems?

Shiv: To be honest, that really is the thesis of zero knowledge as well. Zero-knowledge cryptography allows computations to be run anywhere while cryptographically proving their correctness, without trust assumptions about where or by whom they were run. Shiv admitted he once underestimated the concept but now sees it growing rapidly, driven largely by blockchain's need to run applications off-chain while keeping the results secure and verifiable. The focus is on scaling computation with commodity hardware rather than relying on expensive, specialized machines. Zero-knowledge cryptography can help democratize compute by enabling verifiable computation anywhere, even on everyday devices like an Xbox, while still adhering to data regulations.

Ben: Ben framed privacy as a technological capability rather than a standalone product. Drawing on his experience at a privacy-focused startup, he noted that while people claim to value privacy, they rarely choose privacy-preserving solutions if doing so involves extra effort. Privacy should therefore be integrated into technology to create better outcomes for consumers and businesses, not sold as a separate product. A major hurdle has been the lack of investment in making privacy technologies efficient, although the crypto space has pushed this area forward. He argued that privacy technologies need to be baked into systems over time as part of the infrastructure rather than treated as products, and that decentralized protocols could provide new ways to build businesses around open-source progress.

Shiv: To further elaborate on that point, we also made an announcement yesterday regarding Boundless, which is now live on Twitter — you can find the details there. Our aim is to build a comprehensive stack that allows users to get started seamlessly. This might be a bold statement, but I would argue that zero-knowledge technology is to blockchain what blockchain is to traditional development — there is a knowledge curve to overcome. Our goal is to abstract much of this complexity and move toward a marketplace solution where computational tasks are distributed, driving down costs in the process.

Ben: There’s an interesting aspect to our approach: we operate on both sides of the market in a way. Gensyn is specifically designed with a forward-looking view to incorporate privacy technologies when they become mature. While we don’t build these technologies ourselves, we design our systems assuming certain ways they will eventually interact, positioning ourselves as the ultimate users of these innovations. We focus on designing from first principles to ensure that, when privacy technologies are ready, we are fully equipped to integrate and support them.

Question 3: My next question is for Render. You talked about getting a lot of users on board, and your specialty is scaling the platform you're building. Looking back over the last few years, gaming has brought a lot of users into crypto, and AI is largely seen as the next wave that will bring in millions more. What are you doing on the Render side that is differentiated enough to attract more users to crypto?

Trevor: Our approach is distinctly centered on a creator mindset, particularly in relation to AI and similar tools, by examining how they intersect with users. For years, we have engaged in rendering, incorporating a process known as denoising, which involves AI. We believe that convergence is forthcoming, making it increasingly difficult to differentiate between AI-generated content and that which is rendered.

Our focus is on the end users and the products they create, rather than the intricacies of the processes involved, as both utilize GPU resources. As user interfaces continue to simplify, we aim to onboard the “Adobe generation” — users of Photoshop who may not identify as 3D artists — and extend our reach to TikTok and user-generated content creators who produce content at scale. As these tools evolve to be more accessible and leverage various models, we anticipate a significant increase in user engagement. Ultimately, our goal is to prioritize the needs of creators while minimizing the emphasis on the underlying technology.

Jasper: I would like to echo that sentiment. We also prioritize a user-friendly approach, focusing on understanding the needs of both developers and users. This is why, in building our GPU network and AI inference service, we ensure compatibility with existing Web 2 interfaces. Our infrastructure supports a wide range of state-of-the-art open-source models and includes an OpenAI-compatible API, facilitating a smooth migration for developers from their current solutions to Hyperbolic.
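To make the "OpenAI-compatible" point concrete, here is a minimal sketch of what such a migration typically looks like: the standard openai Python client is simply pointed at a different base URL. The endpoint URL, environment variable, and model name below are illustrative placeholders rather than documented Hyperbolic values.

```python
# Minimal sketch: migrating an existing OpenAI-based integration to an
# OpenAI-compatible inference endpoint by swapping the base URL and key.
# The endpoint URL, env var, and model name are illustrative placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-gpu-marketplace.xyz/v1",  # hypothetical endpoint
    api_key=os.environ["INFERENCE_API_KEY"],                # provider-issued key
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",  # an open-source model served by the network
    messages=[{"role": "user", "content": "Explain decentralized GPU marketplaces in one sentence."}],
)
print(response.choices[0].message.content)
```

In practice, the only changes to an existing integration are the base URL and the API key, which is what makes the migration path smooth for developers.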

As we integrate more crypto elements, we aim to empower AI developers to tokenize their models, enabling them to monetize their work by selling tokens. Currently, platforms like Hugging Face offer a public AI repository without monetization options for model creators. By incorporating crypto elements into our ecosystem, we provide AI developers with the opportunity to generate revenue without the need for external investment.

Audience Question: I wanted to ask about privacy. How do you handle it: do you keep the data or the model private? And how do you generally approach privacy in a distributed compute context?

Gensyn: From our perspective, we have designed Gensyn with the flexibility to support machine learning operations over both obfuscated and encrypted tensors, as well as plain text tensors. While we do not specifically build privacy-focused tools, our architecture accommodates such capabilities. For instance, functional encryption can be integrated into neural network training, where the initial layer can utilize an encryption key, allowing subsequent training to occur homomorphically over encrypted data.
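As a rough illustration of computing over encrypted tensors, the sketch below uses the open-source TenSEAL library (our choice for illustration; the panel did not name a specific library) to evaluate one linear layer homomorphically, so the party performing the computation never sees the plaintext features.

```python
# Minimal sketch of homomorphic evaluation: a solver computes a linear layer
# over encrypted client data without ever seeing the plaintext.
# Uses the open-source TenSEAL library (CKKS scheme), chosen here only for illustration.
import tenseal as ts

# Client side: create an encryption context and encrypt the input features.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

features = [0.12, -0.54, 0.33, 0.91]
enc_features = ts.ckks_vector(context, features)

# Solver side: apply plaintext layer weights to the encrypted vector.
weights = [0.25, -0.10, 0.40, 0.05]
enc_output = enc_features.dot(weights)

# Client side: only the key holder can decrypt the result.
print(enc_output.decrypt())  # approximately the plaintext dot product
```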

There are certainly challenges associated with this approach, but it is feasible. Additionally, a portion of training or feature extraction can occur on the user side, which involves data obfuscation. Essentially, this means that data can be transformed into parameter space before being transmitted.
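The user-side idea can be pictured with a simple PyTorch setup of our own (not Gensyn's protocol): the data owner keeps a frozen local feature extractor, adds light obfuscation, and only the resulting feature tensors ever leave the client.

```python
# Minimal sketch: the client runs the first layers locally and transmits only
# obfuscated feature tensors, never the raw data. Illustrative only.
import torch
import torch.nn as nn

# Client side: a frozen local feature extractor held by the data owner.
local_extractor = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
local_extractor.requires_grad_(False)

raw_data = torch.randn(8, 32)         # private raw inputs (never leave the client)
features = local_extractor(raw_data)  # projected into feature space

# Optional obfuscation before transmission, e.g. additive Gaussian noise.
noisy_features = features + 0.05 * torch.randn_like(features)

# Remote side: the solver trains downstream layers on the transmitted tensors.
remote_head = nn.Linear(16, 2)
logits = remote_head(noisy_features)
print(logits.shape)  # torch.Size([8, 2])
```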

Ultimately, our compute-based protocol operates on tensors, and the contents of those tensors are at the discretion of the user. They can implement various privacy techniques, such as obfuscation, differential privacy, or encryption. However, creating a verification system that functions at a higher level poses challenges, particularly if it cannot validate specific operations, like matrix multiplications, without relying on plain text. Thus, while our approach remains close to verification processes, we focus on implementing robust privacy techniques.

Hyperbolic: Hyperbolic features a modular design that includes a GPU network to orchestrate global GPUs, enabling users to build various services on top of this infrastructure. We are also developing a modular privacy layer alongside a verification layer. For the privacy layer, we are exploring Trusted Execution Environments (TEEs) in collaboration with several partners.

In the near future, users will be able to deploy confidential virtual machines on our GPUs, allowing them to run any services they choose, such as AI inference services. This approach ensures that AI developers can protect the privacy of their data and models, while users can have confidence that their data will not be stored or misappropriated by developers or node operators.

Render: On the AI front, we utilize compute clients, which raises various questions for many of our users. However, on the rendering side, the stakes are much higher, as we support creators who produce Hollywood movies and rely on these processes for their livelihoods. Consequently, our architecture was designed from the outset with their needs in mind.

We stream work directly to the GPU, which is installed locally, resulting in a fundamentally different architecture compared to the typical machine learning and AI Docker container setups commonly seen today. This approach underscores our commitment to safeguarding the interests of artists and creators, ensuring that their core work and stories are protected throughout the process.

Question 4: As NVIDIA launches newer generations of GPUs, how do you think that will change the supply and demand dynamics for GPUs in general? Do you think it will become a race to the bottom? Part of the reason there is a market for decentralized GPUs is that GPUs are expensive, so how does that change over time?

Greg: We are already witnessing a decline in NVIDIA demand, particularly as we explore the ability to deploy and run inference on non-NVIDIA hardware. With AMD's MI250, for example, you can effectively run models like Llama 3.1 405B. New techniques from Exo Labs are also promising, enabling clustered inference on Mac systems. As models become open-sourced and optimizations for non-NVIDIA chips increase, I anticipate a gradual decline in NVIDIA's dominance.

Furthermore, the MI250 boasts 128 GB of memory, significantly more than the 80 GB available on competing chips, which presents a clear advantage. We have also seen impressive performance from emerging players like Groq, known for its inference speed, as well as from future-looking technologies such as Extropic, which is exploring innovative thermodynamic computing methods.

In light of these developments, I believe that while NVIDIA will remain a key player, its dominance will be challenged. It is essential to diversify away from reliance on a single company in this field, especially given the current difficulties in acquiring chips. Securing GPUs has become increasingly challenging, even for the wealthiest individuals, highlighting the pressing need for a more competitive landscape.

Shiv: I echo that sentiment as well. It’s important to remember that we operate in a free market, and stakeholders won’t passively allow NVIDIA to dominate without competition. Even as we see a diversification of products, overall spending on chips is likely to continue rising, as demand for computing power grows.

The emergence of companies like Exo Labs illustrates this point; developing new technology is complex and time-consuming, but the potential for profit is driving innovation in this space. As a result, we can expect an influx of new products that cater to various needs. The overall compute capacity will likely increase, with accelerators like those from Exo playing a significant role in meeting that demand.
