IOSG OFR 14th Fireside Chat: IOSG VC x AltLayer

IOSG
16 min read · Nov 26, 2024


  • YQ, Co-Founder, AltLayer
  • Gokhan Er, Managing Director, IOSG Ventures

Gokhan: I’m going to ask YQ some questions about rollups overall. If you guys have any questions, let us know too; happy to answer those. Thanks a lot for the presentation, YQ. One of the first things I’m interested in: you mentioned that Arbitrum and Optimism are actually not too far away from being stage-2 rollups. Maybe there are some technical things they’re still waiting for. What keeps them from implementing this at this point? It seems like it’s just a couple more governance decisions rather than anything technical. If you could shed a bit more light on that, it would be interesting.

YQ: Yeah, as I mentioned, so far I think the main technical blocker is that all the rollups need some multiprover system. For OP at least, they have a bunch of teams working on this, as I mentioned: ZK validity proofs and ZK fault-proof systems for Optimism, from teams like Succinct, RISC Zero, and Nebra. So in that case, as you mentioned, it’s much easier for OP, and also for Arbitrum, to move forward to stage 2. But for both of them, as I just mentioned, we need their Security Council to configure the upgrade delay from seven days to 30 days. Of course, there are some considerations on their side. We are closely following up with their technical teams.
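The multiprover idea above can be sketched as a quorum over independent proof systems: run several provers (say, a ZK prover and a TEE prover) against the same state claim and accept it only if enough of them agree. This is a toy illustration; the names and threshold logic are hypothetical, not any project's actual contract design.

```python
# Toy multiprover quorum: accept a state claim only if at least
# `threshold` independent provers validate it. Real systems enforce
# this in L1 smart contracts; everything here is a stand-in.
from typing import Callable

Prover = Callable[[str], bool]  # takes a claimed state root, returns valid/invalid

def multiprover_accept(claim: str, provers: list[Prover], threshold: int) -> bool:
    """Accept the claim only if at least `threshold` provers validate it."""
    votes = sum(1 for prove in provers if prove(claim))
    return votes >= threshold

# Stand-ins: in reality these would be a zkVM proof check and a TEE attestation.
zk_prover = lambda claim: claim == "0xgood"
tee_prover = lambda claim: claim == "0xgood"

print(multiprover_accept("0xgood", [zk_prover, tee_prover], threshold=2))  # True
print(multiprover_accept("0xbad", [zk_prover, tee_prover], threshold=2))   # False
```

The point of requiring agreement across heterogeneous provers is that a bug in any single proof system can no longer finalize an invalid state on its own.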

We can see that both OP and Arbitrum actively push new updates and features into the OP Stack and Arbitrum Orbit. I feel that’s probably a reason why they don’t want to enable this longer delay on upgrades: there are just so many new updates. For example, OP is mainly focused on interop for the Superchain; I believe they’ve spent a lot of effort on that over the past few months and will for the coming months. And on the Arbitrum side, they have Stylus for Wasm, and they also have this upcoming super-fast bridge between L3 and L1. These are the things I believe they spend a lot of time on.

Of course, technically they also have a bunch of teams working on either ZK or TEE proof systems. I don’t think the technical blocker is that big right now; it just takes a little time. Once these major updates are merged into their repos, into their current master branch, they can probably finish all the major upgrades, and then, at least on the non-technical side, they can quickly move to the next stage.

Gokhan: Perfect. Thanks a lot. Maybe another question, while we’re talking about all these rollups launching: obviously there are many rollup-as-a-service companies making it very easy to one-click launch these. Maybe you could share some insights about how difficult it is to maintain one of these chains as an L2. Obviously Base and Uniswap, when launching their own L2s, are making a lot of customizations to the chain, so it’s not easy. But could you give us an idea of what it takes to maintain these chains once they’re launched? What sort of infrastructure investment are we looking at? Is it something all developers should consider, or does it only make sense after a certain stage?

YQ: Yeah, really good question. So far, I would say that we and the other rollup providers have already made the deployment of a rollup as straightforward as possible. Also, OP, Arbitrum, and the ZK rollup teams are spending a lot of time improving their rollup stacks to make them as easy as possible for developers to launch. So back to the point: the difficulty of launching a rollup right now is minimal for developers with a lot of blockchain experience. I believe it would probably take them a few days to launch one. The tricky part is really the maintenance, and also the cost. We did a lot of research and measurement on the cost of manpower and infrastructure, especially for maintenance.

Launching one is very straightforward. But as I mentioned, every week there are major updates, so you need to upgrade from the old version to the new version. Typically there are no detailed instructions for these upgrades; sometimes there are caveats and hidden bugs. If you trigger one, can you quickly fix it? If you can’t, your chain will probably be stuck there. That’s the manpower side of maintenance: you need experts in rollups or blockchains, so that once something goes wrong, they can help fix it, especially during upgrades. Beyond that, it’s about cost, and on cost, we and the other providers typically have special deals with cloud service providers.

In that case, the cost can be minimal compared to, for example, having your own team set everything up yourself. For maintenance, as I mentioned, it’s also about the SLA and the liveness guarantee. That’s not as straightforward. For example, an existing rollup can be spun up in a few minutes; that’s good. But after that, how about the RPC, the explorer, the bridge? As I mentioned, you need to upgrade them and handle issues. For all these services, you also need to make sure they have a very good SLA, probably 99.9% or something like that. So you have to put a lot of effort into making sure these services are fine, and these services are not part of the rollup deployment itself.
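The arithmetic behind an uptime SLA like the "99.9%" figure mentioned here is worth making concrete: each extra nine cuts the allowed downtime by 10x. This is a purely illustrative calculation, not a published guarantee from any provider.

```python
# How much downtime per month does a given uptime target actually allow?
# A 99.9% SLA sounds strict, but still permits ~43 minutes of outage
# per 30-day month; 99.99% permits only ~4.3 minutes.

def allowed_downtime_minutes(uptime: float, days: int = 30) -> float:
    """Minutes of downtime permitted per `days`-day month at a given uptime."""
    return days * 24 * 60 * (1.0 - uptime)

for target in (0.999, 0.9999):
    print(f"{target:.2%} uptime -> {allowed_downtime_minutes(target):.1f} min/month")
```

This is why keeping RPC, bridge, and explorer services at a tight SLA is an ongoing operational cost rather than a one-time deployment step.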

For example, how do you make sure your RPC service is strong enough to handle perhaps tens of thousands of requests per second? That’s definitely not something the existing rollup stack teams can provide or give you guidance on. On our side, we typically deploy the rollup automatically, but that’s just the first step; it’s really about the maintenance, and the cost, which you need to constantly keep very low. Beyond that, it’s also about the ecosystem. If you go through us, we have probably 200 or 300 partners across wallets, bridges, explorers, DeFi, and games that we can introduce to you. But if you do everything yourself, you have to find all the partners yourself. So in the end, as you mentioned, it comes down to the cost, or the budget.

If you are a very big project, like Base or Uniswap, you’ll definitely host everything yourself, because you have the budget and the manpower, and you want everything in-house, especially for customization. But for small or medium-scale teams that want to launch a rollup for their application or ecosystem, it won’t be that straightforward. It would be better to go through us or some professional provider first. With a limited budget, we can provide the deployment, and beyond that, at minimal cost, we can also maintain the RPC, bridge, explorer, and all these services for them. As you can see, I feel we’re now at the stage where everything in the infrastructure space becomes a commodity. A few years ago, launching a rollup was so difficult, especially with ZK.

But right now, as I mentioned in the talk, everything is becoming a commodity. Everyone has the opportunity to choose different combinations of rollup stacks, DAs, and proof systems. In the end, it’s about cost and budget: what’s your budget for maintaining this rollup, for building a team, or for outsourcing to us, and how do you want to onboard other partners together with us?

Gokhan: Yeah, maybe another question. In a recent AMA on Reddit, someone asked about Base talking about this 1-gigabyte block size they want to get to, and I think Vitalik or someone else was doing the math that it could actually enable 10k transactions per second. But that assumes that all of Ethereum’s DA block space is consumed by this one rollup, right? The main reason people use rollups is scalability, and at this point we’re at around 260 TPS. Yes, that’s 10x or 20x more than what Ethereum L1 is doing, but it’s also nowhere near close to getting real adoption from the masses. So what do you think is the path to 10k TPS? How realistic is that? I know the roadmap is in that direction. If you had to guess the steps along the way, maybe 2,000 then 5,000, when do you think we’ll reach those milestones?

YQ: I still feel like that’s worth unpacking. As I mentioned, over the past months in Chiang Mai I had a lot of dinner conversations with Vitalik. He also showed me his blog post before posting it on his account. I feel he’s quite conservative on the numbers, because he only considered the rollups that post data to Ethereum. But we have so many alternative DAs, and from the L2Beat data I think over half, or even more than half, of rollups don’t use Ethereum as their DA. So to me he’s quite conservative on Ethereum’s throughput, if we consider the rollups that use Ethereum as a settlement layer, doing bridging and probably also proof verification, but not using Ethereum as DA.

I think we can easily achieve probably millions of transactions per second. You get my point, right? The dependency on Ethereum is very minimal: only if something goes wrong do we post the proof on Ethereum for verification. But back to the point: when people, especially other ecosystem projects, ask us whether Ethereum still has a scalability issue, I feel not really, at least throughput-wise. We really have all the numbers we want. If we look at Arbitrum right now, the block time is less than 250 milliseconds, faster than Solana. During some internal testing, we could also achieve higher throughput than Solana using the existing rollup stacks.
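The DA-bandwidth arithmetic behind the "1 GB blocks" question can be sketched as a back-of-the-envelope formula: sustainable TPS is DA bytes per slot, divided by slot time, divided by bytes per transaction. The blob capacity and per-transaction byte size below are illustrative assumptions, and the results won't exactly reproduce any figure quoted in the conversation.

```python
# Rough TPS estimate from data-availability bandwidth. All inputs here
# are assumptions for illustration, not measured protocol figures.

def tps_from_da(da_bytes_per_slot: float, slot_seconds: float, bytes_per_tx: float) -> float:
    """Transactions per second sustainable by a given DA bandwidth."""
    return da_bytes_per_slot / slot_seconds / bytes_per_tx

# Ballpark today: 3 blobs x 128 KB per 12-second slot,
# assuming ~150 bytes per compressed rollup transaction.
today = tps_from_da(3 * 128 * 1024, 12.0, 150)

# Hypothetical 1 GB of DA per slot, same per-tx assumption.
future = tps_from_da(1e9, 12.0, 150)

print(f"~{today:,.0f} TPS with today's blobs vs ~{future:,.0f} TPS at 1 GB/slot")
```

The same formula also shows why alternative DAs change the picture so much: once the DA term is no longer bounded by Ethereum blob space, the throughput ceiling moves by orders of magnitude.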

So I don’t think throughput is a problem for Ethereum anymore. But back to your point: that’s why last week we discussed these AI agents and all the applications. I think right now we need to redefine more applications. The existing rollups can easily achieve 1,000 or 2,000 TPS, but if you look at the numbers on L2Beat, the throughput we actually use right now is less than 100. That’s the problem. We already have a lot of block space, but what kind of applications can we run on top of it? That’s another observation from conversations with a lot of projects, especially those at hackathons and hacker houses: developers still have the mindset that we can only deploy smart contracts, and that even for a smart contract, we can only wait for confirmation from Ethereum, so there’s still a 12-second latency or something like that. People still feel those are the limitations for developers in the Ethereum ecosystem. However, I always tell them it’s now very straightforward to launch a rollup; you don’t really have all those limitations anymore. Sometimes you don’t even need to write Solidity code: you can write whatever code and put it into an AVS, and you can also use different EVM rollups.

So you don’t have the limitations. But I would say most developers still haven’t really recognized this change. As you mentioned, throughput is no longer a problem. Let’s build some mass-adoption mobile applications. Let’s build a crypto-native Grab, a crypto-native bot, a crypto-native Airbnb. The throughput is no longer a problem; the real problem, I would say, is how we onboard new developers to build these very ambitious, crypto-native, Internet-scale applications. It’s not about the existing DeFi or the existing applications we built under the constraints of Ethereum; it’s really about orienting toward a future that assumes there’s no more throughput limit, where it’s as good as Web2, probably sometimes even faster and cheaper.

So let’s build the next generation of crypto-native, Internet-scale applications. But right now, having just come back from a bunch of hacker houses, we feel people still stick to the old mindset from the past few crypto cycles. People just want to improve AMMs, build a better NFT marketplace, build some perp exchanges, build some new DePIN projects or social applications. But people haven’t thought: assume crypto is no longer the technology bottleneck, it’s much cheaper, and you also have the crypto incentives. How do we build the next-generation social app, or the next-generation application everyone can use?

Gokhan: You’re working with almost all of the technology providers in this space, right? When a developer or a company comes to you saying, hey, we want to launch our rollup, could you assist us in this process, do they usually have tech providers in mind? Or is it an exploration process where you give them the options and they have to figure it out? Or do they usually have an opinion about what they want to build on, and you’re basically giving them the ways to do it?

YQ: Yeah, it’s a good question. Early this year, I would say it was about half and half: half of the new inbounds were introduced by the rollup stack providers, like OP, Arbitrum, or some ZK rollups like the CDK, and the other half came from our own connections and outreach, where we recommend which rollup stack to use. Right now, however, I would say about 80% of the new inbounds are introduced by either the rollup stack providers or the DA providers.

So in that case, they typically have their preference, because usually they’ve already signed some grants and partnerships. At the same time, as you can see, some projects want to launch something urgently, so they may switch from one rollup stack to another. But in general, I would say the rollup stack providers are still a little bit dominant in this space. And for a lot of projects, to be honest, their preference on rollup stacks is not a technical decision, like which stack is better than the others. As I mentioned, most of this stuff has become a commodity.

So the decision-makers really want to figure out what non-technical advantages or benefits they can get. For example, financially: can we get tokens from you? Look at the Kraken case: OP offered, I think, over 20 million OP tokens to them. And for a lot of these CDK or zkSync projects, for example Lens Protocol or someone, I remember they also got a big grant. That sort of thing is typically a non-technical decision. Beyond that, it’s also about support. For example, if I use your rollup stack or your DA, what kind of marketing support can you give us? Can we do an AMA, can we do an event together, and can you introduce more clients to us?

So far I would say the technical side accounts for less than probably 10% of a project’s decision on which rollup stack or DA to choose; 90% of the decision-making process is really based on the non-technical things people can offer the project. For us as a provider, that means we need to offer not just the technical stuff, not just rollup deployment and maintenance. They also want to be in our events, they want introductions from us to VCs and partners, and some marketing opportunities.

Gokhan: And speaking of the commoditization of the stack, looking back a couple of years, there were fraud proofs and then ZK proofs, and I thought it would be the case that people would launch TEE L2s, right? But now we’re at the point where you have the OP stack and multiple provers all working for it. So one specific question: if you have structures like the OP stack getting proven through Succinct, does that make it a native ZK rollup, versus the ZK rollups like, let’s say, zkSync and Starkware, where the proof generation happens with their own customized circuits? How do these two structures compete with each other? If the OP stack is proven through Succinct, can we actually call it a ZK rollup as well?

YQ: Yeah. So the thing is, right now we have Succinct, RISC Zero, and probably also Nebra, and these different variants of the OP stack, so I want to specify a bunch of different definitions. We all know OP has the Superchain. If you want to fit into the Superchain, there are some specific criteria and requirements. Basically, if you want to go into the standard tier, the Standard Charter, you can’t change your code: you have to use the vanilla OP Stack, and you need to fit all the parameters they set for the Superchain. For example, the block time has to be two seconds, and you have to post the data to Ethereum. There are a bunch of requirements for a rollup to fit into the Superchain.

Beyond that, there are other tiers in the Superchain: beyond the standard tier and Standard Charter, there are tiers where you can use an alternative DA, and tiers where you can use Succinct or another proof system. So if we set aside the standard tier and look at the variants of the OP Stack, there’s OP Succinct, OP RISC Zero, OP Nebra, and OP Automata on the TEE side. And even for OP Succinct, or this kind of ZK proof system, there are at least two options. One is, as you can see, they change the smart contracts and change the proof system: basically, they replace the fault-proof system with a ZK validity-proof system.

In that case, I would say it’s exactly the same as a normal ZK rollup: all the proofs are generated by the prover and posted to Ethereum for verification, and then you can do the withdrawal. There’s no more seven-day challenge period. However, a lot of the projects we talked to prefer to use a hybrid mode. It’s still fraud-proof based, but as I mentioned, once there’s a challenge, we leverage a ZK prover to generate the proof for the past few blocks and then let L1 arbitrate who wins and who loses. You get my point, right? So that’s a kind of hybrid optimistic-ZK proof.
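The hybrid flow described here can be sketched in a few lines: claims finalize optimistically, and a ZK proof is generated only when someone challenges, with L1 arbitrating the dispute. All the names below (`Claim`, `zk_prove`, `l1_verify`) are hypothetical stand-ins, not any project's real API.

```python
# Sketch of a hybrid optimistic + ZK fraud-proof settlement flow.
# Happy path: no prover cost at all. Dispute path: a ZK prover
# re-executes the disputed block range and L1 decides the outcome.
from dataclasses import dataclass

@dataclass
class Claim:
    start_block: int
    end_block: int
    state_root: str

def zk_prove(claim: Claim) -> str:
    """Stand-in for a zkVM re-executing the blocks and proving the state root."""
    return f"proof:{claim.start_block}:{claim.end_block}:{claim.state_root}"

def l1_verify(proof: str, claim: Claim) -> bool:
    """Stand-in for the L1 verifier contract that arbitrates the dispute."""
    return proof == f"proof:{claim.start_block}:{claim.end_block}:{claim.state_root}"

def settle(claim: Claim, challenged: bool) -> str:
    if not challenged:
        # Happy path: no proof is ever generated, just wait out the window.
        return "finalized after challenge window"
    # Dispute path: the prover runs only now, and L1 decides who wins.
    proof = zk_prove(claim)
    return "claim upheld" if l1_verify(proof, claim) else "claim rejected"
```

The appeal of this mode is economic: the expensive ZK proving cost is paid only in the (rare) dispute case, while honest operation stays as cheap as a plain optimistic rollup.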

So it really depends on the client or the project and what mode they want to choose: OP’s fraud-proof mode, the ZK validity-proof mode, or the ZK fraud-proof mode. And now there’s one more option, as you mentioned: the TEE prover. That’s why I said everything has become a commodity. All the projects have the freedom to choose whatever combination of proof systems they want. Previously, none of us could really claim we could get a rollup to stage 2 from day one. But right now, most of these proof systems are actually production-ready.

So now a project really has the freedom to choose a multiprover system from day one, and go directly to stage 2. Back to your question: it’s no longer the case that you have only one option. It’s so diversified now: you can have a multiprover setup, you can change the proof system, you can change the frequency, for example how frequently you want to post your ZK proof. In the end, as I mentioned earlier, it’s still about the budget. Everything becomes a commodity. If you have a lot of budget, just do ZK validity proofs. If you have even more budget, make it as frequent as possible; we can engage tens of thousands of GPUs to generate the proof within a few minutes. It’s doable right now, it’s feasible.

In that case, you can run a ZK proof system that’s as fast as a normal optimistic rollup. You get my point, right? But how about the cost? Probably every month you need to pay $100,000 for the prover system and another $200,000 for the proof verification on Ethereum. But right now most of these are production-ready; you can just choose whatever you want. If your budget is small, say less than $3,000, just use a standard OP stack with fraud proofs. If your budget is super high, just use ZK. Or if you’re in the middle, do a hybrid mode.

Gokhan: Thank you for all your insights. That’s all for this session.

YQ: Yeah, thanks for your great questions, really enjoyed it. Thank you.

Written by IOSG

Community & Thesis Driven Investing iosg.eth
