A developer refutes Vitalik: the assumptions are wrong, and RISC-V is not the best choice
Original author: Ethereum developer levochka.eth
Compiled by: Odaily Planet Daily (@OdailyChina); editor: azuma (@azuma_eth)

Editor's note:
Yesterday, Ethereum co-founder Vitalik published a radical article on upgrading Ethereum's execution layer (see "Vitalik's Radical New Article: Execution Layer Scaling 'Doesn't Break, Doesn't Stand', EVM Must Be Iterated in the Future"), in which he proposed replacing the EVM with RISC-V as the virtual machine language for smart contracts.
As soon as the article came out, it caused an uproar in the Ethereum developer community, with many technical heavyweights voicing differing views on the plan. Shortly after it was published, Ethereum developer levochka.eth posted a lengthy rebuttal under the original article, arguing that Vitalik had made the wrong assumptions about the proof system and its performance, and that RISC-V may not be the best choice because it cannot deliver both "scalability" and "maintainability".
The following is levochka.eth's original reply, compiled by Odaily Planet Daily.
Please don't do this.
This plan doesn't make sense, because you're making the wrong assumptions about the proof system and its performance.
Examining the assumptions
As I understand it, the main arguments for this scheme are "scalability" and "maintainability".
First, I want to talk about maintainability.
In fact, all RISC-V zkVMs require "precompiles" to handle compute-intensive operations. The list of SP1's precompiles can be found in Succinct's documentation, and you'll find that it covers almost all of the important "computational" opcodes in the EVM.
As a result, any modification to a base-layer cryptographic primitive requires a new "circuit" to be written and audited for the corresponding precompile, which is a serious limitation.
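To make that boundary concrete, here is a minimal sketch (in Rust, with hypothetical names — this is not SP1's actual interface) of how a zkVM guest program routes a heavy operation to a precompile instead of proving it instruction by instruction:

```rust
// Illustrative sketch of the precompile boundary in a RISC-V zkVM guest.
// Names are hypothetical; this is not SP1's real API. Uses the
// `tiny-keccak` crate purely as a software stand-in for the circuit.
use tiny_keccak::{Hasher, Keccak};

/// In a real zkVM this call would trap into a dedicated, hand-audited
/// circuit; the software hash below is only a stand-in so the sketch runs.
fn keccak256_precompile(input: &[u8]) -> [u8; 32] {
    let mut hasher = Keccak::v256();
    let mut out = [0u8; 32];
    hasher.update(input);
    hasher.finalize(&mut out);
    out
}

/// Typical guest-side usage: the hash never appears in the RISC-V
/// instruction trace, which is exactly why changing a base-layer
/// primitive means writing and auditing a new circuit rather than
/// just recompiling the client.
fn hash_storage_key(address: [u8; 20], slot: [u8; 32]) -> [u8; 32] {
    let mut buf = Vec::with_capacity(52);
    buf.extend_from_slice(&address);
    buf.extend_from_slice(&slot);
    keccak256_precompile(&buf)
}
```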
Indeed, if performance is good enough, maintaining the "non-EVM" parts of the client may become relatively easy. I'm not sure the performance is there, and I'm less confident about this part, for the following reasons:
- "State tree computation" can indeed be greatly accelerated with a proof-friendly precompile such as Poseidon (see the sketch after this list).
- However, it is not clear whether "deserialization" can be handled in an elegant and maintainable manner.
- In addition, there are some tricky details (such as gas metering and various checks) that nominally fall under "block execution time" but should actually be classified as "non-EVM" parts, and these tend to be more exposed to maintenance pressure.
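As a rough illustration of the first bullet, here is a toy sketch of why a circuit-friendly hash precompile makes state-tree work cheap to prove (the field type and the hash body are placeholders, not a real Poseidon implementation): verifying a Merkle path is just `depth` precompile calls, so the cost tracks the hash circuit rather than RISC-V instruction counts.

```rust
// Sketch: why a proof-friendly hash helps "state tree computation".
// `poseidon2` stands in for a circuit-friendly hash precompile; the
// function body and field type are illustrative placeholders only.

type Fr = u64; // stand-in for a prime-field element

/// Hypothetical precompile: hashing two field elements costs a handful
/// of constraint rows in the proof, versus roughly thousands for keccak256.
fn poseidon2(left: Fr, right: Fr) -> Fr {
    // Real Poseidon runs a few rounds of an algebraic permutation;
    // this toy mix exists only so the sketch runs.
    left.rotate_left(17) ^ right.wrapping_mul(0x9E37_79B9_7F4A_7C15)
}

/// Verifying a Merkle path is `depth` precompile calls, so re-proving
/// the state tree scales with the hash circuit, not with the number of
/// RISC-V instructions an interpreter would execute.
fn merkle_root(mut leaf: Fr, path: &[(Fr, bool)]) -> Fr {
    for &(sibling, leaf_is_right) in path {
        leaf = if leaf_is_right {
            poseidon2(sibling, leaf)
        } else {
            poseidon2(leaf, sibling)
        };
    }
    leaf
}
```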
Second, the part about scalability.
I need to reiterate: it is impossible for RISC-V to handle EVM workloads without precompiles. Absolutely impossible.
So the statement in the original article that "the final proof time will be dominated by the current precompile operations" is technically correct, but overly optimistic — it assumes precompiles will disappear in the future, when in fact (in that future scenario) precompiles will still exist, and they will cover exactly the compute-intensive opcodes of the EVM (signatures, hashes, and possibly big-number operations).
It's hard to judge the Fibonacci example without getting into the finest details, but the advantage comes at least in part from:
- the difference between interpretation and direct execution overhead;
- loop unrolling (which reduces RISC-V "control flow"; it is unclear whether Solidity can achieve this, and even a single opcode still generates a large amount of control flow/memory access due to interpretation overhead);
- the use of smaller data types.
Here I would like to point out that in order to realize the benefits of points 1 and 2, the "interpretation overhead" must be eliminated. That is in line with the philosophy of RISC-V, but it is not the RISC-V we are currently discussing — it is a somewhat-RISC-V-like "(?)RISC-V" that needs certain additional capabilities, such as support for the concept of contracts.
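A toy example of what that "interpretation overhead" looks like (opcode values are illustrative): even a single EVM ADD forces the interpreter to fetch, decode, branch, and touch an in-memory stack, and all of that becomes provable RISC-V control flow, whereas directly executed code does none of it.

```rust
// Sketch of "interpretation overhead": every interpreted opcode drags
// the interpreter's own fetch/decode/dispatch machinery into the trace.

const OP_PUSH1: u8 = 0x60;
const OP_ADD: u8 = 0x01;
const OP_STOP: u8 = 0x00;

fn interpret(code: &[u8]) -> Option<u64> {
    let mut stack: Vec<u64> = Vec::new();
    let mut pc = 0usize;
    loop {
        let op = code[pc]; // memory access: fetch
        pc += 1;
        match op { // data-dependent branch: decode/dispatch
            OP_PUSH1 => {
                stack.push(code[pc] as u64); // two more memory accesses
                pc += 1;
            }
            OP_ADD => {
                let (a, b) = (stack.pop()?, stack.pop()?);
                stack.push(a + b);
            }
            OP_STOP => return stack.pop(),
            _ => return None,
        }
    }
}

// The same computation compiled to straight-line code: no fetch, no
// dispatch, no in-memory stack. This is the gap the Fibonacci example
// measures, and it only disappears if contracts run without an interpreter.
fn compiled() -> u64 {
    3 + 4
}

fn main() {
    assert_eq!(interpret(&[OP_PUSH1, 3, OP_PUSH1, 4, OP_ADD, OP_STOP]), Some(7));
    assert_eq!(compiled(), 7);
}
```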
Here comes the problem
So, there are some problems here.
- To improve maintainability, you need RISC-V (with precompiles) to which the EVM is compiled — and that's pretty much it.
- To improve scalability, you need something completely different: a new, perhaps RISC-V-like architecture that understands the concept of a "contract", is compatible with Ethereum's runtime limitations, and can execute contract code directly (without "interpretation overhead").
I'll now assume you mean the second case (the rest of the article seems to imply it). Let me draw your attention to the fact that all code outside of this environment will still be written for the current RISC-V zkVMs, which has significant maintainability implications.
As an alternative
We could compile from high-level EVM opcodes down to this architecture's bytecode. The compiler would be responsible for preserving the invariants that hold in a normal EVM — for example, that the stack never overflows. A SNARK proving that the compilation was done correctly could then be supplied with the contract deployment instruction.
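For illustration, here is a toy deploy-time check of one such invariant — bounding stack depth for jump-free code. A real compiler would prove this per basic block and compose it with control-flow analysis; the stack limit follows the EVM, everything else is a simplification of my own.

```rust
// Sketch: a deploy-time check that a (jump-free) code sequence can never
// overflow or underflow the EVM's 1024-slot stack. The SNARK mentioned
// above would then attest that this check passed for the deployed code.

const STACK_LIMIT: isize = 1024;

/// (pops, pushes) per opcode; illustrative subset of the EVM table.
fn stack_effect(op: u8) -> Option<(isize, isize)> {
    match op {
        0x01 => Some((2, 1)), // ADD
        0x50 => Some((1, 0)), // POP
        0x60 => Some((0, 1)), // PUSH1 (immediate byte skipped below)
        0x80 => Some((1, 2)), // DUP1
        _ => None,            // unknown opcode: reject
    }
}

fn validate_stack_bounds(code: &[u8]) -> bool {
    let mut depth: isize = 0;
    let mut pc = 0;
    while pc < code.len() {
        let op = code[pc];
        let Some((pops, pushes)) = stack_effect(op) else { return false };
        depth -= pops;
        if depth < 0 { return false; } // underflow
        depth += pushes;
        if depth > STACK_LIMIT { return false; } // overflow
        pc += if op == 0x60 { 2 } else { 1 }; // skip PUSH1 immediate
    }
    true
}
```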
We could also construct a "formal proof" that certain invariants are preserved. As far as I can tell, this approach (as opposed to "virtualization") has been used in some browser contexts. By generating a SNARK of such a formal proof, you can achieve a similar result.
Of course, the easiest option is to bite the bullet and...
Build a minimal "on-chain" MMU
You may have implied this in the article, but let me warn you: if you want to eliminate virtualization overhead, you have to execute the compiled code directly — which means you need to somehow prevent a contract (which is now a native executable) from writing to the kernel's memory (the kernel here being the EVM implementation).
Therefore, we need some kind of "memory management unit" (MMU). The paging mechanism of traditional computers is probably unnecessary, because the "physical" memory space is nearly infinite. The MMU should be as lean as possible (because it sits at the same level of abstraction as the architecture itself), but some features, such as transaction atomicity, could be moved into this layer.
At this point, the provable EVM becomes the kernel program running on this architecture.
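A minimal sketch of what such an MMU could look like (the memory layout and constants are purely illustrative): one range check per store, no paging, with each contract confined to its own segment — which is also a natural place to hang transaction atomicity.

```rust
// Sketch of a minimal MMU: a single permission check on every store.
// No paging, since the "physical" address space is flat and effectively
// unbounded. Layout and constants are illustrative assumptions.

const KERNEL_TOP: u64 = 0x1000_0000; // the kernel (provable EVM) lives below

struct Mmu {
    /// Base of the currently executing contract's segment; giving each
    /// contract a disjoint segment is also where transaction atomicity
    /// (journaling/rollback) could be handled.
    segment_base: u64,
    segment_len: u64,
}

impl Mmu {
    /// A contract store is allowed only inside its own segment, so
    /// directly executed contract code can never write into kernel
    /// memory. In a proof system this is one range check per access.
    fn check_store(&self, addr: u64) -> Result<(), &'static str> {
        if addr < KERNEL_TOP {
            return Err("fault: store into kernel memory");
        }
        if addr < self.segment_base || addr >= self.segment_base + self.segment_len {
            return Err("fault: store outside contract segment");
        }
        Ok(())
    }
}
```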
RISC-V may not be the best option
Interestingly, under all of these constraints, the best "instruction set architecture" (ISA) for the task may not be RISC-V but something closer to EOF-EVM, for the following reasons:
- "Small" opcodes actually generate a lot of memory accesses, which existing proving methods find difficult to handle efficiently.
- Branching overhead can be reduced: in our recent paper, Morgana, we showed how to prove code with "static jumps" (EOF) at precompile-level performance (see the sketch after this list).
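To illustrate the second point, here is a toy deploy-time validator for static jumps (the encoding loosely follows EOF's RJUMP, but the details are my own simplification): because every jump target is an immediate operand, the full control-flow graph is known before execution, and the prover never has to handle a data-dependent jump destination.

```rust
// Sketch: why "static jumps" (EOF-style) are proof-friendly. Every jump
// target is an immediate, so control flow can be validated once at
// deploy time. Encodings are illustrative, not the exact EOF spec.

const OP_RJUMP: u8 = 0xE0; // relative jump with a 2-byte signed immediate

/// Deploy-time validation: every RJUMP must land on an instruction
/// boundary inside the code. Once this passes, control flow is fully
/// static for the prover.
fn validate_static_jumps(code: &[u8]) -> bool {
    let mut boundaries = vec![false; code.len()];
    let mut jumps: Vec<(usize, i16)> = Vec::new(); // (pc, offset)
    let mut pc = 0usize;
    while pc < code.len() {
        boundaries[pc] = true;
        if code[pc] == OP_RJUMP {
            if pc + 3 > code.len() {
                return false; // truncated immediate
            }
            jumps.push((pc, i16::from_be_bytes([code[pc + 1], code[pc + 2]])));
            pc += 3; // opcode + immediate
        } else {
            pc += 1;
        }
    }
    // Offsets are relative to the end of the RJUMP instruction.
    jumps.iter().all(|&(pc, off)| {
        let target = pc as isize + 3 + off as isize;
        (0..code.len() as isize).contains(&target) && boundaries[target as usize]
    })
}
```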
My suggestion is to build a new proof-friendly architecture with a minimal MMU that lets contracts run as separate executables. I don't think it should be RISC-V, but rather a new ISA optimized for the constraints of SNARK protocols — one that partially inherits a subset of EVM opcodes might even be better. As we know, precompiles are here to stay whether we like it or not, so RISC-V brings no simplification here.