In the second half of 2017, the ICO boom catapulted Ethereum’s popularity. Infura had been in production for over a year and we felt confident about the state of our infrastructure and our ability to scale it. As token sales hammered the Ethereum mainnet, and more Web3 neophytes installed wallets like MetaMask to explore and interact with this new ecosystem, Infura’s architecture was truly tested and we encountered new opportunities to improve and optimize our service.
Overnight, our small team organized an on-call rotation so we could maintain Infura’s availability and serve our growing user base 24 hours a day, 7 days a week. It was an exciting moment in Ethereum’s history and it was exhilarating to be a part of it. As the market continued to mature, we entered a slower period that some have referred to as “Crypto Winter.” This quieter era gave us the time and space to step back, evaluate the state of our infrastructure, and pour our collective energy into engineering improvements to our service. Addressing eth_call traffic was one of the first places to start.
The Importance of eth_call
Ethereum smart contracts are just pieces of code waiting to be executed by the Ethereum Virtual Machine (EVM). To interact with a smart contract, your application will do one of two things: it will either send an eth_call or send a transaction to the contract. The state of the Ethereum network changes as each block of transactions is added to the chain.
If you are trading on Uniswap, or interacting with other DeFi applications, you will be reading data from the contract based on the current state of the network before sending a transaction that modifies that state. An analogy would be making sure you know the balance of your bank account before sending someone a check. In technical terms, it means you need an Ethereum node with the full chain downloaded and up-to-date before you read data from the smart contract.
That gives you the data required to prepare the transaction you send to the network, which will mint tokens, swap tokens, open a new position, or invoke any of the other mutable methods a smart contract allows. Each of those eth_call reads executes by taking the parameters of the request and evaluating them within an execution environment (the EVM with the latest network state). What is returned is the calculated output of the EVM.
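To make the read path concrete, here is a minimal sketch of what an eth_call request body looks like for a common read, an ERC-20 balanceOf lookup. The token and holder addresses are hypothetical placeholders, and actually POSTing the payload to a node’s JSON-RPC endpoint is left out:

```python
import json

def encode_balance_of(holder: str) -> str:
    """ABI-encode a call to the ERC-20 balanceOf(address) method.

    0x70a08231 is the 4-byte selector for balanceOf(address); the single
    address argument is left-padded to 32 bytes.
    """
    arg = holder.lower()
    if arg.startswith("0x"):
        arg = arg[2:]
    return "0x70a08231" + arg.rjust(64, "0")

def build_eth_call(token: str, holder: str) -> dict:
    """Build a JSON-RPC eth_call payload against `token`.

    "latest" pins the read to the most recent network state, which is
    exactly why the node must be fully synced before the result is useful.
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [{"to": token, "data": encode_balance_of(holder)}, "latest"],
    }

# Hypothetical token and holder addresses, for illustration only.
payload = build_eth_call(token="0x" + "11" * 20, holder="0x" + "22" * 20)
print(json.dumps(payload))
```

The node evaluates the call data inside the EVM against current state and returns the computed output, without any transaction being mined.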
By Crypto Winter, we were handling billions of API requests, and about 300MM of those were eth_calls. Eth_call is a complex API request, and serving all of that traffic with zero downtime was a great accomplishment, but we knew there were areas where we had to improve. Our alerting systems flagged spikes in traffic that resulted in higher latencies or occasional error-rate anomalies. We knew we could do better, and that to prepare for future traffic spikes we would need to revamp our systems. Over the next 18 months we had the opportunity to develop those improvements, with hindsight and battle scars informing our decisions. We started by revisiting the monitoring and observability of our platform. Did we have enough visibility into potential bottlenecks?
The most common way to scale API request traffic is to cache. You would put a server-side cache or CDN in front of your origin servers to protect your infrastructure. The cache would serve these common requests and shield your origin servers from the bulk of the traffic. If I ask for Resource A and a million other people ask for Resource A, the ideal scenario is that the cache loads Resource A from the origin once and the remaining 999,999 requests get served from the cache. The difference with serving blockchain data is that there is less cache commonality between similar requests. A million users interacting with the same smart contract may get a million different results.
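A toy sketch can illustrate why cache commonality is so poor for eth_call traffic. This is not Infura’s actual cache design; it only shows that a correct cache key must include both the call parameters and the chain state, and how each of those defeats sharing:

```python
import hashlib
import json

def cache_key(method: str, params: list, block_number: int) -> str:
    """Derive a cache key for a JSON-RPC read.

    The result of an eth_call depends on the call parameters AND the
    chain state, so the block number must be part of the key: a key
    that omitted it would serve stale results as soon as a new block
    arrived (roughly every ~13 seconds on Ethereum).
    """
    raw = json.dumps([method, params, block_number], sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()

# Two users reading their own balance from the same contract produce
# different call data, hence different keys: no commonality to exploit.
k1 = cache_key("eth_call", [{"to": "0xToken", "data": "0x70a08231" + "11" * 32}], 11_000_000)
k2 = cache_key("eth_call", [{"to": "0xToken", "data": "0x70a08231" + "22" * 32}], 11_000_000)

# And the same user's read stops being cacheable when the chain advances.
k3 = cache_key("eth_call", [{"to": "0xToken", "data": "0x70a08231" + "11" * 32}], 11_000_001)

print(k1 == k2, k1 == k3)  # both False: a cache miss in either case
```

Per-user parameters fragment the key space, and block production caps any single entry’s useful lifetime, which is why a plain CDN-style cache shields far less origin traffic here than it would for ordinary web content.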
Battle-Test: Running eth_calls Locally vs Infura’s Ethereum API
Two billion eth_calls per day averages out to about 23k requests per second, but in the last month we’ve observed spikes of up to 30k requests per second! Ethereum nodes are busy machines. They are doing a lot of things at the same time: validating blocks, storing and reading data from the local disk, keeping track of pending transactions from the peer-to-peer network, and handling RPC API requests from your applications. The good news is that the teams behind Ethereum clients like Geth, Besu, and OpenEthereum have made them very performant.
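The headline average works out as simple back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope: two billion eth_calls per day, spread evenly.
calls_per_day = 2_000_000_000
seconds_per_day = 24 * 60 * 60          # 86,400

average_rps = calls_per_day / seconds_per_day
print(f"{average_rps:,.0f} requests/second on average")  # ~23,148
```

Real traffic is not spread evenly, of course, which is why the observed peaks sit well above the average.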
Using two of the open-source tools developed by the Infura team, Versus and Ethspam, we ran a load test against a local Ethereum node to establish how many eth_call requests a local node can handle. The test revealed that the local node was able to handle a simulated eth_call load of up to 5,249 requests per second:
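In that setup, a generator emits a stream of realistic JSON-RPC request bodies and a replay tool fires them at the endpoint. A toy generator in the spirit of Ethspam (not its actual implementation; the methods, weights, and fixed seed here are illustrative) might look like this, with a fixed seed so every run replays an identical workload:

```python
import json
import random

# Illustrative method mix, weighted toward what a busy node actually sees.
METHODS = [
    ("eth_call", 50),
    ("eth_getBalance", 20),
    ("eth_blockNumber", 20),
    ("eth_getTransactionReceipt", 10),
]

def random_address(rng: random.Random) -> str:
    return "0x" + "".join(rng.choice("0123456789abcdef") for _ in range(40))

def make_request(rng: random.Random, request_id: int) -> dict:
    """Emit one randomized-but-valid JSON-RPC request body."""
    method = rng.choices([m for m, _ in METHODS],
                         weights=[w for _, w in METHODS])[0]
    if method == "eth_call":
        params = [{"to": random_address(rng), "data": "0x"}, "latest"]
    elif method == "eth_getBalance":
        params = [random_address(rng), "latest"]
    elif method == "eth_getTransactionReceipt":
        params = ["0x" + "00" * 32]
    else:
        params = []
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# A fixed seed yields the same 5,000 requests on every run, so separate
# test runs measure the node, not a differing workload.
rng = random.Random(42)
requests = [make_request(rng, i) for i in range(5000)]
print(json.dumps(requests[0]))
```

Holding the workload constant like this is what makes run-to-run throughput numbers comparable.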
This is no small feat! An eth_call request is more compute-heavy than a traditional API request, so performance can vary depending on what else is going on inside the node at that point in time. Take a look at a sample of profiling data showing exactly what a node is doing during a load test like this:
Because there is so much going on, the request throughput a node is able to deliver tends to vary. We loaded the test parameters from the same file to ensure that the same 5,000 requests were sent to the node on each run. Even so, performance varied by as much as 22.2%! Below are the results from two additional runs, performed within seconds of each other:
If you were wondering how many requests an application like Uniswap makes in the course of a trade, say ETH to an ERC-20 token, inspecting the network traffic shows that the MetaMask Web3 browser makes about 24 requests while presenting a transaction estimate to the user.
By contrast, during the activity peak in 2017, Infura was processing over 300MM eth_calls per day. Since that peak, we’ve scaled Infura’s architecture to over two billion eth_calls per day during the recent DeFi activity spike of 2020 - that’s nearly a 7x increase in volume! Our increased ability to handle such high volumes comes down, in large part, to our approach to scaling methods like eth_call. Back in 2017, scaling was high-touch and manual. Today, Infura’s architecture dynamically scales to support our APIs. The process is seamless and fully automated, which means developers can expect reliability and stability from their Infura service, even during unprecedented network or usage spikes.
We feel a deep responsibility to give developers peace of mind so they can focus less on infrastructure and more on building great user experiences. If you’re interested in learning more about our architecture, you might find this post very helpful.
Yours in Infra,
Want more insight from the Infura engineering team? Subscribe to the Infura newsletter and never miss a post. As always, if you have questions or feature requests, you can join our community or reach out to us directly.