Load Network Releases: Understanding Our Testnets
- Designed for user/dev exploration and testing.
- Features a low frequency of breaking changes.
- Provides a more reliable environment for developers and users to interact with Load Network.
- Acts as a testing ground for the Alphanets.

- Characterized by frequent breaking changes and potential instability.
- A playground for testing new features, EIPs, and experimental concepts.
Defining Load Network
Load Network is a high-performance blockchain for onchain data storage - cheaply and verifiably store and access any data.
As a high-performance, data-centric EVM network, Load Network maximizes scale and transparency for L1s, L2s and data-intensive dApps. Load Network is dedicated to solving the problem of onchain data storage. It offloads storage to Arweave and is secured by EigenLayer and Ethereum, giving any other chain a way to easily plug in a robust permanent storage layer powered by a hyperscalable network of EVM nodes with bleeding-edge throughput capacity.
Before March 2025, Load Network (abbreviations: LOAD or LN) was named WeaveVM Network (WVM). All existing references to WeaveVM (naming, links, etc.) in the documentation should be treated as Load Network.
A list of Load Network Alphanet Releases
Load Network is a high performance blockchain for data storage - cheaply and verifiably store, access, and compute with any data.
Load Network has not issued a token yet. It is currently running on testnet.
Get set up with the onchain data center
Let's make it easy to get going with Load Network. In this doc, we'll go through the simplest ways to use Load across the most common use cases:
The easiest way to upload data to Load Network is to use a bundling service. Bundling services cover upload costs on your behalf, and feel just like using a web2 API.
The recommended testnet bundling service endpoints are:
The above example demonstrates posting data in a single Load Network base layer transaction. This is limited by Load's block size, so it tops out at about 8 MB.
For practically unlimited upload sizes, you can use the large bundles spec to submit data in chunks. Chunks can even be uploaded in parallel, making large bundles a performant way to handle big uploads.
Chains like Avalanche, Metis and RSS3 use Load Network as a decentralized archive node. This works by feeding all new and historical blocks to an archiving service you can run yourself, pointed to your network's RPC.
With 125 MB/s data throughput and long-term data guarantees, Load Network can handle DA for every known L2, with 99.8% room to spare.
Right now there are 4 ways you can integrate Load Network for DA:
DIY
If your data is already on another storage layer like IPFS, Filecoin, Swarm or AWS S3, you can use specialized importer tools to migrate.
The load-lassie import tool is the recommended way to easily migrate data stored via Filecoin or IPFS.
Just provide the CID you want to import to the API, e.g.:
https://lassie.load.rs/import/<CID>
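As a quick illustration, the import URL can be composed programmatically before issuing the request. This is a minimal Python sketch; the CID is a placeholder, and only the `lassie.load.rs/import/` path shown above comes from the docs:

```python
# Sketch: build a load-lassie import URL for a given Filecoin/IPFS CID.
from urllib.parse import quote

LASSIE_ENDPOINT = "https://lassie.load.rs/import/"

def build_import_url(cid: str) -> str:
    """Return the load-lassie import URL for the given CID."""
    return LASSIE_ENDPOINT + quote(cid)

# Placeholder CID for illustration only:
url = build_import_url("bafybeigdyrzt5example")
print(url)  # https://lassie.load.rs/import/bafybeigdyrzt5example
```

Issuing a GET request against the resulting URL triggers the import.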
Switching from Swarm to Load is as simple as changing the gateway you already use to resolve content from Swarm.
The first time Load's Swarm gateway sees a new hash, it uploads it to Load Network and serves it directly for subsequent calls. This effectively makes your Swarm data permanent on Load while maintaining the same hash.
Exploring Load Network key features
Let's explore the key features of Load Network:
Load Network achieves enterprise-like performance by limiting block production to beefy hardware nodes while maintaining trustless and decentralized block validation.
Block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented.
These "super nodes" producing Load Network blocks result in a high-performance EVM network.
Raising the gas limit increases the block size and operations per block, affecting both history growth and state growth (the part most relevant here).
Load Network Alphanet V2 (formerly WeaveVM V2) raised the gas limit to 500M gas (500 mgas/s) and lowered the gas per non-zero byte to 8. These changes result in a larger max theoretical block size of 62 MB and, consequently, network data throughput of ~62 MBps.
This high data throughput can be handled thanks to the approach of beefy block production by super nodes and hardware acceleration.
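The throughput figures above follow directly from the block parameters. A back-of-the-envelope check (ignoring per-transaction base costs, so this is an upper bound):

```python
# Sanity check of the Alphanet V2 throughput figures.
gas_limit = 500_000_000   # gas per block
gas_per_byte = 8          # gas per non-zero calldata byte
block_time = 1            # seconds per block

# Max calldata one block can carry if it were all non-zero bytes
max_block_bytes = gas_limit // gas_per_byte
max_block_mb = max_block_bytes / 1_000_000

throughput_mb_per_s = max_block_mb / block_time
print(f"max block size ~{max_block_mb:.1f} MB, throughput ~{throughput_mb_per_s:.1f} MB/s")
# ~62.5 MB per block and ~62.5 MB/s, matching the ~62 MBps figure above
```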
Up until now, there's been no real-world, scalable DA layer ready to handle high data throughput with permanent storage. In LOAD Alphanet V2, we've already reached 62 MBps with a projection to hit 125 MBps in mainnet.
To reduce the gas fees consumed by EVM opcode execution, we're aiming to use a parallel execution EVM client for the Reth node in mainnet.
Load Network uses a set of Reth execution extensions (ExExes) to serialize each block in Borsh, then compress it in Brotli before sending it to Arweave. These computations ensure a cost-efficient, permanent history backup on Arweave. This feature is crucial for other L1s/L2s using LOAD for data settling (LOADing).
And we can see that Borsh serialization combined with Brotli compression gives us the most efficient compression ratio in the data serialization-compression process.
Even compared to temporary blob-based solutions, Load Network still offers a significantly cheaper permanent data solution (calldata).
Load Network can be used as either a DA solution or for data settlement (like Ethereum). Since storing data on Load Network is very cheap compared to other EVM solutions, the network can be labeled as an L0 for other L1s or L2s.
Leveraging EigenLayer's AVS technology, Load Network will run several AVSs (primarily LOAD1 and LOAD2) to decentralize access to Load Network data, create interoperability with Ethereum and the wider EigenLayer ecosystem, and offer temporal data storage.
The table below does not include the list of minor releases between major Alphanet releases. For the full changelogs and releases, check them out here:
v0.1.0
v0.2.0
v0.3.0
v0.4.0 (LOAD Inception)
To easily feed Load Network docs to your favourite LLM, access the compressed knowledge (aka LLM.txt) file from Load Network:
(upload)
(retrieve)
Instantiate an uploader using this endpoint and the public testnet API key:
Limits are in place for the public testnet bundler. For production use at scale, we recommend running your own bundling service as explained , or to avoid copy-pasting.
The makes it possible for developers to spin up their own bundling services with support for large bundles.
As well as storing all real-time and historical data, Load Network can be used to reconstruct full chain state, effectively replicating exactly what archive nodes do, but with a decentralized storage layer underneath. Read to learn how.
DIY docs are a work in progress, but the to add support for Load Network in Dymension can be used as a guide to implement Load DA elsewhere.
Work with us to use Load DA for your chain - get onboarded .
The provides a 1:1-compatible development interface for applications using AWS S3 for storage, keeping method names and parameters intact so the only change should be one line: the import.
The importer is also self-hostable and further documented .
before: <hash>
after: <hash>
What this means is that anyone with a sufficient amount of $LOAD tokens meeting the PoS staking threshold, plus the necessary hardware and internet connectivity (super-node, enterprise hardware), can run a node. This approach is inspired by Vitalik Buterin's work in this post.
In the , we show the difference between various compression algorithms applied to a Borsh-serialized empty block (zero transactions) and a JSON-serialized empty block.
LOAD's high-throughput computation, supercharged hardware, and interface with Arweave result in significantly cheaper data settlement costs on Load Network, which include the Arweave fees that cover the archiving costs.
Load Network offers self-DA secured by network economics along with a permanent data archive, secured by .
The LOAD team has developed the first precompiles that achieve a native bidirectional data pipeline with the Arweave network. In other words, with these precompiles (currently supported by Load Network testnet), you can read data from Arweave and send data to Arweave trustlessly and natively from a Solidity smart contract.
Load Network Compatibility with the standards
Load Network EVM is built on top of Reth, making it compatible as a network with existing EVM-based applications. This means you can run your current Ethereum-based projects on LN without significant modifications, leveraging the full potential of the EVM ecosystem.
Load Network EVM doesn't introduce new opcodes or breaking changes to the EVM itself, but it uses ExExes and adds custom precompiles:
gas per byte: 8
gas limit: 500_000_000
block time: 1s
gas/s: 500 mgas/s
data throughput: ~62 MBps
ELI5 Load Network
Load Network mainnet is being built to be the highest-performing EVM blockchain focused on data storage. It has the largest base-layer transaction input size limit (~16 MB), the largest EVM transaction ever (the ~0.5 TB 0xbabe transaction), very high network data throughput (multi-gigagas per second), high TPS, decentralization, a full data storage stack (permanent and temporal), decentralized data gateways, and data bundlers.
Load Network achieves high decentralization by using Arweave as a decentralized hard drive, EigenLayer as a decentralized cloud stack, and by allowing open network participation (nodes). Load Network will offer both permanent and temporal data storage while maintaining decentralized and censorship-resistant data retrieval & ingress (gateways, bundling services, etc.).
Load Network offers scalable and cost-effective storage by using Arweave as a decentralized hard drive, and EigenLayer as a decentralized cloud. This makes it possible to store large data sets and run web2-like applications without incurring EVM storage fees.
Load Network is an EVM-compatible blockchain, so rollups can be deployed on LN just as they are on Ethereum. In contrast to Ethereum or other EVM L1s, rollups deployed on top of LN benefit out of the box from the data-centric features LN provides (for rollup data settlement and DA).
Rollups deployed on Load Network use the native LN gas token (tLOAD on Alphanet), similar to how ETH is used for OP rollups on Ethereum.
Load Network is being built to serve as an onchain data center, offering a full stack of solutions for decentralized storage and data management:
Permanent Data Storage: Through LN native integration with Arweave via ExExes and precompiles.
Temporal Data Storage: Via Load Network LOAD2 AVS
Decentralized Infrastructure: Including gateways and a bundlers network functioning as an EigenLayer AVS (LOAD1)
LOAD1 AVS, powered by EigenLayer, will decentralize access to Load Network Bundlers built on top of LN Bundler's data protocol network (already live and permissionless as of Alphanet v0.4.0). Decentralizing 0xbabe2 data broadcasting and retrieval via LOAD1 will make the largest EVM transaction (0xbabe2) more widely available across the Ethereum & EigenLayer ecosystems, giving access to bundles up to 492 GB in size.
Useful Links
Using Load Network's 0xbabe2 transaction format for large data uploads - the largest EVM transaction in history
In simple terms, a Large Bundle consists of n smaller chunks (standalone bundles) that are sequentially connected tail-to-head; the Large Bundle itself is a reference to all of these chunks, packing their IDs into a single 0xbabe2 bundle that is sent to Load Network.
Broadcasting an 0xbabe2 to Load Network can be done via the Bundler Rust SDK in two ways: normal 0xbabe2 broadcasting (single-wallet, single-threaded) or the multi-wallet, multi-threaded method (using SuperAccount).
Uploading data via the single-threaded method is efficient when the data isn't very large; otherwise, it incurs very high latency to finish all data chunking and bundle finalization:
Multi-Threaded Broadcasting
Multi-threaded 0xbabe2 broadcasting is done via a multi-wallet architecture that ensures parallel chunk settlement on Load Network, maximizing the usage of the network's data throughput. To broadcast a bundle using the multi-threaded method, you need to initiate a SuperAccount instance and fund the Chunkers:
A Super Account is a set of wallets created and stored as keystore wallets locally under your chosen directory. In Bundler terminology, each wallet is called a "chunker". Chunkers optimize the DevX of uploading Large Bundle's chunks to LN by allocating each chunk to a chunker (~4MB per chunker), moving from a single-wallet single-threaded design in data uploads to a multi-wallet multi-threaded design.
0xbabe2 transaction data retrieval can be done either using the Rust SDK or the REST API. Using the REST API to resolve a Large Bundle (chunk reconstruction until reaching the final data) is faster for user-facing usage as it streams chunks, resulting in near-instant data usability (e.g., rendering in the browser).
Rust SDK
REST API
| Model | Copies per 492 GB bundle |
| --- | --- |
| Claude 3 Haiku (70B params) | 3.51 models (16-bit) or 14.06 models (4-bit) |
| Claude 3 Sonnet (175B params) | 1.41 models (16-bit) or 5.62 models (4-bit) |
| Claude 3 Opus (350B params) | 0.70 models (16-bit) or 2.81 models (4-bit) |
| Claude 3.5 Sonnet (250B params) | 0.98 models (16-bit) or 3.94 models (4-bit) |
| Claude 3.7 Sonnet (300B params) | 0.82 models (16-bit) or 3.28 models (4-bit) |
| GPT-4o (1500B params est.) | 0.16 models (16-bit) or 0.66 models (4-bit) |
| GPT-4 Turbo (1100B params est.) | 0.22 models (16-bit) or 0.89 models (4-bit) |
| Llama 3 70B | 3.51 models (16-bit) or 14.06 models (4-bit) |
| Llama 3 405B | 0.61 models (16-bit) or 2.43 models (4-bit) |
| Gemini Pro (220B params est.) | 1.12 models (16-bit) or 4.47 models (4-bit) |
| Gemini Ultra (750B params est.) | 0.33 models (16-bit) or 1.31 models (4-bit) |
| Mistral Large (123B params est.) | 2.00 models (16-bit) or 8.00 models (4-bit) |
| Data set | Capacity per 492 GB bundle |
| --- | --- |
| Solana's State Snapshot (~70 GB) | ~7 instances |
| Bitcoin Full Ledger (~625 GB) | ~78% of the ledger |
| Ethereum Full Ledger (~1250 GB) | ~40% of the ledger |
| Ethereum blobs (~2.64 GB per day) | ~186 days worth of blob data |
| Celestia's max throughput per day (112.5 GB) | 4.37× capacity |
| MP3 Songs (4 MB each) | 123,000 songs |
| Full HD Movies (5 GB each) | 98 movies |
| 4K Video Footage (2 GB per hour) | 246 hours |
| High-Resolution Photos (3 MB each) | 164,000 photos |
| Ebooks (5 MB each) | 100,000 books |
| Documents/Presentations (1 MB each) | 492,000 files |
| Database Records (5 KB per record) | 98 billion records |
| Virtual Machine Images (8 GB each) | 61 VMs |
| Docker container images (500 MB each) | 1,007 containers |
| Genome sequences (4 GB each) | 123 genomes |
The Load Network Gateway Stack: Fast, Reliable Access to Load Network Data (to be decentralized with LOAD1)
All storage chains have the same issue: even if the data storage is decentralized, retrieval is handled by a centralized gateway. A solution to this problem is just to provide a way for anyone to easily run their own gateway – and if you’re an application building on Load Network, that’s a great way to ensure content is rapidly retrievable from the blockchain.
The LN Gateway Stack introduces a powerful new way to access data from Load Network bundles, combining high performance with network resilience. At its core, it’s designed to make bundle data instantly accessible while contributing to the overall health and decentralization of the LN.
The gateway stack solves several critical needs in the LN ecosystem:
Rapid data retrieval
Through local caching with SQLite, the gateway dramatically reduces load times (4-5x) for frequently accessed bundled data. No more waiting for remote data fetches – popular content is served instantly from the gateway node.
Network health
By making it easy to run your own gateway, the stack promotes a more decentralized network. Each gateway instance contributes to network redundancy, ensuring data remains accessible even if some nodes go offline.
Running your own LN gateway is pretty straightforward. The gateway stack is designed for easy deployment, directly to your server or inside a Docker container.
With Docker, you can have a gateway up and running in minutes:
Under the hood, the gateway stack features:
SQLite-backed persistent cache
Content-aware caching with automatic MIME type detection
Configurable cache sizes and retention policies
Application-specific cache management
Automatic cache cleanup based on age and size limits
Health monitoring and statistics
The gateway exposes a simple API for accessing bundle data:
GET /bundle/:txHash/:index
This endpoint handles the job of data retrieval, caching, and content-type detection behind the scenes.
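A minimal client sketch against that endpoint might look like the following. The gateway host (`localhost:3000`) is a placeholder for wherever you deployed your own gateway; only the `/bundle/:txHash/:index` path comes from the docs:

```python
# Sketch: fetch one item of a bundle from a self-hosted LN gateway.
import urllib.request

GATEWAY = "http://localhost:3000"  # hypothetical local deployment

def bundle_item_url(tx_hash: str, index: int) -> str:
    """Build the GET /bundle/:txHash/:index URL."""
    return f"{GATEWAY}/bundle/{tx_hash}/{index}"

def fetch_bundle_item(tx_hash: str, index: int) -> bytes:
    """Retrieve the raw item bytes; caching and content-type
    detection are handled server-side by the gateway."""
    with urllib.request.urlopen(bundle_item_url(tx_hash, index)) as resp:
        return resp.read()
```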
The Load Network gateway stack was built in response to problems of scale – great problems to have as a new network gaining traction. LN bundle data is now more accessible, resilient and performant. By running a gateway, you’re not just improving your own access to LN data – you’re contributing to a more robust, decentralized network.
And for a completely decentralized and robust network of Load Network gateways and data bundlers, we will build the LOAD1 AVS to secure LN's cloud infrastructure, decentralize it, and add financial incentives to participate in the network.
Test the gateways:
Load is a high-performance blockchain built towards the goal of solving the EVM storage dilemma with and . It gives the coming generation of high-performance chains a place to settle and store onchain data, without worrying about cost, availability, or permanence.
Load Network offers scalable and cost-effective permanent storage by using Arweave as a decentralized hard drive, both at the node and smart contract layer, and EigenLayer to decentralize the stack, offering temporal storage and decentralizing LN's cloud infrastructure and data access. This makes it possible to store large data sets and run web2-like applications without incurring EVM storage fees.
Load Network's storage as calldata
Chains like Metis, RSS3 and Dymension use Load Network to permanently store onchain data, acting as a decentralized archival node. If you look at the common problems that are flagged up on , a lot of it has to do with centralized sources of truth and data that can’t be independently audited or reconstructed in a case where there’s a failure in the chain. LN adds a layer of protection and transparency to L2s, ruling out some of the failure modes of centralization. Learn more about the .
Load Network can plug in to a typical EVM L2's stack as a DA layer that's 10-15x cheaper than solutions like , and guarantees data permanence on Arweave. LN was built to handle DA for the coming generation of supercharged rollups. With a throughput of ~62MB/s, it could handle DA for and still have 99%+ capacity left over.
You can check out the custom to make use of LOAD-DA in any Reth node in less than 80 LoCs, also the to use EigenDA's data availability along with Load Network securing its archiving.
We have developed the first-ever Reth precompiles to facilitate, natively, a from the smart contract API level. Check out the full list of LN precompiled contracts .
For example, we released a technical guide for developers interested in deploying OP-Stack rollups on LN. .
— Dropbox onchain alternative
— Onchain Instagram
— onchain publishing toolkit
— import Swarm data to Load Network
— Filecoin/IPFS data importer to Load Network
— IPFS pinning service with Load Network permanent storage sidecar
0xbabe2 is the newest data transaction format from the Bundler data protocol. Also called a "Large Bundle," it's a bundle under version 0xbabe2 (address: ) that exceeds the Load Network L1 and 0xbabe1 transaction input size limits, introducing incredibly high size efficiency to data storage on Load Network.
For example, with Alphanet v0.4.0 metrics running at 500 mgas/s, a Large Bundle has a max size of 246 GB. However, to ensure a smooth DevX and optimal finalization period (aka "safe mode"), we have limited the 0xbabe2 transaction input to 2 GB at the SDK level. If you want higher limits, you can achieve this by changing a simple constant!
If you have 10 hours to spare, make several teas and watch this 1 GB video streamed to you onchain from the Load Network! 0xbabe2 txid:
To dive deeper into the architecture design behind 0xbabe2 and how it works, check out the 0xbabe2 section in the .
RPC URL:
Alphanet Faucet:
Explorer:
Chainlist:
When – a photo sharing dApp that uses LN bundles for storage – started getting traction, the default LN gateway became a bottleneck for the Relic team. The way data is stored inside bundles (hex-encoded, serialized, compressed) can make it resource-intensive to decode and present media on demand, especially when thousands of users are doing so in parallel.
In response, we developed two new open source gateways: one , and .
For , this slashed feed loading times from 6-8 seconds to near-instant.
For rustaceans, rusty-gateway is deployable on a Rust host like – get the repo and Shuttle deployment docs .
-
About load:// data retrieving protocol
Load Network Data Retriever (load://) is a protocol for retrieving data from the Load Network (EVM). It leverages the LN DA layer and Arweave’s permanent storage to provide trustless access to LN transaction data through both networks, whether that’s data which came from LN itself, or L2 data that was settled to LN.
Many chains solve this problem by providing query interfaces to archival nodes or centralized indexers. For Load Network, Arweave is the archival node, and can be queried without special tooling. However, the data LN stores on Arweave is also encoded, serialized and compressed, making it cumbersome to access. The load:// protocol solves this problem by providing an out-of-the-box way to grab and decode Load Network data while also checking it has been DA-verified.
The data retrieval pipeline ensures that when you request data associated with a Load Network transaction, it passes through at least one DA check (currently through LN's self-DA).
It then retrieves the transaction block from Arweave, published by LN ExExes, decodes the block (decompresses Brotli and deserializes Borsh), and scans the archived sealed block transactions within LN to locate the requested transaction ID, ultimately returning the calldata (input) associated with it.
Currently, the load:// gateway server provides two methods: one for general data retrieval and another specifically for transaction data posted by the load-archiver nodes. To retrieve calldata for any transaction on Load Network, you can use the following command:
The second method is specific to load-archiver nodes because it decompresses the calldata and then deserializes its Borsh encoding according to a predefined structure. This is possible because load-archiver data is known to include an additional layer of Borsh-Brotli encoding before it is settled on LN.
The latency includes the time spent fetching data from LN EVM RPC and the Arweave gateway, as well as the processing time for Brotli decompression, Borsh deserialization, and data validity verification.
The LN Bundler is the fastest, cheapest and most scalable way to store EVM data onchain
Bundler as data protocol and library is still in PoC (Proof of Concept) phase - not recommended for production usage, testing purposes only.
Reduces transaction overhead from n fees for n transactions to a single fee per bundle of envelopes (n transactions)
Enables third-party services to handle bundle settlement on LN (will be decentralized with LOAD1)
Maximizes the TPS capacity of LN without requiring additional protocol changes or constraints
Supports relational data grouping by combining multiple related transactions into a single bundle
Bundler: Refers to the data protocol specification of the EVM bundled transactions on Load Network.
Envelope: A legacy EVM transaction that serves as the fundamental building block and composition unit of a Bundle.
Bundle: An EIP-1559 transaction that groups multiple envelopes (n > 0), enabling efficient transaction batching and processing.
Large Bundle: A transaction that carries multiple bundles.
Bundler Lib: Refers to the Bundler Rust library that facilitates composing and propagating Bundler's bundles.
A bundle is a group of envelopes organized through the following process:
1. Envelopes MUST be grouped in a vector
2. The bundle is Borsh serialized according to the BundleData type
3. The resulting serialization vector is compressed using Brotli compression
4. The Borsh-Brotli serialized-compressed vector is added as input (calldata) to an EIP-1559 transaction
5. The resulting bundle is broadcast on Load Network with target set to 0xbabe addresses based on bundle version.
Bundles versioning is based on the bundles target address:
An envelope is a signed Legacy EVM transaction with the following MUSTs and restrictions.
Transaction Fields

- nonce: MUST be 0
- gas_limit: MUST be 0
- gas_price: MUST be 0
- value: MUST be 0
Size Restrictions
Total Borsh-Brotli compressed envelopes (Bundle data) MUST be under 9 MB
Total tags byte size MUST be <= 2048 bytes before compression.
Signature Requirements
each envelope MUST have a valid signature
Usage Constraints
MUST be used strictly for data settling on Load Network
MUST only contain the envelope's calldata, with optional target setting (default fallback to the ZERO address)
CANNOT be used for:
tLOAD transfers
Contract interactions
Any purpose other than data settling
The selection of transaction types follows clear efficiency principles. Legacy transactions were chosen for envelopes due to their minimal size (144 bytes), making them the most space-efficient option for data storage. EIP-1559 transactions were adopted for bundles as the widely accepted standard for transaction processing.
Envelopes exist as signed Legacy transactions within bundles but operate under distinct processing rules - they are not individually processed by the Load Network as transactions, despite having the structure of a Legacy transaction (signed data with a Transaction type). Instead, they are bundled together and processed as a single onchain transaction (therefore the advantage of Bundler).
Multiple instances of the same envelope within a bundle are permissible and do not invalidate either the bundle or the envelopes themselves. These duplicate instances are treated as copies sharing the same timestamp when found in a single bundle. When appearing across different bundles, they are considered distinct instances with their respective bundle timestamps (valid envelopes and considered as copies of distinct timestamps).
Since envelopes are implemented as signed Legacy transactions, they are strictly reserved for data settling purposes. Their use for any other purpose is explicitly prohibited for the envelope's signer security.
A Super Account is a set of wallets created and stored as keystore wallets locally under your chosen directory. In Bundler terminology, each wallet is called a "chunker". Chunkers optimize the DevX of uploading LB chunks to LN by allocating each chunk to a chunker (~4 MB per chunker), moving from a single-wallet single-threaded design in data uploads to a multi-wallet multi-threaded design.
Large Bundles are built on top of the Bundler data specification. In simple terms, a Large Bundle consists of n smaller chunks (standalone bundles) that are sequentially connected tail-to-head; the Large Bundle itself is a reference to all of these chunks, packing their IDs into a single 0xbabe2 bundle that is sent to Load Network.
Determining Number of Chunks
To store a file of size S (in MB) with a chunk size C, the number of chunks (N) is calculated as:
N = ⌊S/C⌋ + [(S mod C) > 0]
Special case: if S < C then N = 1
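The chunk-count formula can be sketched directly in code. This is a straightforward transcription of the two rules above, not part of the Bundler SDK:

```python
import math

def num_chunks(size_mb: float, chunk_mb: float) -> int:
    """N = floor(S/C) + 1 if (S mod C) > 0, with the special case N = 1 when S < C."""
    if size_mb < chunk_mb:
        return 1  # anything smaller than one chunk still needs one chunk
    n = math.floor(size_mb / chunk_mb)
    if size_mb % chunk_mb > 0:
        n += 1  # a partial remainder occupies one extra chunk
    return n

print(num_chunks(10, 4))  # 3 chunks: 4 + 4 + 2 MB
print(num_chunks(3, 4))   # 1 chunk (special case S < C)
print(num_chunks(8, 4))   # 2 chunks, exact fit
```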
Maximum Theoretical Size
The bundling actor collects all hash receipts of the chunks, orders them in a list, and uploads this list as an LN L1 transaction. The size components of a Large Bundle's reference list are:

- 2 brackets [ ] = 2 bytes
- EVM transaction hash without the "0x" prefix = 64 bytes per hash
- 2 bytes per hash for the comma and space (one less comma at the end, so subtract 2 from the total)
- Size per chunk's hash = 68 bytes

Therefore: Total hash list size = 2 + (N × 68) − 2 = 68N bytes
Maximum Capacity Calculation
- Maximum L1 transaction input size (C_tx) = 4 MB = 4_194_304 bytes
- Maximum number of chunks (Σn) = C_tx ÷ 68 = 4_194_304 ÷ 68 = 61_680 chunks
- Maximum theoretical Large Bundle size (C_max) = Σn × C_tx = 61_680 × 4 MB = 246,720 MB ≈ 246.72 GB
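The capacity arithmetic above can be reproduced in a few lines. This sketch only restates the spec's own constants; it is not SDK code:

```python
# Reproduce the maximum-capacity arithmetic from the spec above.
BYTES_PER_HASH_ENTRY = 68     # per-chunk hash entry size stated in the spec
C_TX = 4 * 1024 * 1024        # max L1 tx input: 4 MB = 4_194_304 bytes
CHUNK_MB = 4                  # each chunk is itself capped at 4 MB

max_chunks = C_TX // BYTES_PER_HASH_ENTRY   # how many hash entries fit in one L1 tx
max_bundle_mb = max_chunks * CHUNK_MB       # total data those chunks can carry

print(max_chunks)     # 61680 chunks
print(max_bundle_mb)  # 246720 MB, i.e. ~246.72 GB
```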
Build an envelope, build a bundle
Example: Build a bundle packed with envelopes
Example: Send tagged envelopes
Example: construct and disperse a Large Bundle single-threaded
Example: construct and disperse a Large Bundle multi-threaded
Example: Retrieve Large Bundle data
from (the envelope's property derived from the sig)

N.B: All of the /v1 methods (0xbabe1) are available under /v2 for 0xbabe2 Large Bundles.
About Load Network Native JSON-RPC methods
eth_getArweaveStorageProof JSON-RPC method

This JSON-RPC method lets you retrieve the Arweave storage proof for a given Load Network block number.

eth_getWvmTransactionByTag JSON-RPC method

For Load Network L1 tagged transactions, eth_getWvmTransactionByTag lets you retrieve a transaction hash for a given name-value tag pair.
About Load Network precompiled contracts
Ethereum uses precompiles to efficiently implement cryptographic primitives within the EVM instead of re-implementing these primitives in Solidity.
The following precompiles are currently included: ecrecover, sha256, blake2f, ripemd-160, Bn256Add, Bn256Mul, Bn256Pairing, the identity function, modular exponentiation, and point evaluation.
Ethereum precompiles behave like smart contracts built into the Ethereum protocol. The ten standard precompiles live at addresses 0x01 to 0x0A. Load Network supports all 10 of them and adds new custom precompiles starting at address 0x17 (decimal 23, the position of the letter "W" in the alphabet).
The LN precompile at address 0x17 (0x0000000000000000000000000000000000000017) enables data upload (in byte format) from Solidity to Arweave, and returns the data TXID (in byte format). In Alphanet V4, data uploads are limited to 100 KB. Future network updates will remove this limitation and introduce a higher data cap.
Solidity code example:
This precompile, at address 0x18 (0x0000000000000000000000000000000000000018), completes the data pipeline between LN and Arweave, making it bidirectional. It enables retrieving data from Arweave, in bytes, for a given Arweave TXID.
The 0x18 precompile lets the user choose their Arweave gateway for resolving a TXID. If no gateway URL is provided, the precompile defaults to arweave.net.
The format of the precompile bytes input (string representation) should be as follows: gateway_url;arweave_txid
Solidity code example:
This precompile, at address 0x20 (0x0000000000000000000000000000000000000020), lets smart contract developers access not only the most recent 256 blocks, but any block's data starting from genesis. To explain how to request block data using the 0x20 precompile, here is a code example:
As you can see, the query variable has three "parameters" separated by a semicolon ";" (gateway;load_block_id;block_field format):
- Load Network's block number to fetch; target block in this example: 141550
- The field of the block struct to access; in this case: hash
Only the gateway is an optional parameter, and regarding the field of the block struct to access, here is the Block struct that the 0x20 precompile uses:
With the 0x21 precompile, KYVE clients will have, for the first time, the ability to fetch their archived blobs from an EVM smart contract layer instead of wrapping the Trustless API in oracles and making expensive calls.
The EIP-4844 transaction fields that you can access from the 0x21 query are:
blob (raw blob data, the body)
kzg_commitment
kzg_proof
slot
Advantages of 0x21 (use cases)
Native access to blob data from smart contract layer
Access to permanently archived blobs
Opens up longer verification windows for rollups using KYVE for archived blobs and LN for settlement layer
Enables using blobs for purposes beyond rollups DA, opening doors for data-intensive blob-based applications with permanent blob access
You can find the proxy server codebase here:
To upload data to Load Network with the alphanet bundling service, see the quickstart docs for the and .
Load Network Bundler is a data protocol specification and library that introduces the first bundled EVM transactions format. This protocol draws inspiration from Arweave's specification.
For the JS/TS version of LN bundles, .
A Large Bundle is a bundle under version 0xbabe2 that exceeds the Load Network L1 and 0xbabe1 transaction size limits, bringing very high size efficiency to data settling on LN. For example, with the network running at 500 mgas/s, a Large Bundle has a maximum size of 246 GB. For the sake of DevX and simplicity of the current 0xbabe2 stack, Large Bundles in the Bundler SDK have been limited to 2 GB, while at the network level the limit is 246 GB.
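Assuming the Large Bundle cap scales linearly with gas throughput (which the quoted figures suggest, though it is not stated as an official formula), the two numbers are consistent:

```python
# Max Large Bundle size at 500 mgas/s, per the docs.
max_size_at_500_mgas = 246  # GB

# Doubling throughput to 1 gigagas/s should double the cap
# under the linear-scaling assumption.
max_size_at_1_gigagas = max_size_at_500_mgas * (1000 / 500)
print(max_size_at_1_gigagas)  # 492.0
```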
For more examples, check the tests in .
Base endpoint:
You can find the proxy server codebase here:
An Arweave gateway (optional and fallback to arweave.net if not provided):
This precompile, at address 0x21 (0x0000000000000000000000000000000000000021), is a unique solution for native access to blob data (not just commitments) from the smart contract layer. The precompile fetches, from the Trustless API, the blob data that KYVE archives for their supported networks.
0x21 lets you fetch KYVE's Ethereum blob data starting at Ethereum's block - the first block with a recorded EIP-4844 transaction. To retrieve a blob from the Trustless API, in the 0x21 staticcall you need to specify the Ethereum block number, blob index in the transaction, and the blob field you want to retrieve, in this format: block_number;blob_index.field
N.B: blob_index represents the blob index in the KYVE’s Trustless API JSON response:
Check out the 0x21 precompile source code .
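The 0x21 query string described above (block_number;blob_index.field) can be assembled like so (illustrative Python; the block number used here is hypothetical):

```python
def build_blob_query(block_number: int, blob_index: int, field: str) -> bytes:
    """Build the 0x21 precompile query: block_number;blob_index.field

    `field` is one of the documented fields: blob, kzg_commitment,
    kzg_proof, slot. `blob_index` is the blob's index in KYVE's
    Trustless API JSON response.
    """
    return f"{block_number};{blob_index}.{field}".encode()

# Hypothetical Ethereum block number, purely for illustration.
print(build_blob_query(19500000, 0, "kzg_commitment"))
```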
| Network throughput | L1 tx size limit | 0xbabe1 size limit | 0xbabe2 Large Bundle max size |
| --- | --- | --- | --- |
| 500 mgas/s (current) | 4MB | 4MB | 246 GB |
| 1 gigagas/s (upcoming) | 8MB | 8MB | 492 GB |
| Network | Data size (bytes) | Transactions | Gas used | Gas price | Cost (native token) | Token price | Total cost |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LN L1 Calldata | 1,000,000 | 1 | 8,500,000 (8M for calldata & 500k as base gas fee) | 1 Gwei | - | - | ~$0.05 |
| Ethereum L1 | 1,000,000 | 41 | 202,835,200 gas | 20 Gwei | 4.056704 | $3641.98 | $14774.43 |
| Polygon Sidechain | 1,000,000 | 41 | 202,835,200 gas | 40 Gwei (L1: 20 Gwei) | 8.113408 | $0.52 | $4.21 |
| BSC L1 | 1,000,000 | 41 | 202,835,200 gas | 5 Gwei | 1.014176 | $717.59 | $727.76 |
| Arbitrum (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.1 Gwei (L1: 20 Gwei) | 0.020284 (+0.128168 L1 fee) | $3641.98 | $540.66 |
| Avalanche L1 | 1,000,000 | 41 | 202,835,200 gas | 25 Gwei | 5.070880 | $43.90 | $222.61 |
| Base (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.000203 (+0.128168 L1 fee) | $3641.98 | $467.52 |
| Optimism (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.000203 (+0.128168 L1 fee) | $3641.98 | $467.52 |
| Blast (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.000203 (+0.128168 L1 fee) | $3641.98 | $467.52 |
| Linea (ZK L2) | 1,000,000 | 41 | 202,835,200 gas (+12,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.010142 (+0.072095 L1 fee) | $3641.98 | $299.50 |
| Scroll (ZK L2) | 1,000,000 | 41 | 202,835,200 gas (+12,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.010142 (+0.072095 L1 fee) | $3641.98 | $299.50 |
| Moonbeam (Polkadot) | 1,000,000 | 41 | 202,835,200 gas | 100 Gwei | 20.283520 | $0.27 | $5.40 |
| Polygon zkEVM (ZK L2) | 1,000,000 | 41 | 202,835,200 gas (+12,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.010142 (+0.072095 L1 fee) | $3641.98 | $299.50 |
| Solana L1 | 1,000,000 | 98 | 490,000 lamports | N/A | 0.000495 (0.000005 deposit) | $217.67 | $0.11 |
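Taking the Ethereum L1 figures as an example, the fee math behind the cost comparison above can be reproduced directly (a sketch using the quoted gas, gas price, and token price):

```python
def tx_cost(gas_used: int, gas_price_gwei: float, token_price_usd: float):
    """Reproduce the comparison's fee math:
    gas * price (Gwei) -> native token cost, then * token price -> USD."""
    cost_native = gas_used * gas_price_gwei * 1e-9  # 1 Gwei = 1e-9 of the native token
    return cost_native, cost_native * token_price_usd

# Ethereum L1 row: 202,835,200 gas at 20 Gwei, ETH at $3641.98.
eth_cost, usd_cost = tx_cost(202_835_200, 20, 3641.98)
print(round(eth_cost, 6), round(usd_cost, 2))  # 4.056704 14774.43
```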
| Network | Data size (bytes) | Transactions | Gas used | Gas price | Cost (native token) | Token price | Total cost |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LN Bundler 0xbabe1 | 5,000,000 | 1 | 40,500,000 (40M for calldata & 500k as base gas fee) | 1 Gwei | - | - | ~$0.25-$0.27 |
| LN L1 Calldata | 5,000,000 | 5 | 42,500,000 (40M for calldata & 2.5M as base gas fee) | 1 Gwei | - | - | ~$0.22 |
| Ethereum L1 | 5,000,000 | 204 | 1,009,228,800 gas | 20 Gwei | 20.184576 | $3650.62 | $73686.22 |
| Polygon Sidechain | 5,000,000 | 204 | 1,009,228,800 gas | 40 Gwei (L1: 20 Gwei) | 40.369152 | $0.52 | $20.95 |
| BSC L1 | 5,000,000 | 204 | 1,009,228,800 gas | 5 Gwei | 5.046144 | $717.75 | $3621.87 |
| Arbitrum (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.1 Gwei (L1: 20 Gwei) | 0.100923 (+0.640836 L1 fee) | $3650.62 | $2707.88 |
| Avalanche L1 | 5,000,000 | 204 | 1,009,228,800 gas | 25 Gwei | 25.230720 | $44.01 | $1110.40 |
| Base (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.001009 (+0.640836 L1 fee) | $3650.62 | $2343.13 |
| Optimism (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.001009 (+0.640836 L1 fee) | $3650.62 | $2343.13 |
| Blast (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.001009 (+0.640836 L1 fee) | $3650.62 | $2343.13 |
| Linea (ZK L2) | 5,000,000 | 204 | 1,009,228,800 gas (+60,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.050461 (+0.360470 L1 fee) | $3650.62 | $1500.16 |
| Scroll (ZK L2) | 5,000,000 | 204 | 1,009,228,800 gas (+60,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.050461 (+0.360470 L1 fee) | $3650.62 | $1500.16 |
| Moonbeam (Polkadot) | 5,000,000 | 204 | 1,009,228,800 gas | 100 Gwei | 100.922880 | $0.27 | $26.94 |
| Polygon zkEVM (ZK L2) | 5,000,000 | 204 | 1,009,228,800 gas (+60,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.050461 (+0.360470 L1 fee) | $3650.62 | $1500.16 |
| Solana L1 | 5,000,000 | 489 | 2,445.00k lamports | N/A | 0.002468 (0.000023 deposit) | $218.44 | $0.54 |
| Metric | Calldata transactions | 0xbabe1 bundles |
| --- | --- | --- |
| Total Data Size | 40 MB | 40 MB |
| Transaction Format | 40 separate EIP-1559 transactions | 5 bundle transactions (8MB each, 40 * 1MB envelopes) |
| Transactions per Bundle | 1 MB each | 8 x 1MB per bundle |
| Gas Cost per Tx | 8.5M gas (8M calldata + 500k base) | 64.5M gas (64M + 500k base) per bundle |
| Number of Base Fees | 40 | 5 |
| Total Gas Used | 340M gas (40 x 8.5M) | 322.5M gas (5 x 64.5M) |
| Gas Price | 1 Gwei | 1 Gwei |
| Total Cost | ~$1.5-1.7 | ~$1.3 |
| Cost Savings | - | ~15% cheaper |
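The gas totals in the comparison above can be reproduced with straightforward arithmetic (illustrative only):

```python
# 40 MB posted as 40 individual 1 MB calldata transactions:
calldata_total = 40 * 8_500_000   # each tx: 8M calldata gas + 500k base fee

# The same 40 MB as 5 bundles of 8 x 1 MB envelopes:
bundle_total = 5 * 64_500_000     # each bundle: 64M calldata gas + 500k base fee

print(calldata_total, bundle_total)   # 340000000 322500000
print(calldata_total - bundle_total)  # 17500000 gas saved (35 fewer base fees)
```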
| Address | Name | Minimum gas | Input | Output | Description |
| --- | --- | --- | --- | --- | --- |
| 0x01 (0x0000000000000000000000000000000000000001) | ecRecover | 3000 | hash, v, r, s | publicAddress | Elliptic curve digital signature algorithm (ECDSA) public key recovery function |
| 0x02 (0x0000000000000000000000000000000000000002) | SHA2-256 | 60 | data | hash | Hash function |
| 0x03 (0x0000000000000000000000000000000000000003) | RIPEMD-160 | 600 | data | hash | Hash function |
| 0x04 (0x0000000000000000000000000000000000000004) | identity | 15 | data | data | Returns the input |
| 0x05 (0x0000000000000000000000000000000000000005) | modexp | 200 | Bsize, Esize, Msize, B, E, M | value | Arbitrary-precision exponentiation under modulo |
| 0x06 (0x0000000000000000000000000000000000000006) | ecAdd | 150 | x1, y1, x2, y2 | x, y | Point addition (ADD) on the elliptic curve alt_bn128 |
| 0x07 (0x0000000000000000000000000000000000000007) | ecMul | 6000 | x1, y1, s | x, y | Scalar multiplication (MUL) on the elliptic curve alt_bn128 |
| 0x08 (0x0000000000000000000000000000000000000008) | ecPairing | 45000 | x1, y1, x2, y2, ..., xk, yk | success | Bilinear function on groups on the elliptic curve alt_bn128 |
| 0x09 (0x0000000000000000000000000000000000000009) | blake2f | 0 | rounds, h, m, t, f | h | Compression function F used in the BLAKE2 cryptographic hashing algorithm |
| 0x0A (0x000000000000000000000000000000000000000A) | point evaluation | 50000 | bytes | bytes | Verify p(z) = y given a commitment that corresponds to the polynomial p(x) and a KZG proof; also verify that the provided commitment matches the provided versioned_hash |
| 0x17 (0x0000000000000000000000000000000000000017) | arweave_upload | 10003 | bytes | bytes | Upload a byte array to Arweave and get back the upload TXID in bytes |
| 0x18 (0x0000000000000000000000000000000000000018) | arweave_read | 10003 | bytes | bytes | Retrieve an Arweave TXID's data in bytes |
| 0x20 (0x0000000000000000000000000000000000000020) | read_block | 10003 | bytes | bytes | Retrieve an LN block's data (from genesis), pulling it from Arweave |
| 0x21 (0x0000000000000000000000000000000000000021) | kyve_trustless_api_blob | 10003 | bytes | bytes | Retrieve historical Ethereum blob data from LN's smart contract layer |
About Load Network LOAD2 EigenLayer AVS (Status: WIP)
The LOAD2 AVS will be Load Network's own data storage network offering temporal data storage, leveraging the EigenLayer AVS stack.
As a temporal data storage network, the LOAD2 AVS will complete Load Network's data storage offering (permanent and temporal data solutions).
The LOAD2 AVS, together with LOAD1, makes up Load Network's decentralized cloud infrastructure. More info about Load Network AVSs on EigenLayer is to come.
| Bundler release | Transaction format |
| --- | --- |
| v0.1.0 | 0xbabe1 |
| v0.2.0 | 0xbabe2 |
Connect any EVM network to Load Network
Load Network Archiver is an ETL archive pipeline for EVM networks. It's the simplest way to interface with LN's permanent data feature without smart contract redeployments.
LN Archiver is the ideal choice if you want to:
Interface with LN's permanent data settlement and high-throughput DA
Maintain your current data settlement or DA architecture
Have an interface with LN without rollup smart contract redeployments
Avoid codebase refactoring
Run An Instance
Reconstructing an EVM network's state using its load-archiver node instance
The World State Trie, also known as the Global State Trie, serves as a cornerstone data structure in Ethereum and other EVM networks. Think of it as a dynamic snapshot that captures the current state of the entire network at any given moment. This sophisticated structure maintains a crucial mapping between account addresses (both externally owned accounts and smart contracts) and their corresponding states.
Each account state in the World State Trie contains several essential pieces of information:
Current balance of the account
Transaction nonce (tracking the number of transactions sent from this account)
Smart contract code (for contract accounts)
Hash of the associated storage trie (linking to the account’s persistent storage)
This structure effectively represents the current status of all assets and relevant information on the EVM network. Each new block contains a reference to the current global state, enabling network nodes to efficiently verify information and validate transactions.
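The account-state record described above can be sketched as a simple data structure (field names are illustrative; the real trie stores RLP-encoded account records, not Python objects):

```python
from dataclasses import dataclass

@dataclass
class AccountState:
    """Simplified sketch of one World State Trie entry."""
    balance: int         # current balance in wei
    nonce: int           # number of transactions sent from this account
    code: bytes          # smart contract bytecode (empty for EOAs)
    storage_root: bytes  # hash of the associated storage trie

# An externally owned account: no code, empty storage.
eoa = AccountState(balance=10**18, nonce=3, code=b"", storage_root=b"\x00" * 32)
print(eoa.nonce, len(eoa.storage_root))  # 3 32
```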
An important distinction exists between the World State Trie database and the Account Storage Trie database. While the World State Trie database maintains immutability and reflects the network’s global state, the Account Storage Trie database remains mutable with each block. This mutability is necessary because transaction execution within each block can modify the values stored in accounts, reflecting changes in account states as the blockchain progresses.
The core focus of this article is demonstrating how Load Network Archivers’ data lakes can be leveraged to reconstruct an EVM network’s World State. We’ve developed a proof-of-concept library in Rust that showcases this capability using a customized Revm wrapper. This library abstracts the complexity of state reconstruction into a simple interface that requires just 10 lines of code to implement.
Here’s how to reconstruct a network’s state using our library:
The reconstruction process follows a straightforward workflow:
The library connects to the specified Load Network Archive network
Historical ledger data is retrieved from the Load Network Archiver data lakes
Retrieved blocks are processed through our custom minimal EVM execution machine
The EVM StateManager applies the blocks sequentially, updating the state accordingly
The final result is a complete reconstruction of the network’s World State
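The workflow can be caricatured with a toy state manager (all names and block fields here are hypothetical; the real library wraps Revm and executes full EVM semantics):

```python
class StateManager:
    """Toy stand-in for the EVM StateManager: applies archived blocks
    sequentially, updating account balances as it goes."""
    def __init__(self):
        self.balances = {}

    def apply_block(self, block):
        for tx in block["txs"]:
            self.balances[tx["from"]] = self.balances.get(tx["from"], 0) - tx["value"]
            self.balances[tx["to"]] = self.balances.get(tx["to"], 0) + tx["value"]

# Stand-in for blocks retrieved from the Load Network Archiver data lakes.
archived_blocks = [
    {"txs": [{"from": "alice", "to": "bob", "value": 5}]},
    {"txs": [{"from": "bob", "to": "carol", "value": 2}]},
]

state = StateManager()
for block in archived_blocks:  # sequential application, as in the workflow above
    state.apply_block(block)

print(state.balances)  # {'alice': -5, 'bob': 3, 'carol': 2}
```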
We built this PoC to showcase what’s possible when you combine permanent storage with proper EVM state handling. Whether you’re analyzing historical network states, debugging complex transactions, or building new tools for chain analysis, the groundwork is now laid.
Guidance on How To Deploy OP-Stack Rollups on Load Network
The goal of optimistic rollups is to increase L1 transaction throughput while reducing transaction costs. For example, when Optimism users sign a transaction and pay the gas fee in ETH, the transaction is first stored in a private mempool before being executed by the sequencer. The sequencer generates blocks of executed transactions every two seconds and periodically batches them as call data submitted to Ethereum. The “optimistic” part comes from assuming transactions are valid unless proven otherwise.
In the case of Load Network, we have modified OP Stack components to use LN as the data availability and settlement layer for L2s deployed using this architecture.
As a result, OP Stack rollups using LN for data settlement and data availability (DA) will benefit from the cost-effective, permanent data storage offered by Load Network and Arweave. Rollups deployed on LN use the native network gas token (tLOAD on Alphanet), similar to how ETH is used for OP rollups on Ethereum.
In this example we will use the following package. First of all, install it:
To run your own node instance of the load-archiver
tool, check out the detailed setup guide on github:
This proof-of-concept implementation is available on GitHub:
The Load Network Archiver has evolved beyond its foundation as a decentralized archive node. This proof of concept demonstrates how our comprehensive data storage enables full EVM network state reconstruction - a capability that opens new possibilities for network analysis, debugging, and state verification.
Before deploying, make sure the Load Network is configured in your MetaMask wallet.
For this example, we will use the ERC20 token template provided by the smart contract library.
The OP Stack is a generalizable framework spawned out of Optimism's efforts to scale the Ethereum L1. It provides the tools for launching a production-quality Optimistic Rollup blockchain with a focus on modularity. Layers like the sequencer, data availability, and execution environment can be swapped out to create novel L2 setups.
We've built on top of the OP Stack to enable the deployment of optimistic rollups using LN as the L1. The key difference between deploying OP rollups on Load Network versus Ethereum is that when you send data batches to LN, your rollup data is also permanently archived on Arweave.
We’ve released a detailed technical guide on GitHub for developers looking to deploy OP rollups on Load Network. Check it out and the
About Load Network LOAD1 EigenLayer AVS (Status: WIP)
The LOAD1 AVS will be built to decentralize the Load Network gateway stack and create a robust network of packed gateway+bundler nodes.
Built using EigenLayer and LN's Bundler data protocol, LOAD1 will bring LN's permanent data to the EigenLayer ecosystem, delivering the first permanent data storage solution with near-instant data availability and incentivized data serving.
LOAD1 operators will offer two main services: data bundling (0xbabe2 storage) and data serving (0xbabe2 optimistic caching and native reconstruction from Load Network).
About Reth Execution Extensions (ExEx)
ExEx is a framework for building performant and complex off-chain infrastructure as post-execution hooks.
In the following pages we will list the ExExes developed and used by Load Network.
Reth ExExes can be used to implement rollups, indexers, MEV bots and more with >10x less code than existing methods. Check out the Reth ExEx announcement by Paradigm.
Plug Load Network high-throughput DA into any Reth node
Adding a DA layer usually requires base-level changes to a network’s architecture. Typically, DA data is posted either by sending calldata to the L1 or through blobs, with the posting done at the sequencer level or by modifying the rollup node’s code.
This ExEx introduces an emerging, non-traditional DA interface for EVM rollups. No changes are required at the sequencer level, and it’s all handled via the ExEx, which is easy to add to any Reth client in just 80 lines of code.
First, you’ll need to add the following environment variables to your Reth instance:
The archiver_pk variable refers to the private key of the LN wallet, which is used to pay gas fees on LN for data posting. The network variable points to the path of your network configuration file used for the ExEx. A typical network configuration file looks like this:
Borsh stands for Binary Object Representation Serializer for Hashing. It is a binary serializer developed by the NEAR team, designed for security-critical projects; it prioritizes consistency, safety, and speed, and comes with a strict specification.
The ExEx utilizes Borsh to serialize and deserialize block objects, ensuring a bijective mapping between objects and their binary representations.
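Borsh itself is used from Rust here; as a language-agnostic illustration of the bijectivity property (one object maps to exactly one byte representation, and back), Python's struct module behaves similarly for a fixed layout:

```python
import struct

# Fixed little-endian layout: (block_number: u64, tx_count: u32).
# This is NOT Borsh, just a demonstration of deterministic round-tripping.
LAYOUT = "<QI"

block = (141550, 12)
encoded = struct.pack(LAYOUT, *block)     # deterministic bytes for this object
decoded = struct.unpack(LAYOUT, encoded)  # and back again

assert decoded == block                          # round trip is lossless
assert encoded == struct.pack(LAYOUT, *decoded)  # re-encoding is byte-identical
print(len(encoded))  # 12
```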
For a more detailed setup guide for your network, check out this .
Finally, to implement the Load Network DA ExEx in your Reth client, simply import the DA ExEx code into your ExExes directory and it will work out of the box with your Reth setup.
Explore Load Network-developed ExExes
In the following section you will explore the Execution Extensions developed by our team to power Load Network.
Load Network AO's WeaveDrive ExEx
This ExEx is the first data upload pipeline between an Ethereum client (reth) and Arweave, the permanent data storage network. The ExEx uses to bundle data and send it to Arweave. .
Load Network has created the first Reth ExEx that attests data to the AO network following the WeaveDrive data protocol specification. Check & learn more about
About LN-ExEx Data Protocol on Arweave
After the rebrand from WeaveVM to Load Network, all the data protocol tags have changed the "*WeaveVM*" onchain term (Arweave tag) to "*LN*"
The data protocol transactions follow the ANS-104 data item specifications. Each Load Network block is posted on Arweave, after borsh-brotli encoding, with the following tags:
| Tag name | Value | Description |
| --- | --- | --- |
| Protocol | LN-ExEx | Data protocol identifier |
| ExEx-Type | Arweave-Data-Uploader | The Load Network ExEx type |
| Content-Type | application/octet-stream | Arweave data transaction MIME type |
| LN:Encoding | Borsh-Brotli | Transaction's data encoding algorithms |
| Block-Number | $value | Load Network block number |
| Block-Hash | $value | Load Network block hash |
| Client-Version | $value | Load Network Reth client version |
| Network | Alphanet vx.x.x | Load Network Alphanet semver |
| LN:Backfill | $value | Boolean: true if the data was posted by a backfiller, false (or tag absent) if posted by the archiver |
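For quick reference, the tag set above can be written out as a mapping (entries with $value are filled per block; this dict is a sketch, not an SDK API):

```python
# Static view of the LN-ExEx ANS-104 tag set; "$value" marks per-block fields.
ln_exex_tags = {
    "Protocol": "LN-ExEx",
    "ExEx-Type": "Arweave-Data-Uploader",
    "Content-Type": "application/octet-stream",
    "LN:Encoding": "Borsh-Brotli",
    "Block-Number": "$value",
    "Block-Hash": "$value",
    "Client-Version": "$value",
    "Network": "Alphanet vx.x.x",
    "LN:Backfill": "$value",
}
print(len(ln_exex_tags))  # 9
```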
The LN-ExEx data protocol on Arweave is responsible for archiving Load Network's full block data, which is posted to Arweave using the archiver ExEx.
Reth ExEx Archiver Address:
Arweave-ExEx-Backfill Address:
Description of the Load Network integration as a Data Availability client for Dymension RollApps
Load Network provides a gateway for Arweave's permanent storage with its own (LN) high data throughput, serving the permanently stored data to Dymension RollApps.
Current maximum encoded blob size is 8 MiB (8_388_608 bytes).
Load Network is currently operating in public testnet (Alphanet); it is not recommended for use in a production environment.
Understand how to boot a basic Dymension RollApp and how to configure it.
How it works
To enable TLS, the following fields should be added to the JSON file:
web3_signer_tls_cert_file
web3_signer_tls_key_file
web3_signer_tls_ca_cert_file
Web3 signer
In the rollapp-evm log you will eventually see something like this:
Obtain test tLOAD tokens through our for testing purposes.
Monitor your transactions using the .
You may choose to use Load Network as a data availability layer for your RollApp. We assume that you know how to boot and configure the basics of your dymint RollApp. As an example you may use the repository.
The example uses the "mock" DA client. To use Load Network you should simply set the following environment variables before the config generation step using init.sh:
export DA_CLIENT="weavevm" # This is the key change
export WVM_PRIV_KEY="your_hex_string_wvm_priv_key_without_0x_prefix"
init.sh will generate a basic configuration for da_config.json in dymint.toml, which should look like this:
da_config = '{"endpoint":"https://alphanet.load.network","chain_id":9496,"timeout":60000000000,"private_key_hex":"your_hex_string_load_priv_key_without_0x_prefix"}'
In this example we use the PRIVATE_KEY of your LN address. This is not the most secure way to handle transaction signing, which is why we also provide the ability to use web3signer as a signing method. To enable web3signer you will need to change the init.sh script and add the corresponding fields, or change da_config.json in dymint.toml directly.
e.g.:
da_config = '{"endpoint":"https://alphanet.load.network","chain_id":9496,"timeout":"60000000000","web3_signer_endpoint":"http://localhost:9000"}'
Web3Signer is a tool by Consensys which allows remote signing.
Using a remote signer comes with risks; please read the Web3Signer docs. However, this is the recommended way to sign transactions for enterprise users and production environments. Web3Signer is not maintained by the Load Network team. Example of the simplest local Web3Signer deployment (for testing purposes): Example of the configuration used:
Permanent EigenDA blobs
This is a Load Network integration as a secondary backend of eigenda-proxy. In this scope, Load Network provides an EVM gateway/interface for EigenDA blobs on Arweave's Permaweb, removing trust assumptions and reliance on centralized third-party services to sync historical data, and providing a "pay once, save forever" storage feature for EigenDA blobs.
Current maximum encoded blob size is 8 MiB (8_388_608 bytes).
Load Network is currently operating in public testnet (Alphanet); it is not recommended for use in a production environment.
Review the configuration parameters table and .env
file settings for the Holesky network.
Please double-check your .env file values when you start the eigenda-proxy binary with environment variables; they may conflict with flags.
Start the eigenda-proxy with an LN private key:
POST command:
GET command:
Start the eigenda-proxy with a signer:
Start web3signer with TLS:
About the Data Protocol of Load Network Precompile Contracts
After the rebrand from WeaveVM to Load Network, all the data protocol tags have changed the "*WeaveVM*" onchain term (Arweave tag) to "*LN*"
The data protocol transactions follow the ANS-104 data item specifications. Each LN precompile transaction is posted on Arweave, after brotli compression, with the following tags:
Explaining the LaoD Framework
The LaoD modular framework is the first shared sequencing framework that makes ao and any EVM chain interoperable, creating a bidirectional data pipeline and facilitating new possibilities such as trustless asset bridging, cross-chain communication, and added decentralization for single-sequencer EVM networks.
Trustless asset bridging: ERC20s <> AO tokens with no centralized third party
Access to the whole EVM dApp ecosystem – Uniswap, Maker, identity/reputation layers, common NFT standards (ERC721), liquidity, etc.
Agents with permissionless access to an EVM compatible wallet, tooling, assets
Access to the ever-growing ecosystem resources of L1s and L2s (users, generated historical data, capital, developers, etc.)
(ao leveraged EVM -ethereum- for liquidity access in the ao token fair-launch)
Access to the most advanced decentralized agents – Solidity limitations bypassed by ao’s WASM processes
EVMs access to ao’s financial ecosystem (liquidity, defi agents, etc)
Offloading heavy and complex EVM compute to ao trustlessly.
Offer ao’s altDA in the EVM’s rollup stack (L2s for L1 and L3s for L2s) – modularity: e.g. OP-stack on Load Network with access to ao’s altDA
Decentralizing the rollup’s single sequencer model (JSON-RPC based single sequencer – censorable)
LaoD (pronounced as /loʊd/) naming comes from the framework initiative being led by Load Network, and the action of "loading" ao onto an EVM network (load ao).
ao is a data protocol that is compute-layer agnostic, so building an EVM compute layer on it is entirely possible, but with constraints (full EVM compatibility, cross-process communication, custom-making the battle-tested EVM ELs and CLs, etc.). We at Load Network (under Decent Land Labs) are therefore looking at the "EVM on ao" angle differently: make any EVM network interoperable with ao via shared sequencing, with load.network as the first client of this framework.
Instead of reinventing the EVM wheel by trying to make the EVM work natively on ao, we are creating a modular EVM shared sequencing framework that lets any EVM network, with its finely tuned edges and capabilities and with no tradeoffs, plug into ao, leverage it for decentralized and performant shared sequencing, and benefit from the data interoperability gained by shared sequencing on ao.
Initiating an ERC20 token transfer on Load Network via the LaoD dummy PoC process:
EigenDA proxy:
LN-EigenDA wraps the , exposing endpoints for interacting with the EigenDA disperser in conformance to the , and adding disperser verification logic. This simplifies integrating EigenDA into various rollup frameworks by minimizing the footprint of changes needed within their respective services.
Obtain test tLOAD tokens through our for testing purposes.
Monitor your transactions using the
is a tool by Consensys which allows remote signing.
Using a remote signer comes with risks, please read the web3signer docs. However this is a recommended way to sign transactions for enterprise users and production environments. Web3Signer is not maintained by Load Network team. Example of the most simple local web3signer deployment (for testing purposes):
Load Network has precompiled contracts that push data directly to Arweave as ANS-104 data items. One such precompile is arweave_upload.
Load Network Reth Precompiles Address:
| Tag name | Value | Description |
| --- | --- | --- |
| LN:Precompile | true | Data protocol identifier |
| Content-Type | application/octet-stream | Arweave data transaction MIME type |
| LN:Encoding | Brotli | Transaction's data encoding algorithms |
| LN:Precompile-Address | $value | The decimal precompile number (e.g. 0x17 has the tag value 23) |
is an open source directory for Reth's ExExes. You can think of it as a "chainlist of ExExes".
We believe that curating ExExes will accelerate their development by making examples and templates easily discoverable.
This introduces a new DA interface for EVM rollups that doesn't require changes to the sequencer or network architecture. It can be added to any Reth client with just 80 lines of code by importing the DA ExEx code into the ExExes directory, making integration simple and seamless.