
Network Releases Nomenclature

Load Network Releases: Understanding Our Testnets

Both Alphanets and Devnets are test networks with no monetary value tied to the Load Network mainnet. They serve different purposes in our development pipeline:

Alphanets: Stable Testnets

  • Designed for user/dev exploration and testing.

  • Feature a low frequency of breaking changes.

  • Provide a more reliable environment for developers and users to interact with Load Network.

Devnets: Experimental Testnets

  • Act as a testing ground for the Alphanets.

  • Characterized by frequent breaking changes and potential instability.

  • A playground for testing new features, EIPs, and experimental concepts.

Overview

Defining Load Network

Abstract

Load Network is a high-performance blockchain for onchain data storage - cheaply and verifiably store and access any data.

As a high-performance, data-centric EVM network, Load Network maximizes scale and transparency for L1s, L2s and data-intensive dApps. Load Network is dedicated to solving the problem of onchain data storage. It offloads storage to Arweave and achieves high-performance computation - decoupled from the EVM L1 itself - by utilizing ao-hyperbeam custom devices, giving any other chain a way to easily plug in a robust permanent storage layer powered by a hyperscalable network of EVM nodes with bleeding-edge throughput capacity.

Load Network is ex-WeaveVM Network

Before March 2025, Load Network (abbreviations: LOAD or LN) was named WeaveVM Network (WVM). All existing references to WeaveVM (naming, links, etc.) in the documentation should be treated as Load Network.

Load Network

Load Network is a high performance blockchain for data storage - cheaply and verifiably store, access, and compute with any data.

Load Network ≈ The onchain data center

The Load Network fair launch is now live. Learn how to get $LOAD

Key Features

Exploring Load Network key features

Let's explore the key features of Load Network:

Beefy Block Producer

Load Network achieves enterprise-like performance by limiting block production to beefy hardware nodes while maintaining trustless and decentralized block validation.

What this means is that anyone with a sufficient amount of $AO tokens (read more about $AO security below on this page) meeting the PoS staking threshold, plus the necessary hardware and internet connectivity (super-node, enterprise hardware), can run a node. This approach is inspired by Vitalik Buterin's work in "The Endgame" post.

Block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented.

These "super nodes" producing Load Network blocks result in a high-performance EVM network.

Large Block Size

Raising the gas limit increases the block size and the number of operations per block, affecting both history growth and state growth (mainly relevant for our point here).

Load Network Alphanet raises the gas limit to 500M gas (500 Mgas/s at a 1s block time) and lowers the gas cost per non-zero byte to 8. These changes result in a larger maximum theoretical block size of ~62 MB and, consequently, a network data throughput of ~62 MBps.
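These figures follow directly from the protocol parameters, as a quick back-of-the-envelope check:

500,000,000 gas per block ÷ 8 gas per non-zero byte ≈ 62,500,000 bytes ≈ 62 MB max per block
62 MB per block × 1 block per second (1s block time) ≈ 62 MBps data throughput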

This high data throughput can be handled thanks to block production by super nodes and hardware acceleration.

High-Throughput DA

Up until now, there's been no real-world, scalable DA (altDA) layer ready to handle high data throughput with permanent storage. Load Alphanet reaches a maximum throughput of 62 MBps, with a projection of 125 MBps for mainnet.

Modular EVM Node Components Design

Building on HyperBEAM and ao network enables us to package modules of the EVM node stack as HyperBEAM NIF (Native Implemented Function) devices.

This horizontally scalable and parallel architecture allows Load Network EVM nodes to be modularly composable in a totally new paradigm. For example, a node run by Alice might not implement the JSON-RPC component but can pay fees (compute paid in $AO) to use it from Bob, who runs this missing EVM component.

With this model, we achieve ao network synergy and interoperability. To read more about the rationale behind this, check the related blog post.

Programmable EVM data & Arweave Permanence

Load Network uses a set of Reth execution extensions (ExExes) to serialize each block in Borsh, then compress it with Brotli before sending it to Arweave. These computations ensure a cost-efficient, permanent history backup on Arweave. This feature is crucial for other L1s/L2s using Load Network for data settlement, aka LOADing (see mission π).

In the diagrams & benchmarks, we show the difference between various compression algorithms applied to a Borsh-serialized empty block (zero transactions) and a JSON-serialized empty block. Borsh serialization combined with Brotli compression gives the most efficient compression ratio in the data serialization-compression process.
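For illustration, here is a minimal sketch of the serialize-then-compress step, assuming the borsh and brotli crates and a simplified stand-in block type (the real ExEx operates on Reth's sealed block):

use borsh::BorshSerialize;
use std::io::Write;

// Simplified stand-in for the block type serialized by the ExEx
#[derive(BorshSerialize)]
struct Block {
    number: u64,
    transactions: Vec<Vec<u8>>,
}

fn encode_for_arweave(block: &Block) -> std::io::Result<Vec<u8>> {
    // 1. Borsh-serialize the block into a deterministic binary representation
    let serialized = borsh::to_vec(block)?;
    // 2. Brotli-compress the serialized bytes (quality 11, window size 22)
    let mut compressed = Vec::new();
    {
        let mut writer = brotli::CompressorWriter::new(&mut compressed, 4096, 11, 22);
        writer.write_all(&serialized)?;
    } // the writer flushes on drop
    Ok(compressed)
}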

At the time of writing, and since the data protocol's inception, the Load Network Arweave ExEx is the largest data protocol on top of Arweave in terms of the number of settled dataitems.

Load Network's interface with Arweave goes beyond block data settling. We have developed the first precompiles that achieve a native bidirectional data pipeline with the Arweave network. In other words, with these precompiles (currently supported on the Load Network testnet), you can read data from Arweave and send data to Arweave trustlessly and natively from a Solidity smart contract, creating the first-ever programmable, scalable EVM data backed by Arweave permanence. Learn more about Load Network precompiles in this section.

Cost Efficient Data Settling

Load's hyper computation, supercharged hardware, and interface with Arweave result in significantly cheaper data settlement costs on Load Network, which include the Arweave fees that cover archiving. Check the comparison calculator for realtime data.

Even compared to temporary blob-based solutions, Load Network still offers a significantly cheaper permanent data solution (calldata). See the data LOADing cost comparison.

Alien Stack, Alien Network Security

The Load Network is the first EVM L1 to leverage Arweave storage and interoperability natively inside the EVM, and the first EVM L1 to adopt the modular EVM node components paradigm powered by HyperBEAM devices.

To align with this alien tech stack, we needed an alien security model. On Load Network, users pay gas in Load's native gas token, $LOAD. On the node operator side, a node that is not running the full stack of EVM components is required to buy compute from other nodes offering the missing component by paying $AO.

Additionally, as Load is built on top of HyperBEAM (ao network) and Arweave, it's logical to inherit the network security of AO and reinforce Load's self-DA security. For these reasons, Load node operators stake $AO in order to join the EVM L1 block production.

Load Network Alphanets

A list of Load Network Alphanet Releases

The table below does not include the list of minor releases between major Alphanet releases. For the full changelogs and releases, check them out here: https://github.com/weaveVM/wvm-reth/releases

Alphanet | Blog Post | Changelogs

v1 | https://blog.wvm.dev/testnet-is-live/ | https://github.com/weaveVM/wvm-reth/releases/tag/v0.1.0

v2 | https://blog.wvm.dev/alphanet-v2/ | https://github.com/weaveVM/wvm-reth/releases/tag/v0.2.2

v3 | https://blog.wvm.dev/alphanet-v3/ | https://github.com/weaveVM/wvm-reth/releases/tag/v0.3.0

v4 (LOAD Inception) | https://blog.wvm.dev/alphanet-v4/ | https://github.com/weaveVM/wvm-reth/releases/tag/v0.4.0

v5 | https://blog.load.network/alphanet-v5/ | https://github.com/weaveVM/wvm-reth/releases/tag/v0.5.3

Quickstart

Get set up with the onchain data center

To easily feed the Load Network docs to your favourite LLM, access the compressed knowledge (aka LLM.txt) file from Load Network: llmtxt.xyz/g/loadnetwork/gitbook-sync/20 (last update: 05/12/2025, 10:00:00 UTC)

Let's make it easy to get going with Load Network. In this doc, we'll go through the simplest ways to use Load across the most common use cases:

  • Upload data

  • Using ARIO's Turbo SDK

  • Integrating ledger storage

  • Using Load DA

  • Migrate from another storage layer

    As a non-developer: LCP

The easiest way to interface with Load Network's storage capabilities is through the cloud web app: cloud.load.network, the Load Cloud Platform. The LCP platform is powered by HyperBEAM's LS3 storage layer and offers programmatic access through load_acc API keys.

    As a developer

    Load S3 Temporary Data Storage Layer (highly recommended)

The best data pipeline for massive uploads is the Load ~s3@1.0 HyperBEAM-powered storage layer. The LS3 storage layer, run through a special set of HyperBEAM nodes, serializes S3 objects as ANS-104 DataItems by default, maintaining provenance and integrity when the uploader wishes to move an S3 object from the temporary storage layer to Arweave in a single HTTP API request.

    Highly scalable bundling service

To load huge amounts of data onto the Load Network EVM L1 without being tied to the network's technical limitations (tx size, block size, network throughput), you can use the load0 bundling service. It's a straightforward REST-based bundling service that lets you upload data and retrieve it instantly, at scale:
curl -X POST "https://load0.network/upload" \
     --data-binary "@./video.mp4" \
     -H "Content-Type: video/mp4"

    For more examples, check out the load0 documentation

    Using ARIO's Turbo SDK

One of the coolest features of the LS3 layer's ANS-104 compliance is out-of-the-box compatibility with the Arweave ecosystem, despite LS3 being an offchain temporary storage layer. For that reason, we have built an ANS-104 upload service on top of LS3, compatible with ARIO's Turbo standard, meaning you can use the official Turbo SDK along with Load's custom upload service endpoint to store temporary Arweave dataitems. Check out how to use it and learn more about xANS-104 data provenance, lineage and governance here.

    Integrating ledger storage

    Chains like Avalanche, Metis and RSS3 use Load Network as a decentralized archive node. This works by feeding all new and historical blocks to an archiving service you can run yourself, pointed to your network's RPC.

    Clone the archiver repo here

    As well as storing all real-time and historical data, Load Network can be used to reconstruct full chain state, effectively replicating exactly what archive nodes do, but with a decentralized storage layer underneath. Read here to learn how.

    Using Load DA

With 125 MB/s data throughput and long-term data guarantees, Load Network can handle DA for every known L2, with 99.8% room to spare.

    Right now there are 4 ways you can integrate Load Network for DA:

    1. As a blob storage layer for EigenDA

    2. As a DA layer for Dymension RollApps

    3. As an OP-Stack rollup

    4. DIY

    DIY docs are a work in progress, but the commit to add support for Load Network in Dymension can be used as a guide to implement Load DA elsewhere.

    Work with us to use Load DA for your chain - get onboarded here.

    Migrate from another storage layer

    If your data is already on another storage layer like IPFS, Filecoin, Swarm or AWS S3, you can use specialized importer tools to migrate.

    Load S3 Storage Layer

The HyperBEAM Load S3 node provides a 1:1-compatible development interface for applications using AWS S3 for storage, keeping method names and parameters intact, so the only change should be one line: the endpoint.

    Filecoin

    The load-lassie import tool is the recommended way to easily migrate data stored via Filecoin.

    Just provide the CID you want to import to the API, e.g.:

    https://lassie.load.rs/import/<CID>

    The importer is also self-hostable and further documented here.

    Swarm

    Switching from Swarm to Load is as simple as changing the gateway you already use to resolve content from Swarm.

    • before: https://api.gateway.ethswarm.org/bzz/<hash>

    • after: https://swarm.load.rs/bzz/<hash>

    The first time Load's Swarm gateway sees a new hash, it uploads it to Load Network and serves it directly for subsequent calls. This effectively makes your Swarm data permanent on Load while maintaining the same hash.


    JSON-RPC Methods

    About Load Network Native JSON-RPC methods

    The eth_getArweaveStorageProof JSON-RPC method

    This JSON-RPC method lets you retrieve the Arweave storage proof for a given Load Network block number

    curl -X POST https://alphanet.load.network \
    -H "Content-Type: application/json" \
    --data '{
     "jsonrpc":"2.0",
     "method":"eth_getArweaveStorageProof",
     "params":["8038800"],
     "id":1
    }'

    The eth_getWvmTransactionByTag JSON-RPC method

For Load Network L1 tagged transactions, the eth_getWvmTransactionByTag method lets you retrieve a transaction hash for a given name-value tag pair:
curl https://alphanet.load.network \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getWvmTransactionByTag",
    "params": [{
        "tag": ["name", "value"]
    }]
  }'

    ELI5

    ELI5 Load Network

    What is Load Network?

Load is a high-performance blockchain built towards the goal of solving the EVM storage dilemma with Arweave, HyperBEAM and ao. It gives the coming generation of high-performance chains a place to settle and store onchain data without worrying about cost, availability, or permanence.

Load Network offers scalable and cost-effective permanent storage by using Arweave as a decentralized hard drive, both at the node and smart contract layer, HyperBEAM as a modular stack of EVM node components, and the ao network for compute and network security. This makes it possible to store large datasets and run web2-like applications without incurring EVM storage fees.

    Decentralized Full Data Storage Stack

Load Network mainnet is being built to be the highest-performing EVM blockchain focused on data storage: the largest base-layer transaction input size limit (~16 MB), the largest EVM transaction ever (the ~0.5 TB 0xbabe transaction), very high network data throughput (multi-gigagas per second), high TPS, decentralization, a full data storage stack (permanent and temporary), and decentralized data gateways and data bundlers.

Load Network achieves high decentralization by using Arweave as a decentralized hard drive, HyperBEAM as a compute marketplace of EVM node components, and permissionless participation in block production (running a node). Load Network will offer both permanent and temporary data storage while maintaining decentralized and censorship-resistant retrieval & ingress (gateways, bundling services, etc.).

    Use Cases and How to Integrate

    Ledger Data Storage

Chains like Metis, RSS3 and Dymension use Load Network to permanently store onchain data, acting as a decentralized archival node. If you look at the common problems flagged on L2Beat, a lot of them have to do with centralized sources of truth and data that can’t be independently audited or reconstructed in case of a chain failure. Load adds a layer of protection and transparency to L2s, ruling out some of the failure modes of centralization. Learn more about the wvm-archiver tool here.

    High-Throughput Data Availability (DA)

Load Network can plug into a typical EVM L2's stack as a DA layer that's 10-15x cheaper than solutions like Celestia and Avail, and guarantees data permanence on Arweave. LN was built to handle DA for the coming generation of supercharged rollups. With a throughput of ~62 MB/s, it could handle DA for every major L2 and still have 99%+ capacity left over.

You can check out the custom DA-ExEx to make use of Load DA in any Reth node in under 80 lines of code, as well as the EigenDA-LN Sidecar Server Proxy to use EigenDA's data availability with Load Network securing its archiving.

    Storage Heavy dApps

Load Network offers scalable and cost-effective storage by using Arweave as a decentralized hard drive and HyperBEAM as a decentralized cloud. This makes it possible to store large datasets and run web2-like applications without incurring EVM storage fees.

    We have developed the first-ever Reth precompiles to natively facilitate a bidirectional data pipeline with Arweave from the smart contract API level. Check out the full list of Load precompiled contracts here.

    Foundational Layer (L1) For Rollups

Load Network is an EVM-compatible blockchain; therefore, rollups can be deployed on Load just as they are on Ethereum. In contrast to Ethereum and other EVM L1s, rollups deployed on top of Load benefit out of the box from its data-centric features (for rollup data settlement and DA).

    Rollups deployed on Load Network use the native Load gas token (tLOAD on Alphanet), similar to how ETH is used for OP rollups on Ethereum.

    For example, we released a technical guide for developers interested in deploying OP-Stack rollups on Load. Check it out here.

    The Onchain Data Center

Load Network is being built with the vision of becoming the onchain data center. To accomplish this vision, we have started working on several web2 and web3 data pipelines into Load and Arweave, with a web2 cloud experience. Start using Load Cloud now!

    Explore Load Network Ecosystem Dapps (Evolving)

    • Load Network Cloud Platform — The UI of the onchain data center

    • Permacast — A decentralized media platform on Load Network

    • Tapestry Finance — Uniswap V2 fork

    • shortcuts.bot — short links for Load Network txids

  • load.yachts — subdomain resolver for Load Network content

  • onchain.rs — Dropbox onchain alternative

  • relic.bot — Onchain Instagram

  • fairytale.sh — onchain publishing toolkit

  • — Tokenize any data on Load Network

  • bridge.load.network — Hyperlane bridge (Load Alphanet <> Ethereum Holesky)

  • mediadao.xyz — a club for permanent content preservation

  • Dymension.xyz Roll-Apps — deploy a Dymension roll-app using Load DA

Useful Links

  • Documentation

  • GitHub Organization

  • Blog

  • Twitter

  • Discord

  • Explorer

  • Data storage price calculator

  • Alphanet faucet


    About Load HyperBEAM

    Load Network custom HyperBEAM devices

    hb.load.rs

    About HyperBEAM

HyperBEAM is a client implementation of the AO-Core protocol, written in Erlang. It can be seen as the 'node' software for the decentralized operating system that AO enables, abstracting hardware provisioning and details from the execution of individual programs.

    HyperBEAM node operators can offer the services of their machine to others inside the network by electing to execute any number of different devices, charging users for their computation as necessary.

Each HyperBEAM node is configured using the ~meta@1.0 device, which provides an interface for specifying the node's hardware, supported devices, metering and payments information, amongst other configuration options. For more details, check out the HyperBEAM codebase: https://github.com/permaweb/HyperBEAM

    load_hb: Load Network HyperBEAM node with custom devices

The load_hb repository is our HyperBEAM fork with custom devices such as ~evm@1.0, ~riscv-em@1.0, ~helios@1.0, ~kem@1.0 and ~s3@1.0.

Our development motto is driven by the Hyperbeam Accelerationism (hb/acc) manifesto initiated during Arweave Day Berlin 2025.

Our main HyperBEAM development is hosted on hb.load.rs.

~riscv-em@1.0 device

    The RISC-V Execution Machine device

This device is in a very early Proof of Concept stage

    About

We have developed a custom fork of R55 (an Ethereum Execution Environment that seamlessly integrates RISC-V smart contracts alongside traditional EVM smart contracts) to take input and return the resulting computed EVM db.
After getting R55 to handle signed raw transactions out of the box, we built on top of it a HyperBEAM device offering RISC-V compatible Ethereum appchains. For example, this erc20.rs Rust smart contract was deployed on a hb RISC-V appchain: github.com/loadnetwork/r55

RISC-V custom device source code: https://github.com/loadnetwork/load_hb/tree/main/native/riscv_em_nif

~helios@1.0 device

    The EVM consensus light client

    About

The ~helios@1.0 device is an EVM/Ethereum consensus light client built into the HyperBEAM devices stack. With helios, node operators can trustlessly connect to EVM RPCs with a very lightweight, multichain and secure setup, and no historical syncing overhead. With this device, every HyperBEAM node can turn into a verifiable EVM RPC endpoint.

    Compute

    Explore compute options available with Load Network

    Combining the Load EVM with custom HyperBEAM devices enables a wide range of options, from smart contracts to WASM workers to serverless GPU functions.

On this page, we will briefly go over the different options within the extended Load Network ecosystem.

    Native EVM compute

The Load EVM offers native EVM compute with typical smart contracts, blobs and calldata standards that are all 1:1 compatible with Ethereum and other EVM networks. As a specialized storage and DA chain, the Load L1 is optimized for high throughput to enable data-intensive dApps. Connect to Load Alphanet RPC here.

    Compatibility & Performance

    Load Network Compatibility with the standards

    EVM compatibility

    Load Network EVM is built on top of Reth, making it compatible as a network with existing EVM-based applications. This means you can run your current Ethereum-based projects on LN without significant modifications, leveraging the full potential of the EVM ecosystem.

Load Network EVM doesn't introduce new opcodes or breaking changes to the EVM itself, but it uses ExExes and adds custom precompiles. The current Alphanet performance parameters are:
Alphanet V0.5.3

  • gas per non-zero byte: 8

  • gas limit: 500_000_000

  • block time: 1s

  • gas/s: 500 Mgas/s

  • data throughput: ~62 MBps

    Blobscan Agent

    The EIP-4844 data agent

The blobscan-agent stores Ethereum's blobs temporarily on the ~s3@1.0 HyperBEAM device, serialized as ANS-104 DataItems. Based on need/demand, DataItems can be deterministically pushed to Arweave while maintaining integrity and provenance.

DataItems stored on the ~s3@1.0 device can be retrieved from the Hybrid Gateway (https://github.com/loadnetwork/load_hb/tree/s3-edge/native/s3_nif#hybrid-gateway) as if they were Arweave txs.

Agent Server Methods

Retrieve a blob and the associated ANS-104 dataitem id by versioned hash:

curl -X GET https://load-blobscan-agent.load.network/v1/blob/$BLOB_VERSIONED_HASH

Retrieve indexer stats:

curl -X GET https://load-blobscan-agent.load.network/v1/stats

Agent's server info:

curl -X GET https://load-blobscan-agent.load.network/v1/info

    Arweave Data Uploader

    Reth -> Arweave data pipeline

    About

This ExEx is the first data upload pipeline between an Ethereum client (Reth) and Arweave, the permanent data storage network. The ExEx uses the AR.IO Turbo Bundler to bundle data and send it to Arweave. Get the ExEx code here.

    Self Hosted RPC Proxies

    Rust Proxy

    Run Locally
git clone https://github.com/weavevm/wvm-proxy-rpc.git

cd wvm-proxy-rpc

cargo build && cargo shuttle run --port 3000

Try it!

curl -X POST http://localhost:3000 -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'

You can find the proxy server codebase here: https://github.com/weaveVM/wvm-rpc-proxy

JavaScript Proxy

Run Locally

git clone https://github.com/weavevm/proxy-rpc.git

cd proxy-rpc

npm install && npm run start

Try it!

curl -X POST http://localhost:3000 -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'

You can find the proxy server codebase here: https://github.com/weaveVM/proxy-rpc

    Lua / WASM compute

Load Network is a hybrid of the EVM and AO's HyperBEAM. The Load Network team runs HyperBEAM instances (permissionless nodes running the canonical AO compute stack), so it's possible to leverage the highly scalable AO compute (processes, hyper-aos, Lua serverless functions) within the Load stack. Connect to a node here.

    Additionally, we have developed an EVM compute layer (bytecode calculator) as a HyperBEAM device. Learn more.

    GPU compute

Further in HyperBEAM land, the Kernel Execution Machine (~kem@1.0 device) allows quasi-arbitrary GPU-instruction compute execution for .wgsl functions (shaders, kernels). Get started.

    Reth Execution Extensions (ExEx)

Although this is not compute at the user/consumer level, Load maintains several Reth execution extensions (post-block-execution compute logic) that can benefit Reth and the wider EVM developer ecosystem. Learn more.


    Load Network DA ExEx

    LN-DA plugin ExEx

    About

This introduces a new DA interface for EVM rollups that doesn't require changes to the sequencer or network architecture. It's easily added to any Reth client with just 80 lines of code by importing the DA ExEx code into the ExExes directory, making integration simple and seamless. Get the code here & the installation setup guide here.

    About ExExes

    About Reth Execution Extensions (ExEx)

    ExEx is a framework for building performant and complex off-chain infrastructure as post-execution hooks.

Reth ExExes can be used to implement rollups, indexers, MEV bots and more with >10x less code than existing methods. Check out the Reth ExEx announcement by Paradigm: https://www.paradigm.xyz/2024/05/reth-exex

    In the following pages we will list the ExExes developed and used by Load Network.

    Borsh Serializer

    Borsh binary serializer ExEx

    About

    Borsh stands for Binary Object Representation Serializer for Hashing and is a binary serializer developed by the NEAR team. It is designed for security-critical projects, prioritizing consistency, safety, and speed, and comes with a strict specification.

    The ExEx utilizes Borsh to serialize and deserialize block objects, ensuring a bijective mapping between objects and their binary representations. Get the ExEx code
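As a quick illustration of that bijectivity, a round-trip with the borsh crate recovers an identical object (the struct here is a simplified stand-in for the real block type):

use borsh::{BorshDeserialize, BorshSerialize};

#[derive(BorshSerialize, BorshDeserialize, Debug, PartialEq)]
struct Block {
    number: u64,
    hash: [u8; 32],
}

fn main() {
    let block = Block { number: 42, hash: [0u8; 32] };
    // serialize to the deterministic Borsh byte representation...
    let bytes = borsh::to_vec(&block).unwrap();
    // ...and deserialize back to an identical object
    let decoded = Block::try_from_slice(&bytes).unwrap();
    assert_eq!(block, decoded);
}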

    ExEx.rs

    An open source directory of Reth ExExes

    About

ExEx.rs is an open source directory for Reth's ExExes. You can think of it as a "chainlist of ExExes".

We believe that curating ExExes will accelerate their development by making examples and templates easily discoverable. Add your ExEx today!

    Load Network ExExes

    Explore Load Network developed ExExes

In the following section you will explore the Execution Extensions developed by our team to power Load Network.

    What is Helios?

    Helios is a trustless, efficient, and portable multichain light client written in Rust.

    Helios converts an untrusted centralized RPC endpoint into a safe unmanipulable local RPC for its users. It syncs in seconds, requires no storage, and is lightweight enough to run on mobile devices.

    Helios has a small binary size and compiles into WebAssembly. This makes it a perfect target to embed directly inside wallets and dapps.

    Check out the official repository here

~helios@1.0 device

The ~helios@1.0 device, as per its current implementation, initiates the helios client (and JSON-RPC server) at the start of the HyperBEAM node run. The JSON-RPC server is spawned as a separate process running in parallel on port 8545 (the standard JSON-RPC port).

    The device supports all of the methods supported by helios. Check the full list here

    Endpoint

    As this device is supported on the hb.load.rs hyperbeam node, it's explicitly assigned the eth.rpc.rs endpoint for the Ethereum mainnet network.

    Example

local:

curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://127.0.0.1:8545

using rpc.rs:

curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' https://eth.rpc.rs

    device source code: https://github.com/loadnetwork/load_hb/tree/main/native/helios_nif


    load:// Data Protocol

    Using load:// data retrieval protocol

    About load://

Load Network Data Retriever (load://) is a protocol for retrieving data from the Load Network (EVM). It leverages the LN DA layer and Arweave’s permanent storage to provide trustless access to LN transaction data through both networks, whether that’s data which came from LN itself, or L2 data that was settled to LN.

    Many chains solve this problem by providing query interfaces to archival nodes or centralized indexers. For Load Network, Arweave is the archival node, and can be queried without special tooling. However, the data LN stores on Arweave is also encoded, serialized and compressed, making it cumbersome to access. The load:// protocol solves this problem by providing an out-of-the-box way to grab and decode Load Network data while also checking it has been DA-verified.

    How it works

    The data retrieval pipeline ensures that when you request data associated with a Load Network transaction, it passes through at least one DA check (currently through LN's self-DA).

    It then retrieves the transaction block from Arweave, published by LN ExExes, decodes the block (decompresses Brotli and deserializes Borsh), and scans the archived sealed block transactions within LN to locate the requested transaction ID, ultimately returning the calldata (input) associated with it.

    Try it out

Currently, the load:// gateway server provides two methods: one for general data retrieval and another specifically for transaction data posted by the load-archiver nodes. To retrieve calldata for any transaction on Load Network, you can use the following command:

curl -X GET https://gateway.load.network/calldata/$LN_TXID

The second method is specific to load-archiver nodes because it decompresses the calldata and then deserializes its Borsh encoding according to a predefined structure. This is possible because the data encoding of load-archiver data is known to include an additional layer of Borsh-Brotli encoding before the data is settled on LN:

curl -X GET https://gateway.load.network/war-calldata/$LN_TXID

    Benchmarks

    Latency for /calldata

The latency includes the time spent fetching data from the LN EVM RPC and the Arweave gateway, as well as the processing time for Brotli decompression, Borsh deserialization, and data validity verification. See the /calldata endpoint benchmark.

Check out the load:// data protocol.

    load0 data layer

    About Load Network optimistic & high performance data layer

load0 is Bundler's Large Bundle on steroids -- a cloud-like experience for uploading and downloading data from Load Network using Bundler's 0xbabe2 transaction format, powered by SuperAccount & S3 under the hood.

To obtain an API key and unlock higher limits, create an API key on cloud.load.network.

    Technical Architecture

First, the user sends data to the load0 REST API /upload endpoint -- the data is pushed to load0's S3 bucket and an optimistic hash (keccak hash) is returned, which allows users to instantly retrieve the object data from load0.

After being added to the load0 bucket, the object is added to the orchestrator queue that uploads the optimistically cached objects to Load Network. Using Large Bundle & SuperAccount, the S3 bucket objects are sequentially uploaded to Load and therefore permanently stored, while maintaining very fast uploads and downloads. Object size limit: 1 byte -> 2GB.

    REST API

1- Upload object

curl -X POST "https://load0.network/upload" \
  --data-binary "@./video.mp4" \
  -H "Content-Type: video/mp4" \
  -H "X-Load-Authorization: $YOUR_LCP_AUTH_TOKEN"

2- Download object (browser)

GET https://load0.network/download/{optimistic_hash}

Also, to have endpoint similarity as in bundler.load.rs, you can do:

GET https://load0.network/resolve/{optimistic_hash}

3- Retrieve Bundle metadata using optimistic hash or bundle txid (once settled)

GET https://load0.network/bundle/optimistic/{op_hash}

GET https://load0.network/bundle/load/{bundle_txid}

Returns:

pub struct Bundle {
    pub id: u32,
    pub optimistic_hash: String,
    pub bundle_txid: String,
    pub data_size: u32,
    pub is_settled: bool,
    pub content_type: String
}

    An object data can be accessed via:

    • optimistic caching: https://load0.network/resolve/{Bundle.optimistic_hash}

    • from Load Network (once settled): https://bundler.load.rs/v2/resolve/{Bundle.bundle_txid}

Source code: https://github.com/loadnetwork/load0

    LS3 with load_acc

    Learn how to use Load S3 storage layer along LCP's load_acc api keys

As of October 1st, 2025, it's no longer required to contact us directly to acquire Load S3 access keys to interact with LS3's drive.load.network HyperBEAM cluster. Interacting with the Load S3 storage layer now has programmatic access with load_acc access keys and a new cluster endpoint: https://api.load.network/s3

    How to get load_acc API keys

    First, you have to create a bucket from the cloud.load.network dashboard if the bucket you want to create scoped keys for does not already exist.

    After creating the bucket, navigate to the "API Keys" tab and create a new load_acc key with a label, then scope it to the desired bucket.

    load_acc in action

After creating a bucket and a load_acc API key, you can interact with the Load S3 storage layer via S3-compatible SDKs, such as the official AWS S3 SDK. Here is the S3 client configuration as it should be:

import { S3Client } from "@aws-sdk/client-s3";

const endpoint = "https://api.load.network/s3"; // LS3 HyperBEAM cluster
const accessKeyId = "load_acc_YOUR_LCP_ACCESS_KEY"; // get yours from cloud.load.network
const secretAccessKey = "";

// Initialize the S3 client
const s3Client = new S3Client({
  region: "us-east-1",
  endpoint,
  credentials: {
    accessKeyId,
    secretAccessKey,
  },
  forcePathStyle: true, // required
});

    And that's it, that's all you need to start interacting with your HyperBEAM-powered S3 temporary storage!

~evm@1.0 device

    The first Revm EVM device on HyperBEAM

    About

The ~evm@1.0 device is an EVM bytecode emulator built on top of Revm (version v22.0.1).

    The device not only allows evaluation of bytecode (signed raw transactions) against a given state db, but also supports appchain creation, statefulness, EVM context customization (gas limit, chain id, contract size limit, etc.), and the elimination of the block gas limit by substituting it with a transaction-level gas limit.

    This device is experimental, in PoC stage

Live demo at ultraviolet.load.network

    Technical Architecture

eval_bytecode() takes 3 inputs: a signed raw transaction (N.B. chain id matters), a JSON-stringified state db, and the output state path (in this device, ./appchains):

#[rustler::nif]
fn eval_bytecode(signed_raw_tx: String, state: String, cout_state_path: String) -> NifResult<String> {
    let state_option = if state.is_empty() { None } else { Some(state) };
    let evaluated_state: (String, String) = eval(signed_raw_tx, state_option, cout_state_path)?;
    Ok(evaluated_state.0)
}

#[rustler::nif]
fn get_appchain_state(chain_id: &str) -> NifResult<String> {
    let state = get_state(chain_id);
    Ok(state)
}

    References

  • device source code: native/load_revm_nif

  • hb device interface: dev_evm.erl

  • nif tests: load_revm_nif_test.erl

  • ao process example: evm-device.lua

    Bundlers Gateways

    The Load Network Gateway Stack: Fast, Reliable Access to Load Network Data

    All storage chains have the same issue: even if the data storage is decentralized, retrieval is handled by a centralized gateway. A solution to this problem is just to provide a way for anyone to easily run their own gateway – and if you’re an application building on Load Network, that’s a great way to ensure content is rapidly retrievable from the blockchain.

When relic.bot – a photo sharing dApp that uses LN bundles for storage – started getting traction, the default LN gateway became a bottleneck for the Relic team. The way data is stored inside bundles (hex-encoded, serialized, compressed) can make it resource-intensive to decode and present media on demand, especially when thousands of users are doing so in parallel.

In response, we developed two new open source gateways: one JavaScript-based cache-enabled gateway, and one written in Rust.

    The LN Gateway Stack introduces a powerful new way to access data from Load Network bundles, combining high performance with network resilience. At its core, it’s designed to make bundle data instantly accessible while contributing to the overall health and decentralization of the LN.

    x402 Protocol Integration

    The first ever temporary, private, payment-gated ANS-104 Dataitems

    What is the x402 protocol?

    The x402 protocol is an open framework that enables machine-to-machine payments on the web by standardizing how clients and services exchange value over HTTP. Building on the HTTP 402 "Payment Required" status code, x402 establishes a clear transaction flow where a server responds to resource requests with payment instructions (including amount and recipient), the client provides payment authorization, a payment facilitator verifies and settles the transaction, and the server delivers the requested resource along with payment confirmation.

    This protocol allows automated agents, crawlers, and digital services to conduct transactions programmatically without requiring traditional accounts, subscriptions, or API keys. By creating a common language for web-based payments, x402 enables new monetization models such as pay-per-use access, micropayments for AI agents purchasing from multiple merchants, and flexible payment schemes including immediate settlement via stablecoins or deferred settlement through traditional payment rails like credit cards and bank accounts.

    Load Network Precompiles Data Protocol

    About the Data Protocol of Load Network Precompile Contracts

    About

Load Network has precompiled contracts that push data directly to Arweave as ANS-104 data items. One such precompile is the 0x17 precompile (arweave_upload).
After the rebrand from WeaveVM to Load Network, all the data protocol tags have changed the "*WeaveVM*" onchain term (Arweave tag) to "*LN*"

Protocol Specifications

The data protocol transactions follow the ANS-104 data item specifications. Each LN precompile transaction is posted on Arweave, after brotli compression, with the following tags:

Tag Name | Tag Value | Description

LN:Precompile | true | Data protocol identifier

Content-Type | application/octet-stream | Arweave data transaction MIME type

LN:Encoding | Brotli | Transaction's data encoding algorithms

LN:Precompile-Address | $value | The decimal precompile number (e.g. 0x17 has the Tag Value 23)

Load Network Precompile Data Items Uploaders

  • Load Network Reth Precompiles Address: 5JUE58yemNynRDeQDyVECKbGVCQbnX7unPrBRqCPVn5Z

    DA ExEx (Reth-only)

    Plug Load Network high-throughput DA into any Reth node

    About

    Adding a DA layer usually requires base-level changes to a network’s architecture. Typically, DA data is posted either by sending calldata to the L1 or through blobs, with the posting done at the sequencer level or by modifying the rollup node’s code.

    This ExEx introduces an emerging, non-traditional DA interface for EVM rollups. No changes are required at the sequencer level, and it’s all handled via the ExEx, which is easy to add to any Reth client in just 80 lines of code.
Integration Tutorial

First, you’ll need to add the following environment variables to your Reth instance:

.env

The archiver_pk refers to the private key of the LN wallet, which is used to pay gas fees on LN for data posting. The network variable points to the path of your network configuration file used for the ExEx. A typical network configuration file looks like this:

network.json

For a more detailed setup guide for your network, check out this guide.

Finally, to implement the Load Network DA ExEx in your Reth client, simply import the DA ExEx code into your ExExes directory and it will work off the shelf with your Reth setup. Get the code here.

    Why we built the Load Network gateway stack

    The gateway stack solves several critical needs in the LN ecosystem:

    Rapid data retrieval

    Through local caching with SQLite, the gateway dramatically reduces load times (4-5x) for frequently accessed bundled data. No more waiting for remote data fetches – popular content is served instantly from the gateway node.

    For relic.bot, this slashed feed loading times from 6-8 seconds to near-instant.

    Network health

    By making it easy to run your own gateway, the stack promotes a more decentralized network. Each gateway instance contributes to network redundancy, ensuring data remains accessible even if some nodes go offline.

    Running a Load Network gateway

    Running your own LN gateway is pretty straightforward. The gateway stack is designed for easy deployment, directly to your server or inside a Docker container.

With Docker, you can have a gateway up and running in minutes:

git clone https://github.com/weavevm/bundles-gateway.git
cd bundles-gateway
docker compose up -d

    For rustaceans, rusty-gateway is deployable on a Rust host like shuttle.dev – get the repo here and Shuttle deployment docs here.

    The technical side

    Under the hood, the gateway stack features:

    • SQLite-backed persistent cache

      • Content-aware caching with automatic MIME type detection

      • Configurable cache sizes and retention policies

      • Application-specific cache management

      • Automatic cache cleanup based on age and size limits

      • Health monitoring and statistics

    The gateway exposes a simple API for accessing bundle data:

    GET /bundle/:txHash/:index

    This endpoint handles the job of data retrieval, caching, and content-type detection behind the scenes.
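For example, assuming a gateway running locally on port 3000 (both the port and the values are illustrative), fetching the first item of a bundle looks like:

curl http://localhost:3000/bundle/$BUNDLE_TX_HASH/0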

    Towards scalability & decentralization

    The Load Network gateway stack was built in response to problems of scale – great problems to have as a new network gaining traction. LN bundle data is now more accessible, resilient and performant. By running a gateway, you’re not just improving your own access to LN data – you’re contributing to a more robust, decentralized network.

    Test the gateways:

    • gateway.wvm.rs - gateway.load.rs

    • gateway.wvm.network

    • resolver.bot


    The x402 Foundation is a collaborative initiative being established by Cloudflare and Coinbase with the mission of encouraging widespread adoption of the x402 protocol.

    x402 and xANS-104 DataItems

We are proud to be the first team to work at the intersection of the x402 micropayments protocol and Arweave's ANS-104 data format. We have integrated the x402 protocol into Load's custom HyperBEAM device (~s3@1.0), resulting in the first ever expirable, paywalled, privately shareable ANS-104 DataItems.

The x402 micropayments have been integrated at the ANS-104 gateway sidecar level of the s3 device. Check out the source code.

    Creating x402 paywalled private dataitems

There are two ways to create an x402 paywalled, private, expirable ANS-104 DataItem: the first is user-friendly via the LCP dashboard, and the second is programmatic yet DIY. Let's explore both methods.

    x402 on Load Cloud Platform (LCP)

x402 has been integrated on LCP, making it possible for LCP users to create x402 paywalled, expirable links for their private objects (ANS-104 DataItems) on Load S3. The process is pretty simple and straightforward: fill in the parameters of the Create Payment Link request (set expiry, payee EOA, and USDC amount) and the dashboard will generate a ready-to-use x402 expirable DataItem URL.

    x402 DIY

The DIY method means doing what the LCP dashboard abstracts. First you need to create a private LS3 ANS-104 DataItem and upload it to your LCP bucket (load-s3-agent reference):

curl -X POST https://load-s3-agent.load.network/upload/private \
  -H "Authorization: Bearer $load_acc_api_key" \
  -H "signed: true" \
  -H "bucket_name: $bucket_name" \
  -H "x-dataitem-name: $dataitem_name" \
  -H "x-folder-name: $folder_name" \
  -F "file=@$file_path" \
  -F "content_type=application/octet-stream"

After that you have to generate the x402 signed shareable receipt using the HyperBEAM LS3 sidecar:

curl -X POST https://gateway.s3-node-1.load.network/sign/402 \
  -H "Authorization: Bearer $CONTACT_US" \
  -H "x-bucket-name: $bucket_name" \
  -H "x-load-acc: $load_acc_api_key" \
  -H "x-dataitem-key: $dataitem_key.ans104" \
  -H "x-402-address: $payee_eoa" \
  -H "x-402-amount: $usdc_amount" \
  -H "x-expires-minutes: $set_expiry"

This curl request will return a base64 string; you use it to share the x402 paywalled ANS-104 DataItem as follows: https://402.load.network/$base64_string


    Load S3 Layer (LS3)

    Explore the first temporal data storage layer on AO HyperBEAM

    About

The Load S3 storage layer is built on top of HyperBEAM as a device, with the ao network for data programmability. At its core, the ~s3@1.0 device – a HyperBEAM S3 object-storage provider – is the heart of the storage layer.

The HyperBEAM S3 device offers maximum flexibility for HyperBEAM node operators, allowing them to either spin up MinIO clusters co-located with the HyperBEAM node and rent out their available storage, or connect to existing external clusters, offering native integration between hb’s s3 and devs’ existing storage clusters. For instance, Load’s S3 device is co-located with its MinIO clusters.

To start using the Load S3 temporary storage layer today, check out the LS3 with load_acc and Developer Guide sections.

    Erasure-coded redundancy, fault tolerance, and data availability

    Load S3’s MinIO cluster, forming the storage layer, runs on 4 nodes with erasure coding enabled. Data is split into data and parity blocks, then striped across all nodes. This allows the system to tolerate the loss of up to two nodes without data loss or service interruption. Unlike full replication, which stores complete copies of each object on multiple nodes, erasure coding provides redundancy with lower storage overhead, ensuring durability while keeping capacity usage efficient.
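As a back-of-the-envelope illustration, assuming a 2 data + 2 parity shard layout across the 4 nodes: a 1 GiB object is striped into two 512 MiB data shards plus two 512 MiB parity shards, consuming 2 GiB of raw capacity (2x overhead), whereas storing three full replicas of the same object would consume 3 GiB (3x) for a comparable two-node failure tolerance.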

    A four-node configuration also enables automatic data healing. When a failed node comes back online or a new node replaces it, missing blocks are rebuilt from the remaining healthy nodes in real-time, without taking the cluster offline. Object integrity is verified using per-object checksums, and data availability can be asserted using S3 metadata, such as size, timestamp, and ETag – ensuring each object is present, intact, and retrievable.

    The Load S3 layer inherits these guarantees by offloading them to a battle-tested distributed object storage system, in this implementation, MinIO. In the future, the Load S3 decentralized network, consisting of multiple S3 HyperBEAM nodes, will have these properties available out of the box, without the need to re-engineer them from scratch.

~s3@1.0 & ANS-104 DataItems

The ~s3@1.0 device has been designed with a built-in data protocol to natively handle offchain temporary storage of ANS-104 DataItems. This approach reflects our rationale: HyperBEAM s3 nodes can store signed & valid ANS-104 DataItems temporarily, which can be pushed anytime, when needed, to Arweave, while maintaining the DataItem’s provenance and determinism (e.g. ID, signature, timestamp, etc.). Learn more about data provenance, lineage and governance here.

    Hybrid Gateway

For direct, low-level ANS-104 DataItem data streaming from LS3, use the HyperBEAM gateway instead to access the offchain dataitem.

    Given the S3 device’s native integration with objects serialized and stored as ANS-104 DataItems, we considered DataItem accessibility, such as resolving via Arweave gateways.

Being an S3 device, we were able to benefit from HyperBEAM’s modular architecture, so we extended HyperBEAM’s gateway: we built the hb_gateway_s3 store module and integrated it as a fallback extension to Arweave’s GraphQL API.

Additionally, the Stores order in hb_opts.erl has been modified to add s3 offchain dataitem retrieval as a fallback: HyperBEAM’s cache module first, then the Arweave gateway, then S3 (offchain) – offchain DataItems should have the Data-Protocol: Load-S3 tag to be recognized by the subindex.

With these extension components, an hb node running the ~s3@1.0 device benefits from the Hybrid Gateway, which can resolve both onchain and offchain dataitems.

    Load S3 Trust Assumptions, Optimismo

For a higher-level trust assumption, upload data to LS3 using the Turbo-compliant upload service, which comes with a built-in system of signed receipts.

In the current release, Load S3 is a storage layer consisting of a single centralized yet verifiable storage provider (a HyperBEAM node running the ~s3@1.0 device components).

This early-stage testing layer offers trust assumptions similar to those of other centralized services in the Arweave ecosystem, such as ANS-104 bundlers. Load S3’s gradual evolution from a layer to a decentralized network built on top of the ao network will remove the centralized and trust-based components one by one, to reach a trustless, verifiable and incentivized temporal data storage network.

    Blazingly Fast ANS-104 DataItems streaming sidecar

Besides the Hybrid Gateway, nodes like s3-node-1 support highly optimized low-level dataitem streaming, leveraging the precomputed start-byte offset of the dataitem's data and range streaming directly from the S3 cluster.

The sidecar bypasses the need to deserialize the dataitem in order to extract useful information such as tags and the dataitem's data, dramatically reducing resolution latency.

    On s3-node-1 — the sidecar is available under https://gateway.s3-node-1.load.network/resolve/:offchain-dataitemid

    And to download the full LS3 DataItem binary (the .ans104 file), you can use the following endpoint: https://gateway.s3-node-1.load.network/binary/:offchain-dataitemid

    Developer Guide

Load’s HyperBEAM node running the ~s3@1.0 device is available at the endpoint listed below – developers looking to use the HyperBEAM node as an S3 endpoint can use the official S3 SDKs, as long as the S3 commands used are supported by ~s3@1.0 (1:1 parity).

    Available Endpoints

Node Name | Endpoint | Features

To learn how to start using Load S3 today, check out the following sections.

~kem@1.0 device

    The Kernel Execution Machine device

    About

The kernel-em NIF (kernel execution machine – ~kem@1.0 device) is a HyperBEAM Rust device built on top of wgpu to offer a general GPU-instruction compute execution machine for .wgsl functions (shaders, kernels).

With wgpu being a cross-platform GPU graphics API, HyperBEAM node operators can add the KEM device to offer a compute platform for KEM functions. And with the ability to be called from within an ao process through ao.resolve, KEM functions offer great flexibility to run as GPU compute sidecars alongside ao processes.

    This device is experimental, in PoC stage

    KEM Technical Architecture

KEM function source code is deployed on Arweave (example, double integer: btSvNclyu2me_zGh4X9ULVRZqwze9l2DpkcVHcLw9Eg), and the source code TXID is used as the KEM function ID.

    A KEM function execution takes 3 parameters: function ID, binary input data, and output size hint ratio (e.g., 2 means the output is expected to be no more than 2x the size of the input).

The KEM takes the input, retrieves the kernel source code from Arweave, and executes the GPU instructions on the HyperBEAM node operator's hardware against the given input, then returns the byte results:

fn execute_kernel(
    kernel_id: String,
    input_data: rustler::Binary,
    output_size_hint: u64,
) -> NifResult<Vec<u8>> {
    let kernel_src = retrieve_kernel_src(&kernel_id).unwrap();
    let kem = pollster::block_on(KernelExecutor::new());
    let result = kem.execute_kernel_default(&kernel_src, input_data.as_slice(), Some(output_size_hint));
    Ok(result)
}

    On Writing Kernel Functions

As the kernel execution machine (KEM) is designed to have I/O as bytes, with the shader entrypoint standardized as main, a kernel function should name its entrypoint main, declare the shader as @compute, and keep the function's input/output in bytes; here is an example of a skeleton function:
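A minimal sketch of such a skeleton, assuming byte I/O packed into u32 storage buffers (the exact binding layout is this example's assumption, not a device guarantee):

@group(0) @binding(0) var<storage, read> input_bytes: array<u32>;
@group(0) @binding(1) var<storage, read_write> output_bytes: array<u32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
    let i = gid.x;
    if (i < arrayLength(&input_bytes)) {
        // identity kernel: copy the packed input bytes to the output buffer
        output_bytes[i] = input_bytes[i];
    }
}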

    Uniform Parameters

    Uniform parameters have been introduced as well, allowing you to pass configuration data and constants to your compute shaders. Uniforms are read-only data that remains constant across all invocations of the shader.

    Here is an example of a skeleton function with uniform parameters:
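Under the same assumptions, a sketch with a uniform block added (the field names are illustrative):

struct Params {
    multiplier: u32,
    offset: u32,
}

@group(0) @binding(0) var<storage, read> input_bytes: array<u32>;
@group(0) @binding(1) var<storage, read_write> output_bytes: array<u32>;
@group(0) @binding(2) var<uniform> params: Params;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
    let i = gid.x;
    if (i < arrayLength(&input_bytes)) {
        // uniforms are read-only and constant across all invocations
        output_bytes[i] = input_bytes[i] * params.multiplier + params.offset;
    }
}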

    Example: Image Glitcher

Using the image glitcher kernel function (source code): original image → glitched via the kernel function, minted as an AO NFT on Bazar: https://bazar.arweave.net/#/asset/0z8MNwaRpkXhEgIxUv8ESNhtHxVGNfFkmGkoPtu0amY

    References

  • device source code: native/kernel_em_nif

  • hb device interface: dev_kem.erl

  • nif tests: kem_nif_test.erl

  • ao process example: kem-device.lua

    LN-ExEx Data Protocol

    About LN-ExEx Data Protocol on Arweave

    About

    The LN-ExEx data protocol on Arweave is responsible for archiving Load Network's full block data, which is posted to Arweave using the Arweave Data Uploader Execution Extension (ExEx).

    After the rebrand from WeaveVM to Load Network, all the data protocol tags have changed the "*WeaveVM*" onchain term (Arweave tag) to "*LN*"

    Protocol Specifications

    The data protocol transactions follow the ANS-104 data item specifications. Each Load Network block is posted on Arweave, after borsh-brotli encoding, with the following tags:

Tag Name | Tag Value | Description

Protocol | LN-ExEx | Data protocol identifier

ExEx-Type | Arweave-Data-Uploader | The Load Network ExEx type

Content-Type | application/octet-stream | Arweave data transaction MIME type

LN:Encoding | Borsh-Brotli | Transaction's data encoding algorithms

Block-Number | $value | Load Network block number

Block-Hash | $value | Load Network block hash

Client-Version | $value | Load Network Reth client version

Network | Alphanet vx.x.x | Load Network Alphanet semver

LN:Backfill | $value | Boolean, if the data has been posted by a backfiller (true) or archiver (false or not existing data)

LN-ExEx Data Items Uploaders

  • Reth ExEx Archiver Address: 5JUE58yemNynRDeQDyVECKbGVCQbnX7unPrBRqCPVn5Z

  • Arweave-ExEx-Backfill Address: F8XVrMQzsHiWfn1CaKtUPxAgUkATXQjXULWw3oVXCiFV

    Ledger Archiver (any chain)

    Connect any EVM network to Load Network

    About

    Load Network Archiver is an ETL archive pipeline for EVM networks. It's the simplest way to interface with LN's permanent data feature without smart contract redeployments.

    Load Network Archiver Usage

    LN Archiver is the ideal choice if you want to:

    • Interface with LN's permanent data settlement and high-throughput DA

    • Maintain your current data settlement or DA architecture

    • Have an interface with LN without rollup smart contract redeployments

    • Avoid codebase refactoring

    Run An Instance

    To run your own node instance of the load-archiver tool, check out the detailed setup guide on github:

    Networks Using LN Archiver

Network | Archiver Repo | Archiver Endpoint

    Deploying OP-Stack Rollups

    Guidance on How To Deploy OP-Stack Rollups on Load Network

    About the OP Stack

The OP Stack is a generalizable framework spawned out of Optimism’s efforts to scale the Ethereum L1. It provides the tools for launching a production-quality Optimistic Rollup blockchain with a focus on modularity. Layers like the sequencer, data availability, and execution environment can be swapped out to create novel L2 setups.

    The goal of optimistic rollups is to increase L1 transaction throughput while reducing transaction costs. For example, when Optimism users sign a transaction and pay the gas fee in ETH, the transaction is first stored in a private mempool before being executed by the sequencer. The sequencer generates blocks of executed transactions every two seconds and periodically batches them as call data submitted to Ethereum. The “optimistic” part comes from assuming transactions are valid unless proven otherwise.

In the case of Load Network, we have modified OP Stack components to use LN as the data availability and settlement layer for L2s deployed using this architecture.

    curl -X POST https://load-s3-agent.load.network/upload/private \
      -H "Authorization: Bearer $load_acc_api_key" \
      -H "signed: true" \
      -H "bucket_name: $bucket_name" \
      -H "x-dataitem-name: $dataitem_name" \
      -H "x-folder-name: $folder_name" \ 
      -F "[email protected]" \
      -F "content_type=application/octet-stream"
    curl -X POST https://gateway.s3-node-1.load.network/sign/402 \
      -H "Authorization: Bearer $CONTACT_US" \
      -H "x-bucket-name: $bucket_name" \
      -H "x-load-acc: $load_acc_api_key" \
      -H "x-dataitem-key: $dataitem_key.ans104" \
      -H "x-402-address: $payee_eoa" \
      -H "x-402-amount: $usdc_amount" \
      -H "x-expires-minutes: $set_expiry"
    curl -X GET https://gateway.load.network/calldata/$LN_TXID
    curl -X GET https://gateway.load.network/war-calldata/$LN_TXID

The decimal precompile number (e.g. 0x17 has the Tag Value of 23)

    OP Stack Rollups on Load Network

We’ve built on top of the Optimism Monorepo to enable the deployment of optimistic rollups using LN as the L1. The key difference between deploying OP rollups on Load Network versus Ethereum is that when you send data batches to LN, your rollup data is also permanently archived on Arweave via LN’s Execution Extensions (ExExes).

    As a result, OP Stack rollups using LN for data settlement and data availability (DA) will benefit from the cost-effective, permanent data storage offered by Load Network and Arweave. Rollups deployed on LN use the native network gas token (tLOAD on Alphanet), similar to how ETH is used for OP rollups on Ethereum.

We’ve released a detailed technical guide on GitHub for developers looking to deploy OP rollups on Load Network. Check it out here, along with LN’s fork of the Optimism Monorepo here.

    fn execute_kernel(
        kernel_id: String,
        input_data: rustler::Binary,
        output_size_hint: u64,
    ) -> NifResult<Vec<u8>> {
        let kernel_src = retrieve_kernel_src(&kernel_id).unwrap();
        let kem = pollster::block_on(KernelExecutor::new());
        let result = kem.execute_kernel_default(&kernel_src, input_data.as_slice(), Some(output_size_hint));
        Ok(result)
    }
    // SPDX-License-Identifier: GPL-3.0
    
    // input as u32 array
    @group(0) @binding(0)
    var<storage, read> input_bytes: array<u32>;
    
    // output as u32 array
    @group(0) @binding(1)
    var<storage, read_write> output_bytes: array<u32>;
    
    // a work group of 256 threads
    @compute @workgroup_size(256)
    // main compute kernel entry point
    fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
    }
    // SPDX-License-Identifier: GPL-3.0
    
    // input as u32 array
    @group(0) @binding(0)
    var<storage, read> input_bytes: array<u32>;
    
    // output as u32 array
    @group(0) @binding(1)
    var<storage, read_write> output_bytes: array<u32>;
    
    // uniform parameters for configuration
    @group(0) @binding(2)
    var<uniform> params: vec2<u32>; // example: param1, param2
    
    // a work group of 256 threads
    @compute @workgroup_size(256)
    // main compute kernel entry point
    fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
        // Access uniform parameters
        let param1 = i32(params.x);
        let param2 = i32(params.y);
        
        // your kernel logic here
    }


    S3 Node 1 (current testnet)

https://s3-node-1.load.network

    • untouched ao compute (canonical)

    • Blazingly fast ANS-104 dataitems streaming from S3 (sidecar)

    • Hybrid Gateway

    • Powers LCP, Turbo upload service and load-s3-agent

    • x402 integration

    • supports private, expirable shareable ANS-104 DataItems (S3 objects)

    S3 Node 0 (deprecated)

    https://s3-node-0.load.network

    • ao compute with offchain dataitems

    • Hybrid Gateway


quantum@1.0 device

    Serverless quantum functions runtime (simulation)

    About

The quantum_runtime_nif is the foundation of the quantum@1.0 device: a quantum computing runtime built on top of the roqoqo simulation framework. This hyperbeam device enables serverless quantum function execution, positioning hyperbeam nodes running this device as providers of serverless function compute.

The device supports quantum circuit execution, measurement-based quantum computation, and provides a registry of pre-built quantum functions including superposition states, quantum random number generation, and quantum teleportation protocols.

This device is currently simulation-based, using the roqoqo-quest backend - for educational purposes only

    What is Quantum Computing?

Quantum computing makes use of quantum mechanical phenomena such as superposition and entanglement to process information in fundamentally different ways than classical computers.

    Unlike classical bits that exist in definite states (0 or 1), quantum bits (qubits) can exist in superposition of both states simultaneously, enabling parallel computation across multiple possibilities.

quantum@1.0 device

The quantum@1.0 device, as per its current implementation, provides a serverless quantum function execution environment. It uses the roqoqo simulation backend for development and testing, but can be adapted to real quantum computation using services like AQT or other quantum cloud providers such as IBM Quantum Platform, with minimal device code changes.

    The device supports quantum circuits with up to 32 qubits and provides a registry of whitelisted quantum functions that can be executed through HTTP calls or via ao messaging.

    Available Quantum Functions (in simulation mode)

    • superposition: creates quantum superposition state on a single qubit

• quantum_rng: quantum (pseudo)random number generator using multiple qubits

    • bell_state: creates entangled Bell states between qubits

    • quantum_teleportation: implements quantum teleportation protocol

    Quantum Runtime Technical Architecture

    The compute() function takes 3 inputs: the number of qubits to initialize, a function ID from the serverless registry, and a list of qubit indices to measure. It returns a HashMap containing the measurement results.

    Device API Examples

    Generate Quantum Random Numbers

    References

• hb device interface: dev_quantum.erl

• nif interface: quantum_runtime_nif.erl

• quantum functions registry: registry.rs

• runtime core: runtime.rs

    Ledger Archivers: State Reconstruction

Reconstructing an EVM network using its load-archiver node instance

    Understanding the World State Trie

    The World State Trie, also known as the Global State Trie, serves as a cornerstone data structure in Ethereum and other EVM networks. Think of it as a dynamic snapshot that captures the current state of the entire network at any given moment. This sophisticated structure maintains a crucial mapping between account addresses (both externally owned accounts and smart contracts) and their corresponding states.

    Each account state in the World State Trie contains several essential pieces of information:

    • Current balance of the account

    • Transaction nonce (tracking the number of transactions sent from this account)

    • Smart contract code (for contract accounts)

    • Hash of the associated storage trie (linking to the account’s persistent storage)

    This structure effectively represents the current status of all assets and relevant information on the EVM network. Each new block contains a reference to the current global state, enabling network nodes to efficiently verify information and validate transactions.

    The Dynamic Nature of State Management

    An important distinction exists between the World State Trie database and the Account Storage Trie database. While the World State Trie database maintains immutability and reflects the network’s global state, the Account Storage Trie database remains mutable with each block. This mutability is necessary because transaction execution within each block can modify the values stored in accounts, reflecting changes in account states as the blockchain progresses.

    Reconstructing the World State with Load Network Archivers

    The core focus of this article is demonstrating how Load Network Archivers’ data lakes can be leveraged to reconstruct an EVM network’s World State. We’ve developed a proof-of-concept library in Rust that showcases this capability using a customized Revm wrapper. This library abstracts the complexity of state reconstruction into a simple interface that requires just 10 lines of code to implement.

    Here’s how to reconstruct a network’s state using our library:

    The reconstruction process follows a straightforward workflow:

    1. The library connects to the specified Load Network Archive network

    2. Historical ledger data is retrieved from the Load Network Archiver data lakes

    3. Retrieved blocks are processed through our custom minimal EVM execution machine

    4. The EVM StateManager applies the blocks sequentially, updating the state accordingly

This proof-of-concept implementation is available on GitHub: https://github.com/weaveVM/evm-state-reconstructing. The final result is a complete reconstruction of the network’s World State.

Load Network Archivers have evolved beyond their foundation as decentralized archive nodes. This proof of concept demonstrates how our comprehensive data storage enables full EVM network state reconstruction - a capability that opens new possibilities for network analysis, debugging, and state verification.

    We built this PoC to showcase what’s possible when you combine permanent storage with proper EVM state handling. Whether you’re analyzing historical network states, debugging complex transactions, or building new tools for chain analysis, the groundwork is now laid.

    Deploying an ERC20

Tutorial on how to deploy an ERC20 on Load Network

    Add Load Network Alphanet to MetaMask

    Before deploying, make sure the Load Network network is configured in your MetaMask wallet. Check the Network Configurations.

    ERC20 Contract

For this example, we will use the ERC20 token template provided by OpenZeppelin's smart contract library.

    Deployment

Now that you have your contract source code ready, compile the contract and hit deploy with an initial supply (e.g. 69420 LOADs, because why not).

    After deploying the contract successfully, check your EOA balance!

LN-Dymension: DA client for RollApps

Description of the Load Network integration as a Data Availability client for Dymension RollApps

Links

• Dymension: https://dymension.xyz

    Key Details

x402@1.0 device

The first ao token x402 facilitator

This was a demo for educational purposes only; for a production-ready x402 facilitator on ao, check out the hyper-x402 section

    Hyper-x402

Hyper-x402 is an x402-rs fork with custom ao network support that works alongside the rest of the supported networks (EVMs, Solana, Avalanche, Sei, etc.), meaning you can run the facilitator not only with those networks, but also with ao token support.

    https://avalanche.load.rs/v1/info
    Dymension L1 Hub
    https://github.com/WeaveVM/dymension-wvm-archiver
    https://dymension.load.rs/v1/info
    Humanode EVM
    https://github.com/weaveVM/humanode-wvm-archiver
    https://humanode.load.rs/v1/info
    Scroll Mainnet
    https://github.com/weaveVM/scroll-wvm-archiver
    https://scroll.load.rs/v1.info
    phala-mainnet-0
    https://github.com/weaveVM/phala-wvm-archiver
    https://phala.load.rs/v1.info
    tokenize.rs
    AQT.eu
    dev_quantum.erl
    quantum_runtime_nif.erl
    registry.rs
    runtime.rs
    execution flow

The hyper-x402 ao facilitator is hosted under hyper-x402.load.network (supports base & ao), and the best way to showcase it is via the axum example.

    Axum Middleware Example

This example repository comes with a tailored axum example that works with the ao network ($AO as the payment token). The changes made to main.rs are:

• falls back to our hosted facilitator if the env doesn't set a local facilitator (running at port 8080)

    • creates a PriceBuilderTag set to pay the AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA address:

It's also possible to define a token struct instance on ao without being tied to $AO as the payment token (the default instance) by using the by_ao_network function; at the moment only AO, ARIO and USDA are supported.

    • the route /protected-route is protected with a cost of 0.000000000001 $AO

• a wallet JWK must be saved at the ./wallet.json file location (gitignored - use a burner!) in order to complete the test flow (required balance >= 0.000000000001 AO)

    Run the test

    In order to GET the /protected-route and pay to access its gated content, you need to:

    • run the axum server binaries: cargo run --bin x402-axum-example

    • run the script that automates accessing the protected endpoint and handles the payment creation and submission: cargo run --bin ao_payment_helper

    Expected result:

    onchain proof: https://www.ao.link/#/message/_zrmkURCLrHOKWzsErmFTEdXzcUEur1iWGErepju3w0

[OLD] x402@1.0 device

    About

The x402@1.0 device is the first x402-compliant facilitator implemented with native ao token support, built as a HyperBEAM device (NIF). This facilitator's API follows the spec used in x402-rs.

N.B: The x402@1.0 device is an early WIP facilitator with partial x402-standard API compatibility. Promising work is being done, but treat it as a PoC for now - endpoint: https://x402-node-1.load.network/~x402@1.0/supported

    The Why

Following the x402 hype in mid-October 2025 and its adoption within the ai agents stack, we asked ourselves: is there any better compute platform for ai agents than ao? Unbiasedly speaking, no. ao's alien compute enables functionalities like onchain verifiable LLMs (already live on the network), API-keyless autonomous trading agents, permissionless access to Arweave's datalake (WeaveDrive), etc.

So following that hype, we thought: if any network should be supported in the x402 stack, it's the ao network (and its token standard). Additionally, the encapsulation of the x402@1.0 device in the HyperBEAM OS exposes the facilitator natively to the ao canonical stack. To put it shortly: this device can be integrated into an existing EVM/SOL/* x402 facilitator, adding ao support (maintaining API schema compatibility) AND offering additional routes to trustless compute infra for ai agents.

    Advantages

    Compared to EVM-based facilitators, this HyperBEAM device offers:

    • ~270ms total latency for the complete request lifecycle

    • native payment history storage via ao's Arweave settlement (GraphQL, state lookup)

    • potential native route to offer ao's compute alongside the payments railway through a single facilitator

    • partial-compatibility with the x402 facilitator API standard

    • micropayments support with near-zero gas fees

    • no reliance on API keys to function (no JSON-RPC API keys needed)

    • support for AR/ETH/SOL ANS-104 signatures

    Roadmap

| feature | description + status |
| --- | --- |
| x402-rs API schema minimal compatibility | done |
| Client Library | done |
| ao tokens support | done |
| payments storage | done (native feature via ao) |
| server middleware | provide ready-to-use integration for Rust web frameworks such as axum and tower (also /settle /verify routes) - wip |
| integration in an x402-rs fork | routing ao requests to x402@1.0 from an EVM/SOL facilitator - wip |

    Technical Overview

On HyperBEAM’s x402@1.0 device, the client’s X-Payment header is an AO message signed by the user and serialised as a base64 string, but not posted to Arweave – technically speaking, a signed ANS-104 dataitem that follows the ao protocol.

The HTTP request received by the dev_x402_facilitator.erl module is handed to the Rust NIF (x402), where the NIF reconstructs and verifies the dataitem’s integrity (payment token, transfer amount, recipient, ao protocol tags, etc), then posts the signed dataitem to Arweave and makes the AO scheduler aware of the message (payment settlement). On success, the erlang device’s counterpart returns the ao message ID along with the unlocked payload data. The whole process takes under 300ms with near-zero fees, beating the EVM counterpart on speed and price. For an in-practice example, check out the complete flow test here and this onchain proof.

On the other hand, EVM facilitators (e.g. x402-rs) use the same HTTP schema, but the payment is an ERC‑3009 transferWithAuthorization: the client signs EIP‑712 typed data, the facilitator’s server parses the payload, talks to the target chain via JSON-RPC (in most cases requiring API keys to avoid rate limiting), and spends gas to execute the transfer on-chain before responding.

use std::collections::HashMap;
use rustler::NifResult;
// `Runtime` is provided by the device's runtime core (runtime.rs)

#[rustler::nif]
    fn hello() -> NifResult<String> {
        Ok("Hello world!".to_string())
    }
    
    #[rustler::nif(schedule = "DirtyCpu")]
    fn compute(
        num_qubits: usize,
        function_id: String,
        measurements: Vec<usize>,
    ) -> NifResult<HashMap<String, f64>> {
        let runtime = Runtime::new(num_qubits);
        match runtime.execute_serverless(function_id, measurements) {
            Ok(result) => Ok(result),
            Err(_) => Err(rustler::Error::Term(Box::new("execution failed"))),
        }
    }
    curl -X POST "https://hb.load.rs/[email protected]/compute" \
      -H "Content-Type: application/json" \
      -d '{
        "function_id": "quantum_rng",
        "num_qubits": 4,
        "measurements": [0, 1, 2, 3]
      }'
        let ao_token = USDCDeployment::by_network(Network::Ao).pay_to(MixedAddress::Offchain(
            "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA".to_string(),
        ));
        let usda_token = USDCDeployment::by_ao_network("USDA").pay_to(MixedAddress::Offchain(
            "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA".to_string(),
        ));
    status=200 OK body=This is a VIP content!


Load Network provides a gateway for Arweave's permanent storage with its own (LN) high data throughput, bringing the permanently stored data into Dymension RollApps.

• Current maximum encoded blob size is 8 MiB (8_388_608 bytes).

• Load Network is currently operating in a public testnet (Alphanet) - it is not recommended for use in production environments.

Prerequisites and Resources

    1. Understand how to boot basic Dymension RollApp and how to configure it.

    2. Obtain test tLOAD tokens through our faucet for testing purposes.

    3. Monitor your transactions using the Load Network explorer.

    How it works

You may choose to use Load Network as the data availability layer of your RollApp. We assume that you know how to boot and configure the basics of your dymint RollApp; as an example, you may use the https://github.com/dymensionxyz/rollapp-evm repository, which uses the "mock" DA client by default. To use Load Network, simply set the following environment variables before the config generation step (init.sh):

export DA_CLIENT="weavevm" # This is the key change
export WVM_PRIV_KEY="your_hex_string_wvm_priv_key_without_0x_prefix"

init.sh will generate a basic configuration for da_config.json in dymint.toml, which should look like:

da_config = '{"endpoint":"https://alphanet.load.network","chain_id":9496,"timeout":60000000000,"private_key_hex":"your_hex_string_load_priv_key_without_0x_prefix"}'

In this example we use the PRIVATE_KEY of your LN address. It's not the most secure way to handle transaction signing, which is why we also provide the ability to use web3signer as a signing method. To enable web3signer you will need to change the init.sh script and add the corresponding fields, or change da_config.json in dymint.toml directly, e.g.:

da_config = '{"endpoint":"https://alphanet.load.network","chain_id":9496,"timeout":"60000000000","web3_signer_endpoint":"http://localhost:9000"}'

To enable TLS, the following fields should be added to the JSON file: web3_signer_tls_cert_file, web3_signer_tls_key_file, web3_signer_tls_ca_cert_file.

Web3 signer

    Web3Signer is a tool by Consensys which allows remote signing.

    Warnings

Using a remote signer comes with risks; please read the web3signer docs. However, this is the recommended way to sign transactions for enterprise users and production environments. Web3Signer is not maintained by the Load Network team. An example of the simplest local web3signer deployment (for testing purposes): https://github.com/allnil/web3signer_test_deploy. Example of the configuration used:

In the rollapp-evm log you will eventually see something like this:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;
    
    /// @title Useless Testing Token
    /// @notice Just a testing shitcoin
    /// @dev SupLoad gmgm
    /// @author pepe frog
    import "@openzeppelin/contracts/token/ERC20/ERC20.sol";
    
    contract WeaveGM is ERC20 {
        constructor(uint256 initialSupply) ERC20("supLoad", "LOAD") {
            _mint(msg.sender, initialSupply);
        }
    }

    Cloud Platform (LCP)

    About Load Cloud Platform

    Uploading data onchain shouldn’t be any more difficult than using Google Drive. The reason tools like Google Drive are popular is because they just work and are cheap/free. Their hidden downsides? You don’t own your data, it’s not permanent, and – especially for blockchain projects – it’s not useful for application developers.

    Users just want to put their data somewhere and forget about the upkeep. Developers just want a permanent reference to their data that resolves in any environment. Whichever you are, we built cloud.load.network for you.

    The Load Cloud is an all-in-one tool to interact with various Load Network storage interfaces and pipelines: one UI, one API key, various integrations, with web2 UX.

    Using the API keys generated on cloud.load.network - you can access other features such as load0 and Load S3 storage.

It has been a few months since the first release. Since the initial alpha, we have been working towards a more complete dashboard for the onchain data center.

This builds towards our vision of the decentralized AWS – meeting developers where they are, with familiar devex and 1:1 parallels with the web2 tools they already use. The decentralized compute layer is in place via Load’s EVM/AO layers, and storage via the HyperBEAM-powered S3 device.

    The V2 release of the Load Cloud Platform (LCP) introduces several, highly-requested features:

    • private ANS-104 DataItems

    • DataItems as first-class s3 objects

    • access controlled DataItems (s3 objects)

• s3@1.0 device upgrades

    The foundation: a universal API key system for Load

    V2 introduces the concept of Load accounts and API keys. A unified auth layer finally enables us to provision scoped access to Load’s HyperBEAM S3 layer and build access control for data.

Under the hood, Load Cloud email login uses Magic to create an EVM wallet. This wallet’s keys have access to the Load authentication API and generate keys in the dashboard that can read and write from S3.

    This approach enables API access to offchain services with a wallet address as the primary identity, leaving the system open in the future for offchain components to handle things like payments and native integration with onchain compute.

    Private, access-controlled ANS-104 DataItems

Private data and access control are 2 of the most requested features we have received since we started working on the onchain data center roadmap.

    How can I access my data without necessarily encrypting it and storing the encrypted data onchain? How can I gate access to my data?

    Private DataItems

Private ANS-104 DataItems are possible today through the introduction of private buckets alongside load_acc-gated JWT tokens. Before private dataitems, all Load S3 dataitems were public and stored according to the `offchain-dataitems` data protocol. However, as of today, any LCP user can:

    • Create a private bucket and select which LCP users (load_acc api keys) can access the bucket’s objects

    • Upload data to the private bucket and have it stored as ANS-104 Dataitems

• Generate expirable pre-signed URLs for private data sharing

    • Control the access to the private bucket’s dataitems by adding/removing load_acc API keys

    Therefore, LCP & Load S3 users can now store DataItems privately in their private S3 buckets and control the access to the s3 objects, with zero cryptography (encryption/decryption) overhead to keep the data private as if it was pushed to Arweave. Access to Load S3 dataitems can be controlled via signed auto-expiring sharing URLs, or by making the data permanently public on Arweave via Load’s Turbo integration.

    Access Control

As mentioned above, LCP V2 allows users to control access to their privately stored offchain DataItems. Access is gated by the uploader’s master load_acc api key - in the next patch release, we will allow users to add other LCP users via their registered email addresses.

    With this upgrade, private offchain ANS-104 DataItems are now the first-class data format for S3 objects in Load S3’s LCP.

s3@1.0 HyperBEAM device upgrades

In order to make the private offchain DataItems accessible to their rightful whitelisted users, we had to upgrade the HyperBEAM device’s sidecar (the native integration of s3@1.0 with the MinIO cluster) and make it possible to:

• Generate presigned URLs (JWT-gated) for private dataitems given a private bucket name, a dataitem s3 key, a user’s load_acc and an expiry timestamp. The device generates the presigned URL after validating the requester’s ownership of the bucket and the dataitem.

• Stream the private dataitem’s data field directly from the S3 cluster to the user’s browser after JWT validation, without deserializing the ANS-104 dataitem.

These features have been released under s3_nif v0.3.1 and are live under s3-node-1.load.network. To check the s3 device sidecar’s upgrades, visit the source code here.

    load-s3-agent v4

The Load S3 HTTP API and ANS-104 data orchestrator is now in v4, with several features necessary for the functionality of the LCP backend and Load S3 clients. The most relevant feature to this post in the agent’s v4 release is the /upload/private HTTP POST method, which lets LCP users programmatically push raw data to their LCP private bucket, authenticated with their load_acc API keys, with the final data format being an ANS-104 dataitem prepared and signed by the agent’s wallet.

For more examples, check out the S3 agent repository.

    Start Using LCP Today

Today you can use the LCP platform to create buckets and folders, and temporarily store data privately in object-storage format. The LCP uses Load's S3 HyperBEAM data storage layer for hotcache storage.

    LN-EigenDA Proxy Server

    Permanent EigenDA blobs

    Links

    EigenDA proxy: repository

    About EigenDA Side Server Proxy

LN-EigenDA wraps the high-level EigenDA client, exposing endpoints for interacting with the EigenDA disperser in conformance to the OP Alt-DA server spec, and adding disperser verification logic. This simplifies integrating EigenDA into various rollup frameworks by minimizing the footprint of changes needed within their respective services.

About LN-EigenDA Side Server Proxy Integration

It's a Load Network integration as a secondary backend of eigenda-proxy. In this scope, Load Network provides an EVM gateway/interface for EigenDA blobs on Arweave's Permaweb, removing the need for trust assumptions and reliance on centralized third-party services to sync historical data, and providing a "pay once, save forever" data storage feature for EigenDA blobs.

    Key Details

    • Current maximum encoded blob size is 8 MiB (8_388_608 bytes).

• Load Network is currently operating in a public testnet (Alphanet) - not recommended for use in production environments.

    Prerequisites and Resources

    1. Review the configuration parameters table and .env file settings for the Holesky network.

2. Obtain test tLOAD tokens through our faucet for testing purposes.

3. Monitor your transactions using the Load Network explorer.

    Usage Examples

Please double-check your .env file values when you start the eigenda-proxy binary with env vars; they may conflict with flags.

    Start eigenda proxy with LN private key:

    POST command:

    GET command:

    Examples using Web3signer as a remote signer

    Web3 signer

Web3Signer is a tool by Consensys which allows remote signing.

Warnings

Using a remote signer comes with risks; please read the web3signer docs. However, this is the recommended way to sign transactions for enterprise users and production environments. Web3Signer is not maintained by the Load Network team. An example of the simplest local web3signer deployment (for testing purposes): https://github.com/allnil/web3signer_test_deploy

    start eigenda proxy with signer:

    start web3signer tls:

    use evm_state_reconstructing::utils::core::evm_exec::StateReconstructor;  
    use evm_state_reconstructing::utils::core::networks::Networks;  
    use evm_state_reconstructing::utils::core::reconstruct::reconstruct_network;  
    use anyhow::Error;
    
    async fn reconstruct_state() -> Result<StateReconstructor, Error> {  
        let network: Networks = Networks::metis();  
        let state: StateReconstructor = reconstruct_network(network).await?;  
        Ok(state)  
    }  
    # Set environment variables
    export DA_CLIENT="weavevm"  # This is the key change
    export WVM_PRIV_KEY="your_hex_string_wvm_priv_key_without_0x_prefix"
    
    export ROLLAPP_CHAIN_ID="rollappevm_1234-1"
    export KEY_NAME_ROLLAPP="rol-user"
    export BASE_DENOM="arax"
    export MONIKER="$ROLLAPP_CHAIN_ID-sequencer"
    export ROLLAPP_HOME_DIR="$HOME/.rollapp_evm"
    export SETTLEMENT_LAYER="mock"
    
    # Initialize and start
    make install BECH32_PREFIX=$BECH32_PREFIX
    export EXECUTABLE="rollapp-evm"
    $EXECUTABLE config keyring-backend test
    
    sh scripts/init.sh
    
    # Verify dymint.toml configuration
    cat $ROLLAPP_HOME_DIR/config/dymint.toml | grep -A 5 "da_config"
    
    dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "max_idle_time" -v "2s"
    dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "max_proof_time" -v "1s"
    dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "batch_submit_time" -v "30s"
    dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "p2p_listen_address" -v "/ip4/0.0.0.0/tcp/36656"
    dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "settlement_layer" -v "mock"
    dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "node_address" -v "http://localhost:36657"
    dasel put -f "${ROLLAPP_HOME_DIR}"/config/dymint.toml "settlement_node_address" -v "http://127.0.0.1:36657"
    
    
    # Start the rollapp
    
    $EXECUTABLE start --log_level=debug \
      --rpc.laddr="tcp://127.0.0.1:36657" \
      --p2p.laddr="tcp://0.0.0.0:36656" \
      --proxy_app="tcp://127.0.0.1:36658"
    
    INFO[0000] weaveVM: successfully sent transaction[tx hash 0x8a7a7f965019cf9d2cc5a3d01ee99d56ccd38977edc636cc0bbd0af5d2383d2a]  module=weavevm
    INFO[0000] wvm tx hash[hash 0x8a7a7f965019cf9d2cc5a3d01ee99d56ccd38977edc636cc0bbd0af5d2383d2a]  module=weavevm
    DEBU[0000] waiting for receipt[txHash 0x8a7a7f965019cf9d2cc5a3d01ee99d56ccd38977edc636cc0bbd0af5d2383d2a attempt 0 error get receipt failed: failed to get transaction receipt: not found]  module=weavevm
    INFO[0002] Block created.[height 35 num_tx 0 size 786]   module=block_manager
    DEBU[0002] Applying block[height 35 source produced]     module=block_manager
    DEBU[0002] block-sync advertise block[error failed to find any peer in table]  module=p2p
    INFO[0002] MINUTE EPOCH 6[]                              module=x/epochs
    INFO[0002] Epoch Start Time: 2025-01-13 09:21:03.239539 +0000 UTC[]  module=x/epochs
    INFO[0002] commit synced[commit 436F6D6D697449447B5B3130342038203131302032303620352031323920393020343520313633203933203235322031352031343320333920313538203131342035382035352031352038322038203939203132392032333520313731203230382031392032343320313932203139203233352036355D3A32337D]
    DEBU[0002] snapshot is skipped[height 35]
    INFO[0002] Gossipping block[height 35]                   module=block_manager
    DEBU[0002] Gossiping block.[len 792]                     module=p2p
    DEBU[0002] indexed block[height 35]                      module=txindex
    DEBU[0002] indexed block txs[height 35 num_txs 0]        module=txindex
    INFO[0002] Produced empty block.[]                       module=block_manager
    DEBU[0002] Added bytes produced to bytes pending submission counter.[bytes added 786 pending 15719]  module=block_manager
    INFO[0003] data available in weavevm[wvm_tx 0x8a7a7f965019cf9d2cc5a3d01ee99d56ccd38977edc636cc0bbd0af5d2383d2a wvm_block 0xe897eab56aee50b97a0f2bd1ff47af3c834e96ca18528bb869c4eafc0df583be wvm_block_number 5651207]  module=weavevm
    DEBU[0003] Submitted blob to DA successfully.[]          module=weavevm

    Google BigQuery ETL

    Load Network GBQ ETL ExEx

    About

This ExEx is an Extract-Transform-Load (ETL) pipeline that loads JSON-serialized Load Network blocks into Google BigQuery.

    Get the ExEx code

    echo -n "hello private world" | curl -X POST https://load-s3-agent.load.network/upload/private \
      -H "Authorization: Bearer $load_acc_api_key" \
      -H "x-bucket-name: $bucket_name" \
      -H "x-folder-name": $folder_name" \ 
      -H "x-dataitem-name: $object_friendly_name" \
      -F "file=@-;type=text/plain" \
      -F "content_type=text/plain"
    ./bin/eigenda-proxy \
        --addr 127.0.0.1 \
        --port 3100 \
        --eigenda.disperser-rpc disperser-holesky.eigenda.xyz:443 \
        --eigenda.signer-private-key-hex $PRIVATE_KEY \
    --eigenda.max-blob-length 8MiB \
        --eigenda.eth-rpc https://ethereum-holesky-rpc.publicnode.com \
        --eigenda.svc-manager-addr 0xD4A7E1Bd8015057293f0D0A557088c286942e84b \
        --weavevm.endpoint https://alphanet.load.network/ \
        --weavevm.chain_id 9496 \
        --weavevm.enabled \
        --weavevm.private_key_hex $WVM_PRIV_KEY \
        --storage.fallback-targets weavevm \
        --storage.concurrent-write-routines 2 
    curl -X POST "http://127.0.0.1:3100/put?commitment_mode=simple" \
          --data-binary "some data that will successfully be written to EigenDA" \
          -H "Content-Type: application/octet-stream" \
          --output response.bin
    COMMITMENT=$(xxd -p response.bin | tr -d '\n' | tr -d ' ')
    curl -X GET "http:/127.0.0.1:3100/get/0x$COMMITMENT?commitment_mode=simple" \
         -H "Content-Type: application/octet-stream"
    ./bin/eigenda-proxy \
        --addr 127.0.0.1 \
        --port 3100 \
        --eigenda.disperser-rpc disperser-holesky.eigenda.xyz:443 \
        --eigenda.signer-private-key-hex $PRIVATE_KEY \
        --eigenda.max-blob-length 8MiB \
        --eigenda.eth-rpc https://ethereum-holesky-rpc.publicnode.com \
        --eigenda.svc-manager-addr 0xD4A7E1Bd8015057293f0D0A557088c286942e84b \
        --weavevm.endpoint https://alphanet.load.network/ \
        --weavevm.chain_id 9496 \
        --weavevm.enabled \
        --weavevm.web3_signer_endpoint http://localhost:9000 \
        --storage.fallback-targets weavevm \
        --storage.concurrent-write-routines 2 
    ./bin/eigenda-proxy \
        --addr 127.0.0.1 \
        --port 3100 \
        --eigenda.disperser-rpc disperser-holesky.eigenda.xyz:443 \
        --eigenda.signer-private-key-hex $PRIVATE_KEY \
        --eigenda.max-blob-length 8MiB \
        --eigenda.eth-rpc https://ethereum-holesky-rpc.publicnode.com \
        --eigenda.svc-manager-addr 0xD4A7E1Bd8015057293f0D0A557088c286942e84b \
    --weavevm.endpoint https://alphanet.load.network/ \
        --weavevm.chain_id 9496 \
        --weavevm.enabled \
        --weavevm.web3_signer_endpoint https://localhost:9000 \
        --storage.fallback-targets weavevm \
        --storage.concurrent-write-routines 2 \
        --weavevm.web3_signer_tls_cert_file $SOME_PATH_TO_CERT \
        --weavevm.web3_signer_tls_key_file $SOME_PATH_TO_KEY \
        --weavevm.web3_signer_tls_ca_cert_file $SOME_PATH_TO_CA_CERT

    0xbabe2: Large Data Uploads

    Using Load Network's 0xbabe2 transaction format for large data uploads - the largest EVM transaction in history

    About 0xbabe2 Transaction Format

    0xbabe2 is the newest data transaction format from the Bundler data protocol. Also called "Large Bundle," it's a bundle under version 0xbabe2 (address: 0xbabe2dCAf248F2F1214dF2a471D77bC849a2Ce84) that exceeds the Load Network L1 and 0xbabe1 transaction input size limits, introducing incredibly high size efficiency to data storage on Load Network.

    For example, with Alphanet v0.4.0 metrics running at 500 mgas/s, a Large Bundle has a max size of 246 GB. However, to ensure a smooth DevX and optimal finalization period (aka "safe mode"), we have limited the 0xbabe2 transaction input limit to 2GB at the Bundler SDK level. If you want higher limits, you can achieve this by changing a simple constant!

If you have 10 hours to spare, make several teas and watch this 1 GB video streamed to you onchain from Load Network! 0xbabe2 txid: https://bundler.load.rs/v2/resolve/0x45cfaff6c3a507b1b1e88ef502ce32f93e7f515d9580ea66c340dc69e9d47608

Architecture design TLDR

In simple terms, a Large Bundle consists of n smaller chunks (standalone bundles) that are sequentially connected tail-to-head. At the end, the Large Bundle is a reference to all of these sequentially related chunks, packing all of the chunk IDs into a single 0xbabe2 bundle that is sent to Load Network.

To dive deeper into the architecture design behind 0xbabe2 and how it works, check out the 0xbabe2 section in the Bundler documentation.

With the upcoming Load Network release (Alphanet v0.5.0) reaching 1 gigagas/s, the 0xbabe2 data size limit will double to 492 GB - almost a 0.5 TB EVM transaction.

    0xbabe2 Broadcasting

Broadcasting a 0xbabe2 bundle to Load Network can be done via the Bundler Rust SDK in 2 ways: normal 0xbabe2 broadcasting (single-wallet, single-threaded) or the multi-wallet, multi-threaded method (using SuperAccount).

Single-Threaded Broadcasting

Uploading data via the single-threaded method is efficient when the data isn't very large; otherwise, it incurs very high latency to finish all data chunking and then bundle finalization:

Multi-Threaded Broadcasting

Multi-threaded 0xbabe2 broadcasting is done via a multi-wallet architecture that ensures parallel chunk settlement on Load Network, maximizing the usage of the network's data throughput. To broadcast a bundle using the multi-threaded method, you need to initiate a SuperAccount instance and fund the chunkers:

A SuperAccount is a set of wallets created and stored as keystore wallets locally under your chosen directory. In Bundler terminology, each wallet is called a "chunker". Chunkers optimize the DevX of uploading a Large Bundle's chunks to LN by allocating each chunk to a chunker (~4MB per chunker), moving data uploads from a single-wallet, single-threaded design to a multi-wallet, multi-threaded design.

0xbabe2 Data Retrieval

0xbabe2 transaction data retrieval can be done using either the Rust SDK or the REST API. Using the REST API to resolve a Large Bundle (chunk reconstruction until reaching the final data) is faster for user-facing usage, as it streams chunks, resulting in near-instant data usability (e.g., rendering in the browser).

    Rust SDK

    REST API

    What you can fit in a 492GB 0xbabe2 transaction

    Modern LLMs

| Model | What Can Fit in one 0xbabe2 transaction |
| --- | --- |
| GPT-4 Turbo (1100B params est.) | 0.22 models (16-bit) or 0.89 models (4-bit) |
| Llama 3 70B | 3.51 models (16-bit) or 14.06 models (4-bit) |
| Llama 3 405B | 0.61 models (16-bit) or 2.43 models (4-bit) |
| Gemini Pro (220B params est.) | 1.12 models (16-bit) or 4.47 models (4-bit) |
| Gemini Ultra (750B params est.) | 0.33 models (16-bit) or 1.31 models (4-bit) |
| Mistral Large (123B params est.) | 2.00 models (16-bit) or 8.00 models (4-bit) |
| Claude 3 Haiku (70B params) | 3.51 models (16-bit) or 14.06 models (4-bit) |
| Claude 3 Sonnet (175B params) | 1.41 models (16-bit) or 5.62 models (4-bit) |
| Claude 3 Opus (350B params) | 0.70 models (16-bit) or 2.81 models (4-bit) |
| Claude 3.5 Sonnet (250B params) | 0.98 models (16-bit) or 3.94 models (4-bit) |
| Claude 3.7 Sonnet (300B params) | 0.82 models (16-bit) or 3.28 models (4-bit) |
| GPT-4o (1500B params est.) | 0.16 models (16-bit) or 0.66 models (4-bit) |

    Blockchain Data

| Data Type | What Can Fit in one 0xbabe2 transaction |
| --- | --- |
| Solana's State Snapshot (~70GB) | ~7 instances |
| Bitcoin Full Ledger (~625 GB) | ~78% of the ledger |
| Ethereum Full Ledger (~1250 GB) | ~40% of the ledger |
| Ethereum blobs (~2.64 GB per day) | ~186 days worth of blob data |
| Celestia's max throughput per day (112.5 GB) | 4.37× capacity |

    Media Files

| File Type | What Can Fit in one 0xbabe2 transaction |
| --- | --- |
| MP3 Songs (4MB each) | 123,000 songs |
| Full HD Movies (5GB each) | 98 movies |
| 4K Video Footage (2GB per hour) | 246 hours |
| High-Resolution Photos (3MB each) | 164,000 photos |
| Ebooks (5MB each) | 100,000 books |

    Other Data

| Data Type | What Can Fit in one 0xbabe2 transaction |
| --- | --- |
| Documents/Presentations (1MB each) | 492,000 files |
| Database Records (5KB per record) | 98 billion records |
| Virtual Machine Images (8GB each) | 61 VMs |
| Docker container images (500MB each) | 1,007 containers |
| Genome sequences (4GB each) | 123 genomes |

    Network configurations

    Load Network Configurations

    Alphanet V5

    • RPC URL: https://alphanet.load.network

    • Chain ID: 9496

• Alphanet Faucet: https://load.network/faucet

    • Testnet Currency Symbol: tLOAD

• Explorer: https://explorer.load.network

• Chainlist: https://chainlist.org/chain/9496

    Add to MetaMask

    Click on Networks > Add a network > Add a network manually

    Turbo Offchain Upload Service

    Learn about the ARIO's Turbo-compliant offchain upload service

    About

The loaded-turbo-api is the first Turbo-compliant, offchain, S3-based upload service on HyperBEAM. With this new upload service, Turbo and the broader Arweave ecosystem users can start using temporary storage directly via the official Turbo SDK: a simple, one-line endpoint change.

Thanks to the Turbo SDK’s clear open standards, it was possible to develop a layer on top of Load S3 that acts as an upload service, inheriting the storage features of Load S3 while maintaining interoperability and integrity with Arweave’s ANS-104 data standard. Load’s upload service inherits the SLA offered by the Load S3 client.




use anyhow::Error;
use bundler::utils::core::large_bundle::LargeBundle;
    
    async fn send_large_bundle_single_thread() -> Result<String, Error> {
        let private_key = String::from("");
        let content_type = "text/plain".to_string();
        let data = "~UwU~".repeat(4_000_000).as_bytes().to_vec();
        let large_bundle = LargeBundle::new()
            .data(data)
            .private_key(private_key)
            .content_type(content_type)
            .chunk()
            .build()?
            .propagate_chunks()
            .await?
            .finalize()
            .await?;
    Ok(large_bundle)
    }
    use bundler::utils::core::super_account::SuperAccount;
    // init SuperAccount instance
    let super_account = SuperAccount::new()
        .keystore_path(".bundler_keystores".to_string())
        .pwd("weak-password".to_string()) // keystore pwd
        .funder("private-key".to_string()) // the pk that will fund the chunkers
        .build();
    // create chunkers
    let _chunkers = super_account.create_chunkers(Some(256)).await.unwrap(); // Some(amount) of chunkers
// fund chunkers (1 tLOAD each)
let _fund = super_account.fund_chunkers().await.unwrap(); // will fund each chunker with 1 tLOAD
    // retrieve chunkers
    let loaded_chunkers = super_account.load_chunkers(None).await.unwrap(); // None to load all chunkers
    async fn send_large_bundle_multi_thread() -> Result<String, Error> {
        // will fail until a tLOAD funded EOA (pk) is provided, take care about nonce if same wallet is used as in test_send_bundle_with_target
        let private_key =
            String::from("6f142508b4eea641e33cb2a0161221105086a84584c74245ca463a49effea30b");
        let content_type = "text/plain".to_string();
        let data = "~UwU~".repeat(8_000_000).as_bytes().to_vec();
        let super_account = SuperAccount::new()
            .keystore_path(".bundler_keystores".to_string())
            .pwd("test".to_string());
        let large_bundle = LargeBundle::new()
            .data(data)
            .private_key(private_key)
            .content_type(content_type)
            .super_account(super_account)
            .chunk()
            .build()
            .unwrap()
            .super_propagate_chunks()
            .await
            .unwrap()
            .finalize()
            .await
            .unwrap();
        println!("{:?}", large_bundle);
        Ok(large_bundle)
    }
    async fn retrieve_large_bundle() -> Result<Vec<u8>, Error> {
        let large_bundle = LargeBundle::retrieve_chunks_receipts(
            "0xb58684c24828f8a80205345897afa7aba478c23005e128e4cda037de6b9ca6fd".to_string(),
        )
        .await?
        .reconstruct_large_bundle()
        .await?;
        Ok(large_bundle)
    }
    curl -X GET https://bundler.load.rs/v2/resolve/$0xBABE2_TXID

The upload service limits and features are subject to change; to stay up to date, always check the service’s public configs here and follow the docs on GitHub.

    Compatibility

| Endpoint | Status |
| --- | --- |
| GET / GET /info | ✅ |
| GET /bundler_metrics | ✅ (placeholder) |
| GET /health | ✅ |
| GET /v1/tx/{dataitem_id}/offsets | ✅ (placeholder) |
| POST /v1/tx/{token} (<= 10MB uploads) | ✅ |
| GET /v1/tx/{dataitem_id}/status | ✅ |
| GET /v1/chunks/{token}/-1/-1 | ✅ |

    Service Limits & Specs

The offchain LS3-powered ANS-104 upload service matches the service standards set by the Turbo service, such as:

• 4GB dataitem max size

• returns service-signed receipts

• matches the multipart upload business logic, including the chunking strategy (min/max/default chunk size)

• does not bundle dataitems, as they are stored on LS3; bundling is left to onchain upload services that require onchain fee fine-tuning (e.g. Turbo)

• comes with fast finality indexes & data caches: an LS3 dataitem streaming gateway powered by HyperBEAM

    • service AR address: 2BBwe2pSXn_Tp-q_mHry0Obp88dc7L-eDIWx0_BUfD0

    • offchain -> onchain dataitems anchoring is supported via the

    Endpoints:

    • loaded-turbo-api (offchain, Load S3 bundler endpoint): https://loaded-turbo-api.load.network

    • data cache / fast finality index: https://gateway.s3-node-1.load.network

    Examples

    Small uploads (<= 10 MB)

    Example offchain uploaded DataItem fast indexer access: https://gateway.s3-node-1.load.network/resolve/Y11-TiVivfQpcg7eDV8ouxJfQl2UHvFFPgs-NJ_HC2k

    Large multipart resumable uploads

    Example signed receipt: https://loaded-turbo-api.load.network/v1/chunks/arweave/541a7043-6706-47e3-be73-907bffb17a80/status

    Check out the upload service v1.0.0 release here

    Uploading data using AWS S3 SDK / Load Agent vs Turbo SDK

This Turbo-compliant upload service, along with the load-s3-agent, forms the two main ingress paths for data objects as ANS-104 DataItems.

The main difference between the load-s3-agent and the Turbo SDK is access control. Uploading data via the Turbo SDK defaults to the offchain-dataitems data protocol, where all uploaded dataitems sit in the protocol’s public bucket, while using the load-s3-agent it’s possible to upload objects (dataitems) to private buckets, control their access, and have expirable shareable download links.

However, in both cases the outcome is the same: ANS-104-formatted S3 objects with their data provenance guarantees and Arweave alignment.


    Supported Precompiles

    About Load Network precompiled contracts

    What Are Precompiled Contracts?

    Ethereum uses precompiles to efficiently implement cryptographic primitives within the EVM instead of re-implementing these primitives in Solidity.

    The following precompiles are currently included: ecrecover, sha256, blake2f, ripemd-160, Bn256Add, Bn256Mul, Bn256Pairing, the identity function, modular exponentiation, and point evaluation.

Ethereum precompiles behave like smart contracts built into the Ethereum protocol. The ten standard precompiles live at addresses 0x01 to 0x0A. Load Network supports all 10 of these standard precompiles and adds new custom precompiles starting at address 0x17 (decimal 23, the position of the letter "W" in the alphabet).
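Calling a precompile looks just like calling any other contract address. As a quick illustration (a minimal sketch, not taken from the original docs), here is how a Solidity contract might invoke the standard sha256 precompile at address 0x02; Load Network's custom precompiles are invoked the same way at their own addresses:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PrecompileCaller {
    // Calls the standard SHA-256 precompile living at address 0x02.
    function sha256Of(bytes memory input) public view returns (bytes32 digest) {
        (bool ok, bytes memory out) = address(0x02).staticcall(input);
        require(ok, "precompile call failed");
        digest = abi.decode(out, (bytes32));
    }
}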

    Load Network Precompiles List

    Address
    Name
    Minimum Gas
    Input
    Output
    Description

    Outlining Load Network New Precompiles

    1- Precompile 0x17: upload data from Solidity to Arweave

    The LN Precompile at address 0x17 (0x0000000000000000000000000000000000000017) enables data upload (in byte format) from Solidity to Arweave, and returns the data TXID (in byte format). In Alphanet V4, data uploads are limited to 100KB. Future network updates will remove this limitation and introduce a higher data cap.

    Solidity code example:
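The original snippet isn't preserved in this export; the following is a minimal sketch of what a 0x17 call can look like (the contract and function names are illustrative, and whether the precompile is reached via staticcall or call is an assumption here):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract ArweaveUploader {
    // Load Network's Arweave-upload precompile (0x17).
    address constant ARWEAVE_UPLOAD = 0x0000000000000000000000000000000000000017;

    // Passes raw bytes to the precompile; the returned bytes are the Arweave data TXID.
    function uploadToArweave(bytes memory data) public view returns (bytes memory txid) {
        (bool ok, bytes memory result) = ARWEAVE_UPLOAD.staticcall(data);
        require(ok, "0x17 precompile call failed");
        txid = result;
    }
}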

    2- Precompile 0x18: read Arweave data from Solidity

    This precompile, at address 0x18 (0x0000000000000000000000000000000000000018), completes the data pipeline between LN and Arweave, making it bidirectional. It enables retrieving data from Arweave in bytes for a given Arweave TXID.

    The 0x18 precompile allows user input to choose their Arweave gateway for resolving a TXID. If no gateway URL is provided, the precompile defaults to arweave.net.

    The format of the precompile bytes input (string representation) should be as follows: gateway_url;arweave_txid

    Solidity code example:
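Again, the original snippet isn't preserved here; a minimal sketch of a 0x18 call that builds the "gateway_url;arweave_txid" input described above (contract and function names are illustrative):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract ArweaveReader {
    // Load Network's Arweave-read precompile (0x18).
    address constant ARWEAVE_READ = 0x0000000000000000000000000000000000000018;

    // Input format: "gateway_url;arweave_txid" (the gateway is optional and
    // defaults to arweave.net when omitted).
    function readFromArweave(string memory gatewayUrl, string memory arTxid) public view returns (bytes memory data) {
        bytes memory query = abi.encodePacked(gatewayUrl, ";", arTxid);
        (bool ok, bytes memory result) = ARWEAVE_READ.staticcall(query);
        require(ok, "0x18 precompile call failed");
        data = result;
    }
}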

3- Precompile 0x20: Access to LN's historical blocks

This precompile, at address 0x20 (0x0000000000000000000000000000000000000020), lets smart contract developers access not just the most recent 256 blocks, but any block's data starting at genesis. To explain how to request block data using the 0x20 precompile, here is a code example:
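The original example isn't preserved in this export; a minimal sketch consistent with the description that follows (the contract and function names are illustrative):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract LoadBlockReader {
    // Load Network's historical-blocks precompile (0x20).
    address constant HISTORICAL_BLOCKS = 0x0000000000000000000000000000000000000020;

    // Query format: "gateway;load_block_id;block_field".
    // Here: resolve via arweave.net, target block 141550, field "hash".
    function getBlockHash() public view returns (bytes memory value) {
        string memory query = "https://arweave.net;141550;hash";
        (bool ok, bytes memory result) = HISTORICAL_BLOCKS.staticcall(bytes(query));
        require(ok, "0x20 precompile call failed");
        value = result;
    }
}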

    As you can see, for the query variable we have three “parameters” separated by a semicolon ”;” (gateway;load_block_id;block_field format)

• An Arweave gateway (optional, falling back to arweave.net if not provided)

    • Load Network's block number to fetch, target block: 141550

    • The field of the block struct to access, in this case: hash

Only the gateway is an optional parameter. Regarding the field of the block struct to access, here is the Block struct that the 0x20 precompile uses:

    4- Precompile 0x21: Native access to archived Ethereum blobs

This precompile, at address 0x21 (0x0000000000000000000000000000000000000021), is a unique solution for native access to blob data (not just commitments) from the smart contract layer. This precompile fetches from the KYVE Trustless API the blob data that KYVE archives for their supported networks.

    Therefore, with 0x21, KYVE clients will have, for the first time, the ability to fetch their archived blobs from an EVM smart contract layer instead of wrapping the Trustless API in oracles and making expensive calls.

0x21 lets you fetch KYVE's Ethereum blob data starting at Ethereum's block - the first block with a recorded EIP-4844 transaction. To retrieve a blob from the Trustless API, in the 0x21 staticcall you need to specify the Ethereum block number, the blob index in the transaction, and the blob field you want to retrieve, in this format: block_number;blob_index.field. N.B: blob_index represents the blob index in KYVE's Trustless API JSON response; a sketch of such a call is shown after the field list below.

    The eip-4844 transaction fields that you can access from the 0x21 query are:

    • blob (raw blob data, the body)

    • kzg_commitment

    • kzg_proof

    • slot
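The original example isn't preserved in this export; a minimal sketch of a 0x21 staticcall (the contract name and the block number in the comment are illustrative placeholders):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BlobReader {
    // Load Network's archived-blobs precompile (0x21).
    address constant ARCHIVED_BLOBS = 0x0000000000000000000000000000000000000021;

    // Query format: "block_number;blob_index.field",
    // e.g. "<eth_block_number>;0.blob" for the first blob's raw data.
    function getBlobField(string memory query) public view returns (bytes memory value) {
        (bool ok, bytes memory result) = ARCHIVED_BLOBS.staticcall(bytes(query));
        require(ok, "0x21 precompile call failed");
        value = result;
    }
}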

    Advantages of 0x21 (use cases)

    • Native access to blob data from smart contract layer

    • Access to permanently archived blobs

• Opens up longer verification windows for rollups using KYVE for archived blobs and LN as a settlement layer

    • Enables using blobs for purposes beyond rollups DA, opening doors for data-intensive blob-based applications with permanent blob access

Check out the 0x21 precompile source code here.

    EVM Bundler (legacy)

Since the Load S3 and Turbo upload service launches, we advise using them instead of the EVM Bundler for several reasons:

    • Arweave's ANS-104 compatibility

    • S3 object-storage compatibility

    • Highly-scalable, cost-efficient temporary storage

    import {
      TurboFactory,
      developmentTurboConfiguration,
    } from '@ardrive/turbo-sdk/node';
    import Arweave from 'arweave';
    import fs from 'fs';
    
    (async () => {
      // Create a test file
      const testData = 'Hello from loaded-turbo-api S3 bundler!';
      fs.writeFileSync('test-file.txt', testData);
    
      // Create an Arweave key for testing
      const arweave = new Arweave({});
      const jwk = await Arweave.crypto.generateJWK();
      const address = await arweave.wallets.jwkToAddress(jwk);
      console.log('Test wallet address:', address);
    
      const customTurboConfig = {
        ...developmentTurboConfiguration,
        uploadServiceConfig: {
          url: 'https://loaded-turbo-api.load.network', // loaded-turbo-api endpoint
        },
      };
    
      // Create authenticated client
      const turboAuthClient = TurboFactory.authenticated({
        privateKey: jwk,
        ...customTurboConfig,
      });
    
      try {
        // Test upload
    console.log('Uploading file to loaded-turbo-api...');
    const uploadResult = await turboAuthClient.uploadFile({
      fileStreamFactory: () => fs.createReadStream('test-file.txt'),
      fileSizeFactory: () => fs.statSync('test-file.txt').size,
      dataItemOpts: {
        tags: [
          { name: 'Content-Type', value: 'text/plain' }
        ]
      },
      signal: AbortSignal.timeout(30_000),
    });
    
    
        console.log('Upload successful!');
        console.log(JSON.stringify(uploadResult, null, 2));
    
        // Verify the response structure
        console.log('\n=== Response Validation ===');
        console.log('ID:', uploadResult.id);
        console.log('Owner:', uploadResult.owner);
        console.log('Winc:', uploadResult.winc);
        console.log('Data Caches:', uploadResult.dataCaches);
        console.log('Fast Finality Indexes:', uploadResult.fastFinalityIndexes);
    
      } catch (error) {
        console.error('Upload failed:', error);
        if (error.response) {
          console.error('Response status:', error.response.status);
          console.error('Response data:', error.response.data);
        }
      } finally {
        fs.unlinkSync('test-file.txt');
      }
    })();
    import {
      TurboFactory,
      developmentTurboConfiguration,
    } from '@ardrive/turbo-sdk/node';
    import Arweave from 'arweave';
    import fs from 'fs';
    
    (async () => {
      // Create large file to force multipart upload (>10MB)
      const largeTestData = 'A'.repeat(15 * 1024 * 1024); // 15MB
      fs.writeFileSync('large-test-file.txt', largeTestData);
      console.log('Created 15MB test file');
    
      // Generate test wallet
      const arweave = new Arweave({});
      const jwk = await Arweave.crypto.generateJWK();
      const address = await arweave.wallets.jwkToAddress(jwk);
      console.log('Test wallet address:', address);
    
      const customTurboConfig = {
        ...developmentTurboConfiguration,
        uploadServiceConfig: {
          url: 'https://loaded-turbo-api.load.network',
        },
      };
    
      const turboAuthClient = TurboFactory.authenticated({
        privateKey: jwk,
        ...customTurboConfig,
      });
    
      try {
        console.log('Starting multipart upload...');
        
        const uploadResult = await turboAuthClient.uploadFile({
          fileStreamFactory: () => fs.createReadStream('large-test-file.txt'),
          fileSizeFactory: () => fs.statSync('large-test-file.txt').size,
          dataItemOpts: {
            tags: [{ name: 'Content-Type', value: 'text/plain' }]
          },
          events: {
            onProgress: ({ totalBytes, processedBytes, step }) => {
              const percent = ((processedBytes / totalBytes) * 100).toFixed(1);
              console.log(`${step.toUpperCase()} Progress: ${percent}% (${processedBytes}/${totalBytes})`);
            },
            onUploadProgress: ({ totalBytes, processedBytes }) => {
              console.log(`Upload: ${((processedBytes / totalBytes) * 100).toFixed(1)}%`);
            },
          },
        });
    
        console.log('Result:', uploadResult);
    
      } catch (error) {
        console.error('\nUpload failed:', error.message);
        if (error.response) {
          console.error('Status:', error.response.status);
          console.error('Data:', error.response.data);
        }
      } finally {
        fs.unlinkSync('large-test-file.txt');
      }
    })();
    

loaded-turbo-api endpoint support:

| Endpoint | Supported |
| --- | --- |
| GET /v1/chunks/{token}/-1/-1 | ✅ |
| GET /v1/chunks/{token}/{upload_id}/-1 | ✅ |
| GET /v1/chunks/{token}/{upload_id}/status | ✅ |
| POST /v1/chunks/{token}/{upload_id}/{offset} | ✅ |
| POST /v1/chunks/{token}/{upload_id}/finalize | ✅ |
| GET /account/balance/:id | not supported, deprecated in turbo-upload-service |
| GET /price/:token/:byteCount? | not supported, deprecated in turbo-upload-service |


Precompiled contracts available on Load Network:

| Address | Name | Min gas | Input | Output | Description |
| --- | --- | --- | --- | --- | --- |
| 0x01 (0x0000000000000000000000000000000000000001) | ecRecover | 3000 | hash, v, r, s | publicAddress | Elliptic curve digital signature algorithm (ECDSA) public key recovery function |
| 0x02 (0x0000000000000000000000000000000000000002) | SHA2-256 | 60 | data | hash | Hash function |
| 0x03 (0x0000000000000000000000000000000000000003) | RIPEMD-160 | 600 | data | hash | Hash function |
| 0x04 (0x0000000000000000000000000000000000000004) | identity | 15 | data | data | Returns the input |
| 0x05 (0x0000000000000000000000000000000000000005) | modexp | 200 | Bsize, Esize, Msize, B, E, M | value | Arbitrary-precision exponentiation under modulo |
| 0x06 (0x0000000000000000000000000000000000000006) | ecAdd | 150 | x1, y1, x2, y2 | x, y | Point addition (ADD) on the elliptic curve alt_bn128 |
| 0x07 (0x0000000000000000000000000000000000000007) | ecMul | 6000 | x1, y1, s | x, y | Scalar multiplication (MUL) on the elliptic curve alt_bn128 |
| 0x08 (0x0000000000000000000000000000000000000008) | ecPairing | 45000 | x1, y1, x2, y2, ..., xk, yk | success | Bilinear function on groups on the elliptic curve alt_bn128 |
| 0x09 (0x0000000000000000000000000000000000000009) | blake2f | 0 | rounds, h, m, t, f | h | Compression function F used in the BLAKE2 cryptographic hashing algorithm |
| 0x0A (0x000000000000000000000000000000000000000A) | point evaluation | 50000 | bytes | bytes | Verify p(z) = y given a commitment that corresponds to the polynomial p(x) and a KZG proof. Also verify that the provided commitment matches the provided versioned_hash |
| 0x17 (0x0000000000000000000000000000000000000017) | arweave_upload | 10003 | bytes | bytes | Upload a bytes array to Arweave and get back the upload TXID in bytes |
| 0x18 (0x0000000000000000000000000000000000000018) | arweave_read | 10003 | bytes | bytes | Retrieve an Arweave TXID's data in bytes |
| 0x20 (0x0000000000000000000000000000000000000020) | read_block | 10003 | bytes | bytes | Retrieve an LN block's data (from genesis), pulling it from Arweave |
| 0x21 (0x0000000000000000000000000000000000000021) | kyve_trustless_api_blob | 10003 | bytes | bytes | Retrieve historical Ethereum blob data from LN's smart contract layer |

    pragma solidity ^0.8.0;
    
    contract ArweaveUploader {
        function upload_to_arweave(string memory dataString) public view returns (bytes memory) {
            // Convert the string parameter to bytes
            bytes memory data = abi.encodePacked(dataString);
    
            // pc address: 0x0000000000000000000000000000000000000017
            (bool success, bytes memory result) = address(0x17).staticcall(data);
    
            return result;
    }
}
    pragma solidity ^0.8.0;
    
    contract ArweaveReader {
        function read_from_arweave(string memory txIdOrGatewayAndTxId) public view returns (bytes memory) {
            // Convert the string parameter to bytes
            bytes memory data = abi.encodePacked(txIdOrGatewayAndTxId);
    
            // pc address: 0x0000000000000000000000000000000000000018
            (bool success, bytes memory result) = address(0x18).staticcall(data);
    
            return result;
        }
    }
    pragma solidity ^0.8.0;
    
    contract LnBlockReader {
        function read_block() public view returns (bytes memory) {
            // Convert the string parameter to bytes
            string memory blockIdAndField = "141550;hash";
            bytes memory data = abi.encodePacked(blockIdAndField);
    
            (bool success, bytes memory result) = address(0x20).staticcall(data);
    
            return result;
        }
    }
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
    pub struct Block {
        pub base_fee_per_gas: Option<String>,         // "baseFeePerGas"
        pub blob_gas_used: Option<String>,            // "blobGasUsed"
        pub difficulty: Option<String>,               // "difficulty"
        pub excess_blob_gas: Option<String>,          // "excessBlobGas"
        pub extra_data: Option<String>,               // "extraData"
        pub gas_limit: Option<String>,                // "gasLimit"
        pub gas_used: Option<String>,                 // "gasUsed"
        pub hash: Option<String>,                     // "hash"
        pub logs_bloom: Option<String>,               // "logsBloom"
        pub miner: Option<String>,                    // "miner"
        pub mix_hash: Option<String>,                 // "mixHash"
        pub nonce: Option<String>,                    // "nonce"
        pub number: Option<String>,                   // "number"
        pub parent_beacon_block_root: Option<String>, // "parentBeaconBlockRoot"
        pub parent_hash: Option<String>,              // "parentHash"
        pub receipts_root: Option<String>,            // "receiptsRoot"
        pub seal_fields: Vec<String>,                 // "sealFields" as an array of strings
        pub sha3_uncles: Option<String>,              // "sha3Uncles"
        pub size: Option<String>,                     // "size"
        pub state_root: Option<String>,               // "stateRoot"
        pub timestamp: Option<String>,                // "timestamp"
        pub total_difficulty: Option<String>,         // "totalDifficulty"
        pub transactions: Vec<String>,                // "transactions" as an array of strings
    }
    pragma solidity ^0.8.0;
    
    contract KyveBlobsTrustlessApi {
    function getBlob() public view returns (bytes memory) {
        // query format: block_number;blob_index.field
            string memory query = "20033081;0.blob";
            bytes memory data = abi.encodePacked(query);
    
            (bool success, bytes memory result) = address(0x21).staticcall(data);
    
            return result;
        }
    }
    GitHub - dymensionxyz/rollapp-evm: EVM DRS - EVM Dymension Rollapp StandardGitHub

    Load S3 Agent

    The LCP data agent

    About

load-s3-agent is a data agent built on top of HyperBEAM's ~s3@1.0 temporal data storage device. This agent orchestrates the location of the data, moving it from temporal storage to permanent storage (Arweave).

N.B: beta testing release, unstable and subject to breaking changes; use in testing environments only.

    Agent API

    • GET / : agent info

    • GET /stats : storage stats

• GET /:dataitem_id : generate a presigned get_object URL to access the ANS-104 DataItem data - DEPRECATED since v0.7.0 - use gateway.s3-node-1.load.network/resolve/$DATAITEM_ID instead

• GET /tags/query : query dataitems for given tags KV pairs

• POST /upload : post data (or a signed dataitem) to store a public offchain DataItem on ~s3@1.0

• POST /upload/private : post data (or a signed dataitem) to store a private offchain DataItem on ~s3@1.0

• POST /post/:dataitem_id : post a ~s3@1.0 public DataItem to Arweave via Turbo (N.B: Turbo covers the cost of any dataitem with size <= 100KB)

    Upload data and return an agent public signed DataItem

    Or optionally add custom tags KVs that will be included in the ANS-104 DataItem construction

    Optional: have the agent publish an offchain provenance record for the uploaded DataItem (the API returns the provenance transaction id in offchain_provenance_proof):

    • onchain provenance example: https://viewblock.io/arweave/tx/b6kTeJISHCmKTqaq_GK5g6hCGPWmMgfR7W4FcJwBwGU

    • offchain dataitem: https://gateway.s3-node-1.load.network/resolve/qvnTWVz4QqVAa7DsiiPER3HMArN88clg_SZc1BIS63s

    Upload data and return an agent private signed DataItem

*** N.B: a private DataItem does not have its tags indexed, nor is it queryable ***

    Upload signed dataitem to a private bucket (private dataitem)

    Upload a signed DataItem and store it in Load S3

    Tags are extracted from the ANS-104 DataItem, indexed and queryable

    Post offchain DataItem to Arweave

    example: for offchain dataitem = eoNAO-HlYasHJt3QFDuRrMVdLUxq5B8bXe4N_kboNWs

    Querying DataItems by Tags

All dataitems pushed after the agent's v0.6.0 release are queryable by the dataitem's tags KVs:

    • Pagination follows Arweave's GQL schema: optional first (default 25, max 100) and a cursor after.

    • full_metadata flag (Optional<bool>) to return the full tags associated with a query's dataitem

• created_after / created_before (ISO-8601/RFC3339 strings) filter items by their created_at timestamp (inclusive bounds).

If page_info.has_next_page returns true, reuse the page_info.next_cursor string as the next after.

count reflects the number of items returned in the current page, while total_count includes every dataitem that matches the filters across all pages.
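
As a hedged TypeScript sketch, here is what cursor-based pagination over /tags/query could look like, using the documented first/after parameters and page_info fields (the name of the items array in the response is an assumption):

async function queryAllDataitems() {
  const endpoint = "https://load-s3-agent.load.network/tags/query";
  let after: string | null = null;
  const items: unknown[] = [];

  while (true) {
    const res = await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        filters: [{ key: "tag1", value: "tag1" }],
        first: 100, // max page size per the docs
        after,      // cursor from the previous page, null on the first call
      }),
    });
    const page = await res.json();
    items.push(...page.dataitems); // assumption: the items array field name

    // page_info.has_next_page / next_cursor per the description above
    if (!page.page_info?.has_next_page) break;
    after = page.page_info.next_cursor;
  }
  return items;
}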

    Load S3 Agent library

    Adding load-s3-agent library

    Example

    GitHub - dymensionxyz/dymint: Sequencing Engine for Dymension RollAppsGitHub

    echo -n "hello world" | curl -X POST https://load-s3-agent.load.network/upload \
      -H "Authorization: Bearer $load_acc_api_key" \
      -F "file=@-;type=text/plain" \
      -F "content_type=text/plain"
    echo -n "hello custom tagged world"  | curl -X POST https://load-s3-agent.load.network/upload \
        -H "Authorization: Bearer $load_acc_api_key" \
        -F "file=@-;type=text/plain" \
        -F 'tags=[{"key":"tag1","value":"tag1"},{"key":"tag2","value":"tag2"}]'
    echo -n "hello provenance world" | curl -X POST https://load-s3-agent.load.network/upload \
      -H "Authorization: Bearer $load_acc_api_key" \
      -H "offchain_provenance: true" \
      -F "file=@-;type=text/plain"
    echo -n "hello world" | curl -X POST https://load-s3-agent.load.network/upload/private \
      -H "Authorization: Bearer $load_acc_api_key" \
      -H "x-bucket-name: $bucket_name" \
      -H "x-dataitem-name: $dataitem_name" \
      -H "x-folder-name": $folder_name" \ 
      -H "signed: false" \  
      -F "file=@-;type=text/plain" \
      -F "content_type=text/plain"
    curl -X POST https://load-s3-agent.load.network/upload/private \
      -H "Authorization: Bearer $load_acc_api_key" \
      -H "signed: true" \
      -H "bucket_name: $bucket_name" \
      -H "x-dataitem-name: $dataitem_name" \
      -H "x-folder-name": $folder_name" \ 
      -F "[email protected]" \
      -F "content_type=application/octet-stream"
    curl -X POST https://load-s3-agent.load.network/upload \
      -H "Authorization: Bearer $load_acc_api_key" \
      -H "signed: true" \
      -F "[email protected]"
    curl -X POST \
      "https://load-s3-agent.load.network/post/eoNAO-HlYasHJt3QFDuRrMVdLUxq5B8bXe4N_kboNWs" \
      -H "Authorization: Bearer REACH_OUT_TO_US" \
      -H "Content-Type: application/json"
    curl -X POST https://load-s3-agent.load.network/tags/query \
      -H "Content-Type: application/json" \
      -d '{
        "filters": [
          {
            "key": "tag1",
            "value": "tag1"
          },
          {
            "key": "tag2",
            "value": "tag2"
          }
        ]
      }'
    
    curl -X POST https://load-s3-agent.load.network/tags/query \
      -H "Content-Type: application/json" \
      -d '{
        "filters": [
          {
            "key": "tag1",
            "value": "tag1"
          }
        ],
        "full_metadata": true,
        "created_after": "2025-12-01T00:00:00Z",
        "created_before": "2025-12-05T00:00:00Z",
        "first": 25,
        "after": null
      }'
    
    [dependencies]
    load-s3-agent = { git = "https://github.com/loadnetwork/lcp-uploader-api.git", branch = "main" }
    use load_s3_agent::create_dataitem;
    
    fn build_item() -> anyhow::Result<()> {
        // ensure UPLOADER_JWK is available in the environment (or .env)
        std::env::set_var("UPLOADER_JWK", include_str!("../wallet.json"));
    
        let item = create_dataitem(
            b"hello world".to_vec(),
            "text/plain",
            &[("My-Tag".into(), "tag-value".into())],
        )?;
    
        println!("Signed data item id: {}", item.id());
        Ok(())
    }

~s3@1.0 device

    S3 object storage in hyperbeam

    Device status: WIP MVP

    Setup

    1- add s3_device.config

    in the root level of the hyperbeam codebase, touch s3_device.config and add the creds to connect to your S3 cluster

    connecting to external s3 cluster (./build.sh)

    connecting to local minio s3 cluster (./s3_device.sh)

    build and run the hyperbeam node

configuring the local minio cluster

if you choose the local minio cluster route, you can configure (set) your access key id and secret access key by creating a .env file here:

N.B: your local minio cluster access key values should also match the values set in the s3_device.config config file

Supported methods

• create_bucket

• head_bucket

• put_object (supports expiry: 1-365 days)

• get_object (supports range)

• delete_object

• delete_objects

• head_object

• list_objects

Use the ~s3@1.0 device

After running the hyperbeam node with the ~s3@1.0 device, you can use the node_endpoint/~s3@1.0 URL as an S3-compatible API endpoint.

HyperBEAM node running the S3 device (testing environment): https://s3-node-0.load.network

N.B (regarding access authorization): when using ~s3@1.0 as an end user (client), you only have to pass the accessKeyId in the request's credentials; the secretAccessKey value doesn't matter. This is due to the design of ~s3@1.0 access authorization, where the device checks the access_key_id of the S3 request's Authorization header and validates its parity with the access_key_id defined in s3_device.config -> keep the access_key_id secret and use it as an access API key.

    1- create s3 client

    2- create bucket

    3- get object (with range)

    4- put object (with expiry)

    5- Generate presigned get_object url

The returned URL uses the preset public_endpoint (in s3_device.config) as the base URL.

    Cache layer

    The NIF implements an LRU cache with size-based eviction (in-memory). The following cache endpoints are available under the hyperbeam http api (intentionally not compatible with the S3 API spec):

    1- get cached object

    Note: This endpoint requires no authentication

cache vs S3 API GetObjectCommand: curl "http://localhost:8734/~s3@1.0/BUCKET_NAME/OBJECT_KEY"

    Hybrid gateway

The hybrid gateway is an extension to hb_gateway_client.erl that makes it possible for the hyperbeam node to retrieve both onchain (Arweave-posted) ANS-104 dataitems and offchain (in dev_s3.erl) object-storage temporal dataitems.

Workflow

• hb_store_gateway.erl -> calls hb_gateway_client:read() -> it tries to read from the local cache, then from Arweave (onchain dataitem) -> in case it's not found onchain, it checks the offchain dataitems s3 bucket

• offchain dataitems retrieval route: hb_gateway_s3:read() -> calls dev_s3:handle_s3_request() -> retrieves the dataitem.ans104 from the dev_s3 bucket

    Test it

    1- create & set the offchain bucket

Make hyperbeam aware of the dev_s3 bucket that is storing your ANS-104 offchain dataitems, here (s3_bucket in default_message)

    2- add test data

Make sure to create the ~s3@1.0 bucket as you defined its name in hb_opts.erl, then add a fake offchain dataitem. If you want to test using existing signed offchain ANS-104 dataitems, check out the test-dataitems directory and store one in your ~s3@1.0 bucket.

    Otherwise, you can generate a signed valid ANS-104 dataitem using the hyperbeam erlang shell:

    3- test retrieving dataitems

    • offchain: http://localhost:8734/ysAGgm6JngmOAxmaFN2YJr5t7V1JH8JGZHe1942mPbA

    • onchain: http://localhost:8734/myb2p8_TSM0KSgBMoG-nu6TLuqWwPmdZM5V2QSUeNmM

    Verify dataitem integrity

    Expected Response:

    For more examples & ao processes interactions, check the directory

Source code: https://github.com/loadnetwork/load_hb/tree/s3-edge/native/s3_nif


    {endpoint, <<"https://drive.load.network">>}.
    {public_endpoint, <<"https://drive.load.network">>}.
    {access_key_id, <<"your_access_key_id">>}.
    {secret_access_key, <<"your_access_key">>}.
    {region, <<"eu-west-2">>}.
    {endpoint, "http://localhost:9001"}.% Internal MinIO - dev
    {public_endpoint, "https://your.hyperbeam-s3-cluster-endpoint.com"}. % Public-facing URL, used for presigned URLs
    {access_key_id, <<"value">>}.
    {secret_access_key, <<"value">>}.
    {region, <<"value">>}.
    ./s3_device.sh # build the s3_nif device & run local minio cluster
    
    # if you want to connect to external s3 cluster, run ./build.sh instead
    
    rebar3 compile
    
    erl -pa _build/default/lib/*/ebin 
    
    1> application:ensure_all_started(hb).
    MINIO_ROOT_USER=access_key_id
    MINIO_ROOT_PASSWORD=secret_access_key
import {
  S3Client,
  CreateBucketCommand,
  GetObjectCommand,
  PutObjectCommand,
} from "@aws-sdk/client-s3";

const accessKeyId = "your-access-key";
const secretAccessKey = ""; // intentionally empty; only accessKeyId is checked

const s3Client = new S3Client({
  region: "eu-west-2",
  endpoint: "http://localhost:8734/~s3@1.0",
  credentials: {
    accessKeyId,
    secretAccessKey,
  },
  forcePathStyle: true,
});
    async function createBucket(bucketName) {
        try {
            const command = new CreateBucketCommand({ Bucket: bucketName });
            const result = await s3Client.send(command);
            console.log("Bucket created:", result.Location || bucketName);
        } catch (error) {
            console.error("Error creating bucket:", error);
        }
    }
    async function getObject(bucketName, key) {
      try {
        console.log(`Getting object: ${bucketName}/${key}`);
    
        const command = new GetObjectCommand({
          Bucket: bucketName,
          Key: key,
          Range: "bytes=-1",
        });
    
        const result = await s3Client.send(command);
    
        const bodyContents = await result.Body.transformToString();
    
        console.log("Object retrieved successfully!");
        console.log("Content:", bodyContents);
        console.log("Metadata:", {
          ContentType: result.ContentType,
          ContentLength: result.ContentLength,
          ETag: result.ETag,
          LastModified: result.LastModified,
        });
    
        return result;
      } catch (error) {
        console.error("Error getting object:", error.name, error.message);
        throw error;
      }
    }
async function putObject(bucketName, fileName, body, expiryDays) {
      try {
        const command = new PutObjectCommand({
          Bucket: bucketName,
          Key: fileName,
          Body: body,
          Metadata: {
            "expiry-days": expiryDays.toString(),
          },
        });
    
        const result = await s3Client.send(command);
        console.log("Object created:", fileName, "with expiry:", expiryDays);
        return result;
      } catch (error) {
        console.error("Error creating object", error);
      }
    }
curl -X POST http://localhost:8734/~s3@1.0/get-presigned \
  -H "Content-Type: application/json" \
  -H "Authorization: AWS4-HMAC-SHA256 Credential=YOUR-ACCESS-KEY-ID/20230101/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-date, Signature=dummy" \
  -d '{"bucket": "BUCKET-NAME", "key": "OBJECT-KEY", "duration": DURATION_IN_SECONDS}' # 1s-7days
    curl "http://localhost:8734/[email protected]/cache/BUCKET_NAME/OBJECT_KEY"
    s3_bucket => <<"offchain-dataitems">> % you can change the name
    curl -X PUT "http://localhost:8734/[email protected]/offchain-dataitems/dataitems/ysAGgm6JngmOAxmaFN2YJr5t7V1JH8JGZHe1942mPbA.ans104" \
      -H "Content-Type: application/octet-stream" \
      -H "Authorization: AWS4-HMAC-SHA256 Credential=YOUR_ACCESS_KEY_ID/20250119/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-date, Signature=dummy" \
      --data-binary @ysAGgm6JngmOAxmaFN2YJr5t7V1JH8JGZHe1942mPbA.ans104
    1> rr("src/ar_tx.erl").  % load tx record definition
    2> TX = #tx{data = <<"Hello Load S3">>, tags = [{<<"Content-Type">>, <<"text/plain">>}], format = ans104}.
    3> SignedTX = ar_bundles:sign_item(TX, hb:wallet()).
    4> ANS104Binary = ar_bundles:serialize(SignedTX).
    5> DataItemID = hb_util:encode(hb_tx:id(SignedTX, signed)).
    6> file:write_file("TheDataItemId.ans104", ANS104Binary).
    
    % After that, store the dataitem on s3 as we did previously or using the s3 sdk/client of your choice.
    curl -I "http://localhost:8734/ysAGgm6JngmOAxmaFN2YJr5t7V1JH8JGZHe1942mPbA"
    HTTP/1.1 200 OK
    Content-Type: text/plain
    Data-Protocol: ao
Device-Test: s3@1.0
    access-control-allow-methods: GET, POST, PUT, DELETE, OPTIONS
    access-control-allow-origin: *
    access-control-expose-headers: *
    ao-body-key: data
    ao-types: status="integer"
    content-digest: sha-256=:9WV5g4vLNkDev2JHfurtl5OE4XrCfKwjf4zE9SceHyg=:
    date: Fri, 18 Jul 2025 15:24:11 GMT
    id: ysAGgm6JngmOAxmaFN2YJr5t7V1JH8JGZHe1942mPbA
    owner: q4VxGIPsOoT2ce5OsF0eSErdxuMtFexE9tUvY_Gl1tE-w3p6YRy6REuc4t2gDFUFE233PP8l5B-db6a2IzRw8FrEc7eFeu_-sWi-FTnkh3EQ3ExE6D9VpBaocSwlPJmVXGMfC64kkxy1hrh44qpQl-RwI52IP15J5YTLWN_XOzGqPCL94VPRFQtwhiK2FQkAx1iCDCtC_FUWC9CitUnNygTUAt2X5I1oD_e9zoyWyUuEp14TLM0-JDnBzGW1t0BbZZKUw8nvmjkqyErQXOHU4AbSevp7rmb3kmi0qFEqb85flF11sHvl1ABJ9i84cmYOM4Az87Gw5beVdzIwe_1tnlUOdX42-skOuNwNPoSOOrUOXh78_meoHCWk5iwYXCnFIWOdlXl-i9Ts2MCf1Ub0v7UPeLT4mtbdhRyG6iK6nokFGHs6A5t1nce0ItGAO1wpBs_4zK3qwfxKvNwoIHpJARyBof8IKnrr28-RpkNJyhVCRvNueUusANnWNk8zIjWseNF3zLg2w_IxZKrDb7a7u1RDQGHSxDvX8mHNHZKAUcqUVeQau8pyfOcDw7hRPKLPkcoCv28ZusAeS0hibdIXA0CJ0HXzleNLIJhCBGwEmo_n1Fa1_hIEekGKnztkNwtLbhyfLtFuqbT6o_r9LdQ81glhAccc-_OJeTvG-fsYD3s
    server: Cowboy
    signature: sig-ahm9fs6tg1al3sq0w-ttpxaba2ztt2by1xnq1f4ih6w=:HGS0k2orTq+gA8t7zNx3ykTwn/lyagXlIB3yWX9AGu+yrLK1klrXo2mbySD3Kglmxw2Wyolj42CiGx4yYqAtvhN9AtS3HwVqf47E2FhmcQYk6dcfdAxaJcEz1lXncSyDAAwT7R/ecqJHHNWqOGHiP/9I9V4gCH6qz+kxXI6y/Z5Nnf9IWuJ8xv8XY37TICHlU8oPnPuLPFuvOVPCUXrzCxR6jP8JaVmx5kAHhMMC60+3CDKc3auoe9uxhMQlshnCShD1J/EzycwVqbfXV7TprqVRJoox/Z9EYdd36QKe303dgb3hO/s2aH/Z8TwLJEz6X0c9Miyt31pRQK+31Ev9QwAeXUbE48KljS/knfpTvNA0gjNIQ6xoz9paLWoHhZ+Ntzp/6fC0amzRZcXD1d5cZ5wxRUp1PUSnkexdi73adYV6UKL9pG4NRtDXpx1rP8H33qV5a7JKnTUrcg7TTuwgtieoB+ZKgAGPPtZUYsTRehSkW3gHFujpO+DsNko3UfhveXk5FpONKm2J+22LoOmLp13/yHdiRoHOIr+W/iPD7ueM3iSxtJFO68Wx6qIWp5PV0L+/trU/nHnDYxCgFNE/lZQvINeBSR8mKfE6ws1MJqoGdOC4ZdcRK11Bk75bjlWZUoz4r9JJGQgaA92xwqGzpWRQzAjGWs4rxhCib8YXLqE=:, sig-iq1nxhxgfzszd_rdcs-qwa_csljm7qwvdxmjr3mca7g=:f4/xQHH6IHbhDg3xIGKd7IRIXmwsrUd4a8XaBmKIRco=:
    signature-input: sig-ahm9fs6tg1al3sq0w-ttpxaba2ztt2by1xnq1f4ih6w=("Content-Type" "Data-Protocol" "Device-Test" "ao-body-key" "ao-types" "content-digest" "id" "owner" "signature" "signature-input" "status" "tags+link");alg="rsa-pss-sha512";keyid="q4VxGIPsOoT2ce5OsF0eSErdxuMtFexE9tUvY/Gl1tE+w3p6YRy6REuc4t2gDFUFE233PP8l5B+db6a2IzRw8FrEc7eFeu/+sWi+FTnkh3EQ3ExE6D9VpBaocSwlPJmVXGMfC64kkxy1hrh44qpQl+RwI52IP15J5YTLWN/XOzGqPCL94VPRFQtwhiK2FQkAx1iCDCtC/FUWC9CitUnNygTUAt2X5I1oD/e9zoyWyUuEp14TLM0+JDnBzGW1t0BbZZKUw8nvmjkqyErQXOHU4AbSevp7rmb3kmi0qFEqb85flF11sHvl1ABJ9i84cmYOM4Az87Gw5beVdzIwe/1tnlUOdX42+skOuNwNPoSOOrUOXh78/meoHCWk5iwYXCnFIWOdlXl+i9Ts2MCf1Ub0v7UPeLT4mtbdhRyG6iK6nokFGHs6A5t1nce0ItGAO1wpBs/4zK3qwfxKvNwoIHpJARyBof8IKnrr28+RpkNJyhVCRvNueUusANnWNk8zIjWseNF3zLg2w/IxZKrDb7a7u1RDQGHSxDvX8mHNHZKAUcqUVeQau8pyfOcDw7hRPKLPkcoCv28ZusAeS0hibdIXA0CJ0HXzleNLIJhCBGwEmo/n1Fa1/hIEekGKnztkNwtLbhyfLtFuqbT6o/r9LdQ81glhAccc+/OJeTvG+fsYD3s=", sig-iq1nxhxgfzszd_rdcs-qwa_csljm7qwvdxmjr3mca7g=("Content-Type" "Data-Protocol" "Device-Test" "ao-body-key" "ao-types" "content-digest" "id" "owner" "signature" "signature-input" "status" "tags+link");alg="hmac-sha256";keyid="ao"
    status: 200
    tags+link: 24Xx7HqIQkRzm3CtNQxfht5RbHTz12N1Ihwm1B0IIFE
    transfer-encoding: chunked

    Arweave's ANS-104 Rust SDK

    bundles-rs is a Rust SDK for creating, signing, managing and posting ANS-104 dataitems

    About

    A Rust SDK for creating, signing, managing and posting ANS-104 dataitems.

    Warning: this repository is actively under development and could have breaking changes until reaching full API compatibility in v1.0.0.

    Installation

    Add to your Cargo.toml:

    Dev setup

Supported Signers

| Blockchain | Signature Type |
| --- | --- |
| Arweave | RSA-PSS |
| Ethereum | secp256k1 |
| Solana | Ed25519 (with base58 solana flavoring) |
| - | Ed25519Core (raw Ed25519) |

Regarding Tags

This ANS-104 dataitems client fully implements the ANS-104 specification as-is:

| Constraint | bundles-rs | Spec | arbundles js | HyperBEAM ar_bundles |
| --- | --- | --- | --- | --- |
| Maximum tags per data item | <= 128 tags | <= 128 tags | <= 128 tags | No max tags |
| Tag name max size | 1024 bytes | 1024 bytes | all keys + vals <= 4096 bytes | Can have empty strings |
| Tag value max size | 3072 bytes | 3072 bytes | val <= 3072 bytes | Can have empty strings |
| Empty names/values | non empty strings | non empty strings | Can have empty strings | Can have empty strings |

    Usage Examples

    Quick start

    Or for basic signed dataitem

    Working with signers

    N.B: use random signer generation for testing purposes only

    Arweave Signer

    Ethereum Signer

    Solana Signer

    Ed25519Core Signer

    Verification

    Manual

    With Signer

    Deep hash

Upload to Bundling services over HTTP (e.g. Turbo)

    bundler crate

The bundler crate is a Rust SDK to interact with Arweave (ANS-104) bundling services. This crate is designed to be backward compatible with existing bundling services and fine-tuned for Turbo.

    Installation

    Imports

    Usage Example

    Send Transaction (Solana)

    Send Transaction (Turbo)

    Get Default Client Info

    Get Turbo Client Info

    Get Price for Bytes (Turbo)

    Get Rates (Turbo)

    Check Transaction Status (Turbo)

    Turbo API References:

    • upload api: https://upload.ardrive.io/api-docs

    • payment api: https://payment.ardrive.io/api-docs

SDK source code: https://github.com/loadnetwork/bundles-rs


    [dependencies]
    # main library
    bundles_rs = { git = "https://github.com/loadnetwork/bundles-rs", branch = "main" }
    
    # use individual crates
    # or use branch/tag/rev -- we recommend checking and using the last client version
    ans104 = { git = "https://github.com/loadnetwork/bundles-rs", version = "0.1.0" } 
    crypto = { git = "https://github.com/loadnetwork/bundles-rs", version = "0.1.0" }
    git clone https://github.com/loadnetwork/bundles-rs.git
    cd bundles-rs
    cargo clippy --workspace --lib --examples --tests --benches --locked --all-features
    cargo +nightly fmt
    cargo check --all
    use bundles_rs::{
        ans104::{data_item::DataItem, tags::Tag},
        crypto::ethereum::EthereumSigner,
    };
    
    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        // create a signer
        let signer = EthereumSigner::random()?;
        
        // create tags (metadata)
        let tags = vec![
            Tag::new("Content-Type", "text/plain"),
            Tag::new("App-Name", "Load-Network"),
        ];
        
        // create and sign a dataitem
        let data = b"Hello World Arweave!".to_vec();
        // first None for Target and the second for Anchor
        // let target = [0u8; 32]; -- 32-byte target address
        // let anchor = b"unique-anchor".to_vec(); -- max 32 bytes
        let item = DataItem::build_and_sign(&signer, None, None, tags, data)?;
        
        // get the dataitem id
        let id = item.arweave_id();
        println!("dataitem id: {}", id);
        
        // serialize for upload
        let bytes = item.to_bytes()?;
        println!("Ready to upload {} bytes", bytes.len());
        
        Ok(())
    }
    use bundles_rs::ans104::{data_item::DataItem, tags::Tag};
    
    // create unsigned data item
    let tags = vec![Tag::new("Content-Type", "application/json")];
    let data = br#"{"message": "Hello World"}"#.to_vec();
    let mut item = DataItem::new(None, None, tags, data)?;
    
    // sign dataitem
    item.sign(&signer)?;
    use bundles_rs::crypto::arweave::ArweaveSigner;
    
    let signer = ArweaveSigner::from_jwk_file("wallet.json")?;
    
    // from stringified JWK
    let jwk_json = r#"{"kty":"RSA","n":"...","e":"AQAB","d":"..."}"#;
    let signer = ArweaveSigner::from_jwk_str(jwk_json)?;
    
    // random
    let signer = ArweaveSigner::random()?;
    
    // Arweave address
    let address = signer.address();
    println!("Arweave address: {}", address);
    use bundles_rs::crypto::ethereum::EthereumSigner;
    
    // generate random key
    let signer = EthereumSigner::random()?;
    
    // or from private key bytes
    let private_key = hex::decode("your_private_key_hex")?;
    let signer = EthereumSigner::from_bytes(&private_key)?;
    
    // EOA
    let address = signer.address_string();
    println!("Ethereum address: {}", address);
    use bundles_rs::crypto::solana::SolanaSigner;
    // random
    let signer = SolanaSigner::random();
    // pk
    let signer = SolanaSigner::from_base58("your_base58_private_key")?;
    // from secret bytes
    let secret = [0u8; 32]; // your secret bytes
    let signer = SolanaSigner::from_secret_bytes(&secret)?;
    
    // Get Solana address
    let address = signer.address();
    println!("Solana address: {}", address);
    use bundles_rs::crypto::ed25519::Ed25519Core;
    
    // random
    let signer = Ed25519Core::random();
    // from seed bytes
    let seed = [0u8; 32];
    let signer = Ed25519Core::from_secret_bytes(&seed)?;
    // verify signature and structure
    item.verify()?;
    
    // manual verification steps
    assert_eq!(item.signature.len(), item.signature_type.signature_len());
    assert_eq!(item.owner.len(), item.signature_type.owner_len());
    use bundles_rs::crypto::signer::Signer;
    
    let message = item.signing_message();
    let is_valid = signer.verify(&message, &item.signature)?;
    assert!(is_valid);
    use bundles_rs::ans104::deep_hash::{DeepHash, deep_hash_sync};
    
    let data = b"custom data";
    let hash_structure = DeepHash::List(vec![
        DeepHash::Blob(b"custom"),
        DeepHash::Blob(data),
    ]);
    
    let hash = deep_hash_sync(&hash_structure);
    println!("Deep hash hex: {}", hex::encode(hash));
    use reqwest::Client;
    
    async fn upload_to_turbo(item: &DataItem) -> Result<String, Box<dyn std::error::Error>> {
        let client = Client::new();
        let bytes = item.to_bytes()?;
        
        let response = client
            .post("https://turbo.ardrive.io/tx/solana")
            .header("Content-Type", "application/octet-stream")
            .body(bytes)
            .send()
            .await?;
        
        if response.status().is_success() {
            let tx_id = response.text().await?;
            Ok(tx_id)
        } else {
            Err(format!("Upload failed: {}", response.status()).into())
        }
    }
    [dependencies]
    # main library
    bundles_rs = { git = "https://github.com/loadnetwork/bundles-rs", branch = "main" }
    
    # bundler only
    bundler = { git = "https://github.com/loadnetwork/bundles-rs", branch = "main" }
    use bundles_rs::bundler::BundlerClient;
    use bundles_rs::ans104::{data_item::DataItem, tags::Tag};
    use bundles_rs::crypto::solana::SolanaSigner;
    let client = BundlerClient::new().url("https://upload.ardrive.io").build().unwrap();
    let signer = SolanaSigner::random();
    let tags = vec![Tag::new("content-type", "text/plain")];
    let dataitem = DataItem::build_and_sign(&signer, None, None, tags, b"hello world".to_vec()).unwrap();
    
    let tx = client.send_transaction(dataitem).await.unwrap();
    println!("tx: {:?}", tx);
    let client = BundlerClient::turbo().build().unwrap();
    let signer = SolanaSigner::random();
    let tags = vec![Tag::new("content-type", "text/plain")];
    let dataitem = DataItem::build_and_sign(&signer, None, None, tags, b"hello world turbo".to_vec()).unwrap();
    
    let tx = client.send_transaction(dataitem).await.unwrap();
    println!("tx: {:?}", tx);
    let client = BundlerClient::default().build().unwrap();
    let info = client.info().await.unwrap();
    println!("{:?}", info);
    let client = BundlerClient::turbo().build().unwrap();
    let info = client.info().await.unwrap();
    println!("{:?}", info);
    let client = BundlerClient::turbo().build().unwrap();
    let price = client.bytes_price(99999).await.unwrap();
    println!("{:?}", price);
    let client = BundlerClient::turbo().build().unwrap();
    let rates = client.get_rates().await.unwrap();
    println!("{:?}", rates);
    let client = BundlerClient::turbo().build().unwrap();
    let status = client.status("w5n6r6PvqBRph2or4WiyjLumL9HE-IR_JgEcnct_3b0").await.unwrap();
    println!("{:?}", status);

    Load Network Bundler

    The LN Bundler is the fastest, cheapest and most scalable way to store EVM data onchain

    ⚡ Quickstart

    To upload data to Load Network with the alphanet bundling service, see here in the quickstart docs for the upload SDK and example repository.

    About

Load Network Bundler is a data protocol specification and library that introduces the first bundled EVM transactions format. This protocol draws inspiration from Arweave's ANS-102 specification.

    Bundler as data protocol and library is still in PoC (Proof of Concept) phase - not recommended for production usage, testing purposes only.

For the JS/TS version of LN bundles, click here.

    Advantages of Load Network bundled transactions

    • Reduces transaction overhead fees from multiple fees (n) per n transaction to a single fee per bundle of envelopes (n transactions)

    • Enables third-party services to handle bundle settlement on LN (will be decentralized with LOAD1)

• Maximizes the TPS capacity of LN without requiring additional protocol changes or constraints

• Supports relational data grouping by combining multiple related transactions into a single bundle

    Protocol Specification

    Nomenclature

    • Bundler: Refers to the data protocol specification of the EVM bundled transactions on Load Network.

    • Envelope: A legacy EVM transaction that serves as the fundamental building block and composition unit of a Bundle.

• Bundle: An EIP-1559 transaction that groups multiple envelopes (n > 0), enabling efficient transaction batching and processing.

• Large Bundle: A transaction that carries multiple bundles.

• Bundler Lib: Refers to the Bundler Rust library that facilitates composing and propagating Bundler's bundles.

    1. Bundle Format

    A bundle is a group of envelopes organized through the following process:

    1. Envelopes MUST be grouped in a vector

    2. The bundle is Borsh serialized according to the BundleData type

    3. The resulting serialization vector is compressed using Brotli compression

4. The Borsh-Brotli serialized-compressed vector is added as input (calldata) to an EIP-1559 transaction

5. The resulting bundle is broadcasted on Load Network with target set to the 0xbabe addresses based on bundle version

    Bundles Versioning

    Bundles versioning is based on the bundles target address:

| Bundle Version | Bundler Target Acronym | Bundler Target Address |
| --- | --- | --- |
| v0.1.0 | 0xbabe1 | 0xbabe1d25501157043c7b4ea7CBC877B9B4D8A057 |
| v0.2.0 | 0xbabe2 | 0xbabe2dCAf248F2F1214dF2a471D77bC849a2Ce84 |

    2. Envelope Format

    An envelope is a signed Legacy EVM transaction with the following MUSTs and restrictions.

1. Transaction Fields

  • nonce: MUST be 0

  • gas_price: MUST be 0

  • gas_limit: MUST be 0

  • value: MUST be 0

2. Size Restrictions

  • Total Borsh-Brotli compressed envelopes (Bundle data) MUST be under 9 MB

  • Total Tags bytes size MUST be <= 2048 bytes before compression

3. Signature Requirements

  • Each envelope MUST have a valid signature

4. Usage Constraints

  • MUST be used strictly for data settling on Load Network

  • MUST only contain the envelope's calldata, with optional target setting (default fallback to the ZERO address)

  • CANNOT be used for:

    • tLOAD transfers

    • Contract interactions

    • Any purpose other than data settling

    3. Transaction Type Choice

    The selection of transaction types follows clear efficiency principles. Legacy transactions were chosen for envelopes due to their minimal size (144 bytes), making them the most space-efficient option for data storage. EIP-1559 transactions were adopted for bundles as the widely accepted standard for transaction processing.

    4. Notes

    • Envelopes exist as signed Legacy transactions within bundles but operate under distinct processing rules - they are not individually processed by the Load Network as transactions, despite having the structure of a Legacy transaction (signed data with a Transaction type). Instead, they are bundled together and processed as a single onchain transaction (therefore the advantage of Bundler).

    • Multiple instances of the same envelope within a bundle are permissible and do not invalidate either the bundle or the envelopes themselves. These duplicate instances are treated as copies sharing the same timestamp when found in a single bundle. When appearing across different bundles, they are considered distinct instances with their respective bundle timestamps (valid envelopes and considered as copies of distinct timestamps).

    • Since envelopes are implemented as signed Legacy transactions, they are strictly reserved for data settling purposes. Their use for any other purpose is explicitly prohibited for the envelope's signer security.

    Large Bundle

    About

A Large Bundle is a bundle under version 0xbabe2 that exceeds the Load Network L1 and 0xbabe1 transaction size limits, introducing very high size efficiency for data settling on LN. For example, with Alphanet v0.4.0 running @ 500 mgas/s, a Large Bundle has a max size of 246 GB. For the sake of DevX and simplicity of the current 0xbabe2 stack, Large Bundles in the Bundler SDK have been limited to 2GB, while on the network level the max size is 246GB.

    SuperAccount

    A Super Account is a set of wallets created and stored as keystore wallets locally under your chosen directory. In Bundler terminology, each wallet is called a "chunker". Chunkers optimize the DevX of uploading LB chunks to LN by splitting each chunk to a chunker (~4MB per chunker), moving from a single-wallet single-threaded design in data uploads to a multi-wallet multi-threaded design.

    Architecture design

Large Bundles are built on top of the Bundler data specification. In simple terms, a Large Bundle consists of n smaller chunks (standalone bundles) that are sequentially connected tail-to-head. The Large Bundle itself is a reference to all of these sequentially related chunks: it packs all of the chunk IDs in a single 0xbabe2 bundle that is then sent to Load Network.

    Large Bundle Size Calculation

    Determining Number of Chunks

    To store a file of size S (in MB) with a chunk size C, the number of chunks (N) is calculated as:

    N = ⌊S/C⌋ + [(S mod C) > 0]

    Special case: if S < C then N = 1
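
A quick TypeScript sanity check of the formula (sizes in MB; chunkCount is a hypothetical helper for illustration, not part of the Bundler SDK):

// N = ⌊S/C⌋ + [(S mod C) > 0], with N = 1 when S < C
function chunkCount(S: number, C: number): number {
  if (S < C) return 1; // special case: file smaller than one chunk
  return Math.floor(S / C) + (S % C > 0 ? 1 : 0);
}

console.log(chunkCount(10, 4)); // 3 chunks for a 10 MB file with 4 MB chunks
console.log(chunkCount(8, 4));  // 2 chunks, no remainder
console.log(chunkCount(1, 4));  // 1 chunk (S < C)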

    Maximum Theoretical Size

    The bundling actor collects all hash receipts of the chunks, orders them in a list, and uploads this list as a LN L1 transaction. The size components of a Large Bundle are:

• 2 brackets [ ] = 2 bytes

• EVM transaction hash without the "0x" prefix = 64 bytes per hash

• 2 bytes for the surrounding quotes, plus 2 bytes for the comma and space separator (one less separator at the end, so subtract 2 from the total)

• Size per chunk's hash = 64 + 2 + 2 = 68 bytes

Therefore: Total hashes size = 2 + (N × 68) - 2 = 68N bytes

    Maximum Capacity Calculation

    • Maximum L1 transaction input size (C_tx) = 4 MB = 4_194_304 bytes

    • Maximum number of chunks (Σn) = C_tx ÷ 68 = 4_194_304 ÷ 68 = 61_680 chunks

• Maximum theoretical Large Bundle size (C_max) = Σn × C_tx = 61_680 × 4 MB = 246,720 MB ≈ 246.72 GB
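
The same capacity math as a short TypeScript sketch (using the decimal MB-to-GB conversion the docs use):

const cTxBytes = 4_194_304;                            // max L1 tx input size (4 MB)
const bytesPerHash = 68;                               // per-chunk hash footprint, derived above
const maxChunks = Math.floor(cTxBytes / bytesPerHash); // 61_680 chunks
const maxSizeMB = maxChunks * 4;                       // each chunk is itself a 4 MB bundle
console.log(maxChunks, maxSizeMB / 1000, "GB");        // 61680, 246.72 GB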

    Load Network Bundles Limitation

| Network gaslimit | L1 tx input size | 0xbabe1 size | 0xbabe2 size |
| --- | --- | --- | --- |
| 500 mgas/s (current) | 4MB | 4MB | 246 GB |
| 1 gigagas/s (upcoming) | 8MB | 8MB | 492 GB |

    Bundler Library

    Import Bundler in your project

    0xbabe1 Bundles

    Build an envelope, build a bundle

    Example: Build a bundle packed with envelopes

    Example: Send tagged envelopes

    0xbabe2 Large Bundle

    Example: construct and disperse a Large Bundle single-threaded

    Example: construct and disperse a Large Bundle multi-threaded

    Example: Retrieve Large Bundle data

For more examples, check the tests in lib.rs.

    HTTP API

• Base endpoint: https://bundler.load.rs/

    Retrieve full envelopes data of a given bundle

    Retrieve full envelopes data of a given bundle (with from's envelope property derived from sig)

    Retrieve envelopes ids of a given bundle

    N.B: All of the /v1 methods (0xbabe1) are available under /v2 for 0xbabe2 Large Bundles.

    Resolve the content of a Large Bundle (not efficient, experimental)

    Cost Efficiency: some comparisons

    SSTORE2 VS LN L1 calldata


    In the comparison below, we tested data settling of 1MB of non-zero bytes. LN's pricing of non-zero bytes (8 gas) and large transaction data size limit (8MB) allows us to fit the whole MB in a single transaction, paying a single overhead fee.

| Chain | File Size (bytes) | Number of Contracts/Tx | Gas Used | Gas Price (Gwei) | Cost in Native | Native Price (USD) | Total (USD) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LN L1 Calldata | 1,000,000 | 1 | 8,500,000 (8M for calldata & 500k as base gas fee) | 1 Gwei | - | - | ~$0.05 |
| Ethereum L1 | 1,000,000 | 41 | 202,835,200 gas | 20 Gwei | 4.056704 | $3641.98 | $14774.43 |
| Polygon Sidechain | 1,000,000 | 41 | 202,835,200 gas | 40 Gwei (L1: 20 Gwei) | 8.113408 | $0.52 | $4.21 |
| BSC L1 | 1,000,000 | 41 | 202,835,200 gas | 5 Gwei | 1.014176 | $717.59 | $727.76 |
| Arbitrum (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.1 Gwei (L1: 20 Gwei) | 0.020284 (+0.128168 L1 fee) | $3641.98 | $540.66 |
| Avalanche L1 | 1,000,000 | 41 | 202,835,200 gas | 25 Gwei | 5.070880 | $43.90 | $222.61 |
| Base (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.000203 (+0.128168 L1 fee) | $3641.98 | $467.52 |
| Optimism (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.000203 (+0.128168 L1 fee) | $3641.98 | $467.52 |
| Blast (Optimistic L2) | 1,000,000 | 41 | 202,835,200 gas (+15,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.000203 (+0.128168 L1 fee) | $3641.98 | $467.52 |
| Linea (ZK L2) | 1,000,000 | 41 | 202,835,200 gas (+12,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.010142 (+0.072095 L1 fee) | $3641.98 | $299.50 |
| Scroll (ZK L2) | 1,000,000 | 41 | 202,835,200 gas (+12,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.010142 (+0.072095 L1 fee) | $3641.98 | $299.50 |
| Moonbeam (Polkadot) | 1,000,000 | 41 | 202,835,200 gas (+NaN L1 gas) | 100 Gwei | 20.283520 | $0.27 | $5.40 |
| Polygon zkEVM (ZK L2) | 1,000,000 | 41 | 202,835,200 gas (+12,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.010142 (+0.072095 L1 fee) | $3641.98 | $299.50 |
| Solana L1 | 1,000,000 | 98 | 490,000 imports | N/A | 0.000495 (0.000005 deposit) | $217.67 | $0.11 |

    SSTORE2 VS LN L1 Calldata VS LN Bundler 0xbabe1


    Now let's take the data even higher, but for simplicity, let's not fit the whole data in a single LN L1 calldata transaction. Instead, we'll split it into 1MB transactions (creating multiple data settlement overhead fees): 5MB, 5 txs of 1 MB each:

| Chain | File Size (bytes) | Number of Contracts/Tx | Gas Used | Gas Price (Gwei) | Cost in Native | Native Price (USD) | Total (USD) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LN Bundler 0xbabe1 | 5,000,000 | 1 | 40,500,000 (40M for calldata & 500k as base gas fee) | 1 Gwei | - | - | ~$0.25-$0.27 |
| LN L1 Calldata | 5,000,000 | 5 | 42,500,000 (40M for calldata & 2.5M as base gas fee) | 1 Gwei | - | - | ~$0.22 |
| Ethereum L1 | 5,000,000 | 204 | 1,009,228,800 gas | 20 Gwei | 20.184576 | $3650.62 | $73686.22 |
| Polygon Sidechain | 5,000,000 | 204 | 1,009,228,800 gas | 40 Gwei (L1: 20 Gwei) | 40.369152 | $0.52 | $20.95 |
| BSC L1 | 5,000,000 | 204 | 1,009,228,800 gas | 5 Gwei | 5.046144 | $717.75 | $3621.87 |
| Arbitrum (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.1 Gwei (L1: 20 Gwei) | 0.100923 (+0.640836 L1 fee) | $3650.62 | $2707.88 |
| Avalanche L1 | 5,000,000 | 204 | 1,009,228,800 gas | 25 Gwei | 25.230720 | $44.01 | $1110.40 |
| Base (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.001009 (+0.640836 L1 fee) | $3650.62 | $2343.13 |
| Optimism (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.001009 (+0.640836 L1 fee) | $3650.62 | $2343.13 |
| Blast (Optimistic L2) | 5,000,000 | 204 | 1,009,228,800 gas (+80,000,000 L1 gas) | 0.001 Gwei (L1: 20 Gwei) | 0.001009 (+0.640836 L1 fee) | $3650.62 | $2343.13 |
| Linea (ZK L2) | 5,000,000 | 204 | 1,009,228,800 gas (+60,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.050461 (+0.360470 L1 fee) | $3650.62 | $1500.16 |
| Scroll (ZK L2) | 5,000,000 | 204 | 1,009,228,800 gas (+60,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.050461 (+0.360470 L1 fee) | $3650.62 | $1500.16 |
| Moonbeam (Polkadot) | 5,000,000 | 204 | 1,009,228,800 gas (+NaN L1 gas) | 100 Gwei | 100.922880 | $0.27 | $26.94 |
| Polygon zkEVM (ZK L2) | 5,000,000 | 204 | 1,009,228,800 gas (+60,000,000 L1 gas) | 0.05 Gwei (L1: 20 Gwei) | 0.050461 (+0.360470 L1 fee) | $3650.62 | $1500.16 |
| Solana L1 | 5,000,000 | 489 tx | 2445.00k imports | N/A | 0.002468 (0.000023 deposit) | $218.44 | $0.54 |

    LN L1 Calldata VS LN Bundler 0xbabe1


    Let's compare storing 40 MB of data (40 x 1 MB transactions) using two different methods, considering the 8 MB bundle size limit:

| Metric | LN L1 Calldata | LN Bundler |
| --- | --- | --- |
| Transaction Format | 40 separate EIP-1559 transactions | 5 bundle transactions (8MB each, 40 * 1MB envelopes) |
| Total Data Size | 40 MB | 40 MB |
| Transactions per Bundle | 1 MB each | 8 x 1MB per bundle |
| Gas Cost per Tx | 8.5M gas (8M calldata + 500k base) | 64.5M gas (64M + 500k base) per bundle |
| Number of Base Fees | 40 | 5 |
| Total Gas Used | 340M gas (40 x 8.5M) | 322.5M gas (5 x 64.5M) |
| Gas Price | 1 Gwei | 1 Gwei |
| Total Cost | ~$1.5-1.7 | ~$1.3 |
| Cost Savings | - | ~15% cheaper |

Table data sources: Load Network price calculator, EVM storage calculator.

Source Code: https://github.com/weaveVM/bundler

    Load S3 Agentic Storage

    About the first data agents layer

    It would be best if absolutely everything was immortalized onchain forever… but it’s just not practical. Permanent data is around $0.015 per MB. Not expensive for something vitally important that needs to live for 200 years; very expensive for a few hidden gems amid terabytes of garbage.

    In an ideal world, we’d have a way to automatically determine an artifact’s significance before making it permanent. But how do we know what is worth keeping forever?

    One idea we had at Decent Land Labs was an agent-based way to plug into a temporary storage layer and immortalize some of its data on Arweave if certain conditions were met.

    Let's explore the current Load S3 data storage agents list.


    pub struct BundleData {
        pub envelopes: Vec<TxEnvelopeWrapper>,
    }
    pub struct Tag {
        pub name: String,
        pub value: String,
    }
    
    pub struct EnvelopeSignature {
        pub y_parity: bool,
        pub r: String,
        pub s: String,
    }
    
    pub struct TxEnvelopeWrapper {
        pub chain_id: u64,
        pub nonce: u64,
        pub gas_price: u128,
        pub gas_limit: u64,
        pub to: String,
        pub value: String,
        pub input: String,
        pub hash: String,
        pub signature: EnvelopeSignature,
        pub tags: Option<Vec<Tag>>,
    }
    use bundler::utils::core::super_account::SuperAccount;
    // init SuperAccount instance
    let super_account = SuperAccount::new()
        .keystore_path(".bundler_keystores".to_string())
        .pwd("weak-password".to_string()) // keystore pwd
        .funder("private-key".to_string()) // the pk that will fund the chunkers
        .build();
    // create chunkers
    let _chunkers = super_account.create_chunkers(Some(256)).await.unwrap(); // Some(amount) of chunkers
// fund chunkers (1 tLOAD each)
let _fund = super_account.fund_chunkers().await.unwrap(); // will fund each chunker with 1 tLOAD
    // retrieve chunkers
    let loaded_chunkers = super_account.load_chunkers(None).await.unwrap(); // None to load all chunkers
    bundler = { git = "https://github.com/weaveVM/bundler", branch = "main" }
    use bundler::utils::core::envelope::Envelope;
    use bundler::utils::core::bundle::Bundle;
    use bundler::utils::core::tags::Tag;
    
    
    // Envelope
    let envelope = Envelope::new()
        .data(byte_vec)
        .target(address)
        .tags(tags)
        .build()?;
    
    // Bundle
    let bundle_tx = Bundle::new()
        .private_key(private_key)
        .envelopes(envelopes)
        .build()
        .propagate()
        .await?;
    async fn send_bundle_without_target() -> eyre::Result<String> {
        // will fail until a tLOAD funded EOA (pk) is provided
        let private_key = String::from("");
        
        let mut envelopes: Vec<Envelope> = vec![];
        
        for _ in 0..10 {
            let random_calldata: String = generate_random_calldata(128_000); // 128 KB of random calldata
            let envelope_data = serde_json::to_vec(&random_calldata).unwrap();
            
            let envelope = Envelope::new()
                .data(Some(envelope_data))
                .target(None)
                .build()?;
                
            envelopes.push(envelope);
        }
        
        let bundle_tx = Bundle::new()
            .private_key(private_key)
            .envelopes(envelopes)
            .build()
            .propagate()
            .await?;
            
        Ok(bundle_tx)
    }
        async fn send_envelope_with_tags() -> eyre::Result<String> {
            // will fail until a tLOAD funded EOA (pk) is provided
            let private_key = String::from("");
    
            let mut envelopes: Vec<Envelope> = vec![];
            
            // add your tags to a vector
            let tags = vec![Tag::new(
                "Content-Type".to_string(),
                "text/plain".to_string(),
            )];
    
            for _ in 0..1 {
                let random_calldata: String = generate_random_calldata(128_000); // 128 KB of random calldata
                let envelope_data = serde_json::to_vec(&random_calldata).unwrap();
                let envelope = Envelope::new()
                    .data(Some(envelope_data))
                    .target(None)
                    .tags(Some(tags.clone())) // add your tags
                    .build()
                    .unwrap();
                envelopes.push(envelope);
            }
    
            let bundle_tx = Bundle::new()
                .private_key(private_key)
                .envelopes(envelopes)
                .build()
                .expect("REASON")
                .propagate()
                .await
                .unwrap();
            
            Ok(bundle_tx)
        }
    use bundler::utils::core::large_bundle::LargeBundle;
    
        async fn send_large_bundle_without_super_account() -> eyre::Result<String> {
            let private_key = String::from("");
            let content_type = "text/plain".to_string();
            let data = "~UwU~".repeat(4_000_000).as_bytes().to_vec();
    
        // finalize() returns the Large Bundle's transaction hash
        let large_bundle_hash = LargeBundle::new()
            .data(data)
            .private_key(private_key)
            .content_type(content_type)
            .chunk()
            .build()?
            .propagate_chunks()
            .await?
            .finalize()
            .await?;

        Ok(large_bundle_hash)
        }
        async fn send_large_bundle_with_super_account() {
            // will fail until a tLOAD funded EOA (pk) is provided, take care about nonce if same wallet is used as in test_send_bundle_with_target
            let private_key = String::from("");
            let content_type = "text/plain".to_string();
            let data = "~UwU~".repeat(8_000_000).as_bytes().to_vec();
            let super_account = SuperAccount::new()
                .keystore_path(".bundler_keystores".to_string())
                .pwd("test".to_string());
    
            let large_bundle = LargeBundle::new()
                .data(data)
                .private_key(private_key)
                .content_type(content_type)
                .super_account(super_account)
                .chunk()
                .build()
                .unwrap()
                .super_propagate_chunks()
                .await
                .unwrap()
                .finalize()
                .await
                .unwrap();
    
            println!("{:?}", large_bundle);
        }
        async fn retrieve_large_bundle() -> eyre::Result<Vec<u8>> {
            let large_bundle = LargeBundle::retrieve_chunks_receipts(
                "0xb58684c24828f8a80205345897afa7aba478c23005e128e4cda037de6b9ca6fd".to_string(),
            )
            .await?
            .reconstruct_large_bundle()
            .await?;
            
            Ok(large_bundle)
        }
    GET /v1/envelopes/:bundle_txid
    GET /v1/envelopes-full/:bundle_txid
    GET /v1/envelopes/ids/:bundle_txid
    GET /v2/resolve/:large_bundle_txid

    1,000,000

    41

    202,835,200 gas

    40 Gwei (L1: 20 Gwei)

    8.113408

    $0.52

    $4.21

    BSC L1

    1,000,000

    41

    202,835,200 gas

    5 Gwei

    1.014176

    $717.59

    $727.76

    Arbitrum (Optimistic L2)

    1,000,000

    41

    202,835,200 gas (+15,000,000 L1 gas)

    0.1 Gwei (L1: 20 Gwei)

    0.020284 (+0.128168 L1 fee)

    $3641.98

    $540.66

    Avalanche L1

    1,000,000

    41

    202,835,200 gas

    25 Gwei

    5.070880

    $43.90

    $222.61

    Base (Optimistic L2)

    1,000,000

    41

    202,835,200 gas (+15,000,000 L1 gas)

    0.001 Gwei (L1: 20 Gwei)

    0.000203 (+0.128168 L1 fee)

    $3641.98

    $467.52

    Optimism (Optimistic L2)

    1,000,000

    41

    202,835,200 gas (+15,000,000 L1 gas)

    0.001 Gwei (L1: 20 Gwei)

    0.000203 (+0.128168 L1 fee)

    $3641.98

    $467.52

    Blast (Optimistic L2)

    1,000,000

    41

    202,835,200 gas (+15,000,000 L1 gas)

    0.001 Gwei (L1: 20 Gwei)

    0.000203 (+0.128168 L1 fee)

    $3641.98

    $467.52

    Linea (ZK L2)

    1,000,000

    41

    202,835,200 gas (+12,000,000 L1 gas)

    0.05 Gwei (L1: 20 Gwei)

    0.010142 (+0.072095 L1 fee)

    $3641.98

    $299.50

    Scroll (ZK L2)

    1,000,000

    41

    202,835,200 gas (+12,000,000 L1 gas)

    0.05 Gwei (L1: 20 Gwei)

    0.010142 (+0.072095 L1 fee)

    $3641.98

    $299.50

    Moonbeam (Polkadot)

    1,000,000

    41

    202,835,200 gas (+NaN L1 gas)

    100 Gwei

    20.283520

    $0.27

    $5.40

    Polygon zkEVM (ZK L2)

    1,000,000

    41

    202,835,200 gas (+12,000,000 L1 gas)

    0.05 Gwei (L1: 20 Gwei)

    0.010142 (+0.072095 L1 fee)

    $3641.98

    $299.50

    Solana L1

    1,000,000

    98

    490,000 imports

    N/A

    0.000495 (0.000005 deposit)

    $217.67

    $0.11

    5,000,000

    204

    1,009,228,800 gas

    20 Gwei

    20.184576

    $3650.62

    $73686.22

    Polygon Sidechain

    5,000,000

    204

    1,009,228,800 gas

    40 Gwei (L1: 20 Gwei)

    40.369152

    $0.52

    $20.95

    BSC L1

    5,000,000

    204

    1,009,228,800 gas

    5 Gwei

    5.046144

    $717.75

    $3621.87

    Arbitrum (Optimistic L2)

    5,000,000

    204

    1,009,228,800 gas (+80,000,000 L1 gas)

    0.1 Gwei (L1: 20 Gwei)

    0.100923 (+0.640836 L1 fee)

    $3650.62

    $2707.88

    Avalanche L1

    5,000,000

    204

    1,009,228,800 gas

    25 Gwei

    25.230720

    $44.01

    $1110.40

    Base (Optimistic L2)

    5,000,000

    204

    1,009,228,800 gas (+80,000,000 L1 gas)

    0.001 Gwei (L1: 20 Gwei)

    0.001009 (+0.640836 L1 fee)

    $3650.62

    $2343.13

    Optimism (Optimistic L2)

    5,000,000

    204

    1,009,228,800 gas (+80,000,000 L1 gas)

    0.001 Gwei (L1: 20 Gwei)

    0.001009 (+0.640836 L1 fee)

    $3650.62

    $2343.13

    Blast (Optimistic L2)

    5,000,000

    204

    1,009,228,800 gas (+80,000,000 L1 gas)

    0.001 Gwei (L1: 20 Gwei)

    0.001009 (+0.640836 L1 fee)

    $3650.62

    $2343.13

    Linea (ZK L2)

    5,000,000

    204

    1,009,228,800 gas (+60,000,000 L1 gas)

    0.05 Gwei (L1: 20 Gwei)

    0.050461 (+0.360470 L1 fee)

    $3650.62

    $1500.16

    Scroll (ZK L2)

    5,000,000

    204

    1,009,228,800 gas (+60,000,000 L1 gas)

    0.05 Gwei (L1: 20 Gwei)

    0.050461 (+0.360470 L1 fee)

    $3650.62

    $1500.16

    Moonbeam (Polkadot)

    5,000,000

    204

    1,009,228,800 gas (+NaN L1 gas)

    100 Gwei

    100.922880

    $0.27

    $26.94

    Polygon zkEVM (ZK L2)

    5,000,000

    204

    1,009,228,800 gas (+60,000,000 L1 gas)

    0.05 Gwei (L1: 20 Gwei)

    0.050461 (+0.360470 L1 fee)

    $3650.62

    $1500.16

    Solana L1

    5,000,000

    489 tx

    2445.00k imports

    N/A

    0.002468 (0.000023 deposit)

    $218.44

    $0.54

    Load Network WeaveDrive ExEx

Load Network's WeaveDrive ExEx for AO

Load Network has created the first Reth ExEx that attests data to the AO network following the WeaveDrive data protocol specification. Check the code integration and learn more about WeaveDrive (AOP-5).