Quickstart

Get set up with the onchain data center

To feed the Load Network docs to your favourite LLM, grab the compressed knowledge (aka LLM.txt) file from Load Network:

https://www.llmtxt.xyz/g/loadnetwork/gitbook-sync/0 (last update: 14 June 2025)

Let's make it easy to get going with Load Network. This doc walks through the simplest ways to use Load across the most common use cases:

Upload data

The easiest way to upload data to Load Network is to use a bundling service. Bundling services cover upload costs on your behalf, and feel just like using a web2 API.

The recommended testnet bundling service endpoint is https://upload.onchain.rs.

Instantiate an uploader in the bundler-upload-sdk using this endpoint and the public testnet API key:

API_KEY=d025e132382aea412f4256049c13d0e92d5c64095d1c88e1f5de7652966b69af
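
If you're starting from a fresh project, setup looks roughly like this (assuming the SDK is published on npm under the name it's imported by, and that the key lives in a .env file, as the example below expects):

npm install bundler-upload-sdk dotenv
echo "API_KEY=d025e132382aea412f4256049c13d0e92d5c64095d1c88e1f5de7652966b69af" > .env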

Full upload example

import { BundlerSDK } from 'bundler-upload-sdk';
import { readFile } from 'fs/promises';
import 'dotenv/config';

// Point the SDK at the testnet bundling service, authenticating with
// the API key loaded from .env.
const bundler = new BundlerSDK('https://upload.onchain.rs/', process.env.API_KEY);

async function main() {
  try {
    // Read the file and tag it so gateways know how to serve it.
    const fileBuffer = await readFile('files/hearts.gif');
    const txHash = await bundler.upload([
      {
        file: fileBuffer,
        tags: {
          'content-type': 'image/gif',
        }
      }
    ]);
    // The upload is addressable by bundle tx hash; /0 picks the first item.
    console.log(`https://resolver.bot/bundle/${txHash}/0`);
  } catch (error) {
    console.error('Upload failed:', error.message);
    process.exit(1);
  }
}

main().catch(error => {
  console.error('Unhandled error:', error);
  process.exit(1);
});

Alternatively, clone this example repo to avoid copy-pasting.

Need to upload a huge amount of data?

The above example posts data in a single Load Network base layer tx. Single-tx uploads are bounded by Load's block size, so they top out at about 8 MB.

For practically unlimited upload sizes, you can use the large bundles spec to submit data in chunks. Chunks can even be uploaded in parallel, making large bundles a performant way to handle big uploads.
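
As an illustrative sketch only (the chunk size, endpoint, and uploadChunk helper below are placeholders, not the actual large bundles spec), parallel chunked uploads follow this general shape:

import { readFile } from 'fs/promises';

const CHUNK_SIZE = 4 * 1024 * 1024; // placeholder; the real spec defines its own chunking

// Hypothetical chunk endpoint; substitute whatever the large bundles
// service actually exposes.
async function uploadChunk(chunk, index) {
  const res = await fetch(`https://bundler.example/chunk/${index}`, {
    method: 'POST',
    body: chunk,
  });
  if (!res.ok) throw new Error(`chunk ${index} failed: ${res.status}`);
}

async function uploadLargeBundle(path) {
  const data = await readFile(path);

  // Slice the payload into fixed-size chunks.
  const chunks = [];
  for (let offset = 0; offset < data.length; offset += CHUNK_SIZE) {
    chunks.push(data.subarray(offset, offset + CHUNK_SIZE));
  }

  // Upload all chunks in parallel; this is what makes large bundles
  // a performant way to handle big payloads.
  await Promise.all(chunks.map((chunk, i) => uploadChunk(chunk, i)));
}

await uploadLargeBundle('files/big-file.bin');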

The Rust Bundler SDK makes it possible for developers to spin up their own bundling services with support for large bundles.

Integrating ledger storage

Chains like Avalanche, Metis and RSS3 use Load Network as a decentralized archive node. This works by feeding all new and historical blocks to an archiving service you can run yourself, pointed to your network's RPC.

Clone the archiver repo here
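
Running it is roughly the following (a sketch with illustrative names; the repo's README has the real configuration and run command):

# Clone the archiver, then point it at your network's RPC endpoint.
# The variable name and run step here are illustrative, not the real config.
git clone <archiver-repo-url> && cd <archiver-repo>
NETWORK_RPC_URL=https://rpc.your-network.xyz <run command from the README>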

As well as storing all real-time and historical data, Load Network can be used to reconstruct full chain state, effectively replicating exactly what archive nodes do, but with a decentralized storage layer underneath. Read here to learn how.

Using Load DA

With 125 MB/s data throughput and long-term data guarantees, Load Network can handle DA for every known L2 with 99.8% room to spare; all known L2s combined consume roughly 0.2% of that, i.e. about 0.25 MB/s.

Right now there are 4 ways you can integrate Load Network for DA:

  1. DIY

DIY docs are a work in progress, but the commit to add support for Load Network in Dymension can be used as a guide to implement Load DA elsewhere.

Work with us to use Load DA for your chain - get onboarded here.

Migrate from another storage layer

If your data is already on another storage layer like IPFS, Filecoin, Swarm or AWS S3, you can use specialized importer tools to migrate.

AWS S3

The Load S3 SDK provides a 1:1-compatible development interface for applications that use AWS S3 for storage, keeping method names and parameters intact, so the only change should be one line: the import.
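
A sketch of the swap (the package name below is an assumption; check the Load S3 SDK docs for the real one):

// Before:
// import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

// After: same names, different package (package name is a placeholder)
import { S3Client, PutObjectCommand } from 'load-s3-sdk';

// The rest of the S3 code stays the same:
const client = new S3Client({ region: 'us-east-1' });
await client.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'hello.txt',
  Body: 'hello load',
}));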

Filecoin / IPFS

The load-lassie import tool is the recommended way to easily migrate data stored via Filecoin or IPFS.

Just provide the CID you want to import to the API, e.g.:

https://lassie.load.rs/import/<CID>
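
For example, triggering an import from JavaScript (the response shape is an assumption; see the importer docs below):

// Kick off an import of a Filecoin/IPFS CID into Load Network.
const cid = '<CID>'; // the CID you want to import
const res = await fetch(`https://lassie.load.rs/import/${cid}`);
console.log(res.status, await res.text());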

The importer is also self-hostable and further documented here.

Swarm

Switching from Swarm to Load is as simple as changing the gateway you already use to resolve content from Swarm.

The first time Load's Swarm gateway sees a new hash, it uploads it to Load Network and serves it directly for subsequent calls. This effectively makes your Swarm data permanent on Load while maintaining the same hash.
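
Illustratively, the change is just the gateway host; the hash and path stay the same (the Load gateway URL below is a placeholder, use the actual one from the docs):

Before: https://<swarm-gateway>/bzz/<swarm-hash>/
After:  https://<load-swarm-gateway>/bzz/<swarm-hash>/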
