0xbabe2: Large Data Uploads
Using Load Network's 0xbabe2 transaction format for large data uploads - the largest EVM transaction in history
About 0xbabe2 Transaction Format
0xbabe2 is the newest data transaction format from the Bundler data protocol. Also called "Large Bundle," it's a bundle under version 0xbabe2
(address: 0xbabe2dCAf248F2F1214dF2a471D77bC849a2Ce84) that exceeds the transaction input size limits of both Load Network L1 and 0xbabe1, dramatically improving the size efficiency of data storage on Load Network.
For example, with Alphanet v0.4.0 running at 500 Mgas/s, a Large Bundle has a maximum size of 246 GB. However, to ensure a smooth DevX and an optimal finalization period (aka "safe mode"), we have capped the 0xbabe2 transaction input limit at 2 GB at the Bundler SDK level. If you want a higher limit, you can raise it by changing a single constant!
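As a rough sanity check on what the default cap means in practice, the following sketch relates the 2 GB SDK limit to the ~4 MB per-chunk size used by the Bundler's chunkers (described below). The decimal units and the exact chunk size are assumptions for illustration:

```python
# Back-of-the-envelope numbers for the default 0xbabe2 SDK cap.
# Assumptions: decimal units (1 GB = 10**9 bytes) and ~4 MB per
# standalone chunk, as used by the Bundler SDK's chunkers.
CHUNK_SIZE = 4 * 10**6          # ~4 MB per standalone chunk
SDK_INPUT_LIMIT = 2 * 10**9     # default 0xbabe2 input cap (2 GB)

chunks_per_bundle = SDK_INPUT_LIMIT // CHUNK_SIZE
print(chunks_per_bundle)        # 500 chunks in a maxed-out 2 GB Large Bundle
```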
If you have 10 hours to spare, make several teas and watch this 1 GB video streamed to you onchain from the Load Network! 0xbabe2 txid: https://bundler.load.rs/v2/resolve/0x45cfaff6c3a507b1b1e88ef502ce32f93e7f515d9580ea66c340dc69e9d47608
Architecture design TLDR
In simple terms, a Large Bundle consists of n smaller chunks (standalone bundles) linked sequentially tail-to-head. The Large Bundle itself is then a reference to those chunks: it packs all of the chunk IDs into a single 0xbabe2 bundle and sends it to Load Network.
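The tail-to-head linking and the final reference bundle can be sketched as follows. This is a conceptual model only: the hashing scheme and the dict-shaped envelope are stand-ins, not the real bundle encoding:

```python
import hashlib

def chunk_data(data: bytes, chunk_size: int = 4 * 10**6):
    """Split raw data into standalone ~4 MB chunks (illustrative)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def link_chunks(chunks):
    """Link chunks tail-to-head: each chunk's ID commits to its
    predecessor. Plain SHA-256 is a stand-in for real bundle txids."""
    ids, prev = [], b""
    for chunk in chunks:
        prev = hashlib.sha256(prev + chunk).digest()
        ids.append(prev.hex())
    return ids

def large_bundle(chunk_ids):
    """The 0xbabe2 bundle is just a reference: all chunk IDs packed
    into one envelope (modeled here as a dict)."""
    return {"version": "0xbabe2", "chunks": chunk_ids}

bundle = large_bundle(link_chunks(chunk_data(b"x" * 10_000_000)))
print(len(bundle["chunks"]))  # 3 chunks for 10 MB of data at ~4 MB each
```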
To dive deeper into the architecture design behind 0xbabe2 and how it works, check out the 0xbabe2 section in the Bundler documentation.
0xbabe2 Broadcasting
Broadcasting a 0xbabe2 transaction to Load Network can be done via the Bundler Rust SDK in two ways: normal 0xbabe2 broadcasting (single-wallet, single-threaded) or the multi-wallet, multi-threaded method (using a SuperAccount).
Single-Threaded Broadcasting
Uploading data via the single-threaded method is efficient when the data isn't very large; otherwise, it incurs very high latency to finish all data chunking and then bundle finalization.
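A minimal sketch of the single-threaded flow (function names like `send_chunk` are illustrative, not the real SDK API): every chunk waits for the previous one to settle from a single wallet, so total latency grows linearly with data size:

```python
def send_chunk(wallet: str, chunk: bytes) -> str:
    """Stand-in for an on-chain chunk settlement call (hypothetical).
    In reality, network round-trips and confirmation latency accrue here."""
    return f"txid-{len(chunk):08d}"

def broadcast_single_threaded(wallet: str, data: bytes,
                              chunk_size: int = 4 * 10**6):
    # Sequential settlement: total time ~ n_chunks * per-chunk latency.
    txids = []
    for i in range(0, len(data), chunk_size):
        txids.append(send_chunk(wallet, data[i:i + chunk_size]))
    return txids

txids = broadcast_single_threaded("0xYourWallet", b"a" * 9_000_000)
print(len(txids))  # 3 sequential settlements for 9 MB of data
```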
Multi-Threaded Broadcasting
Multi-threaded 0xbabe2 broadcasting uses a multi-wallet architecture that settles chunks on Load Network in parallel, maximizing use of the network's data throughput. To broadcast a bundle with the multi-threaded method, you need to initiate a SuperAccount instance and fund the chunkers.
A SuperAccount is a set of wallets created and stored locally as keystore wallets under your chosen directory. In Bundler terminology, each wallet is called a "chunker." Chunkers improve the DevX of uploading a Large Bundle's chunks to Load Network by allocating each chunk (~4 MB) to its own chunker, moving data uploads from a single-wallet, single-threaded design to a multi-wallet, multi-threaded one.
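The chunk-to-chunker allocation can be sketched like this. Round-robin assignment and the thread-pool model are assumptions for illustration; the real SDK may schedule chunks differently:

```python
from concurrent.futures import ThreadPoolExecutor

def settle_chunk(chunker: str, chunk: bytes) -> str:
    """Stand-in for one chunker wallet settling one chunk on Load Network."""
    return f"{chunker}:{len(chunk)}"

def broadcast_multi_threaded(chunkers, data: bytes,
                             chunk_size: int = 4 * 10**6):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Round-robin each ~4 MB chunk to a chunker; settle them in parallel.
    with ThreadPoolExecutor(max_workers=len(chunkers)) as pool:
        futures = [
            pool.submit(settle_chunk, chunkers[i % len(chunkers)], c)
            for i, c in enumerate(chunks)
        ]
        return [f.result() for f in futures]

receipts = broadcast_multi_threaded([f"chunker-{i}" for i in range(4)],
                                    b"b" * 16_000_000)
print(len(receipts))  # 4 chunks settled across 4 chunkers in parallel
```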
0xbabe2 Data Retrieval
0xbabe2 transaction data retrieval can be done either via the Rust SDK or the REST API. Using the REST API to resolve a transaction (chunk reconstruction until the final data is reached) is faster for end users because it streams chunks, resulting in near-instant data usability (e.g., rendering in the browser).
Rust SDK
REST API
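Resolving a 0xbabe2 txid over the REST API is a plain HTTP GET against the `https://bundler.load.rs/v2/resolve/<txid>` endpoint shown earlier; the response streams the reconstructed data. A minimal sketch using only the Python standard library (the streaming chunk size is an arbitrary choice):

```python
from urllib.request import urlopen

BASE = "https://bundler.load.rs/v2/resolve"

def resolve_url(txid: str) -> str:
    """Build the resolve-endpoint URL for a 0xbabe2 transaction."""
    return f"{BASE}/{txid}"

def resolve(txid: str, out_path: str, block_size: int = 1 << 20) -> None:
    """Stream the reconstructed bundle data to a local file."""
    with urlopen(resolve_url(txid)) as resp, open(out_path, "wb") as f:
        while block := resp.read(block_size):
            f.write(block)

# The 1 GB video bundle mentioned above:
txid = "0x45cfaff6c3a507b1b1e88ef502ce32f93e7f515d9580ea66c340dc69e9d47608"
print(resolve_url(txid))
```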
What you can fit in a 492GB 0xbabe2 transaction
Modern LLMs
Claude 3 Haiku (70B params)
3.51 models (16-bit) or 14.06 models (4-bit)
Claude 3 Sonnet (175B params)
1.41 models (16-bit) or 5.62 models (4-bit)
Claude 3 Opus (350B params)
0.70 models (16-bit) or 2.81 models (4-bit)
Claude 3.5 Sonnet (250B params)
0.98 models (16-bit) or 3.94 models (4-bit)
Claude 3.7 Sonnet (300B params)
0.82 models (16-bit) or 3.28 models (4-bit)
GPT-4o (1500B params est.)
0.16 models (16-bit) or 0.66 models (4-bit)
GPT-4 Turbo (1100B params est.)
0.22 models (16-bit) or 0.89 models (4-bit)
Llama 3 70B
3.51 models (16-bit) or 14.06 models (4-bit)
Llama 3 405B
0.61 models (16-bit) or 2.43 models (4-bit)
Gemini Pro (220B params est.)
1.12 models (16-bit) or 4.47 models (4-bit)
Gemini Ultra (750B params est.)
0.33 models (16-bit) or 1.31 models (4-bit)
Mistral Large (123B params est.)
2.00 models (16-bit) or 8.00 models (4-bit)
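The model counts above follow from simple arithmetic: a P-billion-parameter checkpoint at b bits per weight occupies roughly P × b/8 GB (decimal units, ignoring file-format overhead and optimizer state). For example, for a 70B-parameter model:

```python
def models_per_bundle(params_b: float, bits: int,
                      capacity_gb: float = 492) -> float:
    """How many model checkpoints fit in one 492 GB 0xbabe2 bundle.
    params_b: parameter count in billions; bits: bits per weight.
    Uses decimal units (1 GB = 10**9 bytes); ignores format overhead."""
    model_gb = params_b * (bits / 8)   # e.g. 70B at 16-bit -> 140 GB
    return round(capacity_gb / model_gb, 2)

print(models_per_bundle(70, 16))  # 3.51 copies of a 70B model at 16-bit
print(models_per_bundle(70, 4))   # 14.06 copies at 4-bit
```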
Blockchain Data
Solana's State Snapshot (~70GB)
~7 instances
Bitcoin Full Ledger (~625 GB)
~78% of the ledger
Ethereum Full Ledger (~1250 GB)
~40% of the ledger
Ethereum blobs (~2.64 GB per day)
~186 days worth of blob data
Celestia's max throughput per day (112.5 GB)
4.37× capacity
Media Files
MP3 Songs (4MB each)
123,000 songs
Full HD Movies (5GB each)
98 movies
4K Video Footage (2GB per hour)
246 hours
High-Resolution Photos (3MB each)
164,000 photos
Ebooks (5MB each)
~98,000 books
Documents/Presentations (1MB each)
492,000 files
Other Data
Database Records (5KB per record)
~98 million records
Virtual Machine Images (8GB each)
61 VMs
Docker container images (500MB each)
984 containers
Genome sequences (4GB each)
123 genomes