Connecting to a local MinIO S3 cluster (./s3_device.sh)
{endpoint, "http://localhost:9001"}. % Internal MinIO - dev
{public_endpoint, "https://your.hyperbeam-s3-cluster-endpoint.com"}. % Public-facing URL, used for presigned URLs
{access_key_id, <<"value">>}.
{secret_access_key, <<"value">>}.
{region, <<"value">>}.
Build and run the HyperBEAM node
# build the s3_nif device & run the local MinIO cluster
# (to connect to an external S3 cluster, run ./build.sh instead)
./s3_device.sh
rebar3 compile
erl -pa _build/default/lib/*/ebin
1> application:ensure_all_started(hb).
Configuring the local MinIO cluster
If you choose the local MinIO cluster route, you can set your access key ID and secret access key by creating a .env file here:
N.B.: the access key values for your local MinIO cluster must also be set to the same values in the s3_device.config file.
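As a minimal sketch, the .env could look like the following; the variable names assume the standard MinIO credential variables, so check ./s3_device.sh for the exact names it reads:

# .env (illustrative values; keep them in sync with s3_device.config)
MINIO_ROOT_USER=value
MINIO_ROOT_PASSWORD=value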
After running the HyperBEAM node with the ~s3@1.0 device, you can use the node_endpoint/~s3@1.0 URL as an S3-compatible API endpoint.
N.B. (regarding access authorization): as an end user (client) of ~s3@1.0, you only have to pass the accessKeyId in the request's credentials; the secretAccessKey value does not matter. This is due to the design of ~s3@1.0 access authorization: the device checks the access_key_id in the S3 request's Authorization header and validates that it matches the access_key_id defined in s3_device.config. Keep the access_key_id secret and use it as an access API key.
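For illustration, a minimal client sketch using the AWS SDK for JavaScript v3 as an example client (the node URL, region, and key values are placeholders; per the note above, only accessKeyId has to match s3_device.config):

import { S3Client } from "@aws-sdk/client-s3";

// Point the SDK at the node's ~s3@1.0 path (placeholder node URL).
const s3 = new S3Client({
  endpoint: "http://localhost:8734/~s3@1.0",
  region: "value",                 // must match the region in s3_device.config
  forcePathStyle: true,            // bucket name goes in the path, as MinIO-style endpoints expect
  credentials: {
    accessKeyId: "value",          // must match access_key_id in s3_device.config
    secretAccessKey: "anything",   // ignored by ~s3@1.0 authorization
  },
});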
1- Create an S3 client
2- Create a bucket
3- Get an object (with range)
4- Put an object (with expiry)
5- Generate a presigned get_object URL
The returned URL uses the public_endpoint preset in s3_device.config as the base URL. A combined sketch of these steps follows below.
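The sketch below covers steps 2-5 with the AWS SDK for JavaScript v3, reusing the s3 client from the earlier sketch; the bucket name, object key, and expiry values are placeholders rather than anything the device prescribes.

import {
  CreateBucketCommand,
  PutObjectCommand,
  GetObjectCommand,
} from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// 2- Create a bucket.
await s3.send(new CreateBucketCommand({ Bucket: "my-bucket" }));

// 4- Put an object with an expiry timestamp (assumes the device honours the Expires field).
await s3.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "hello.txt",
  Body: "Hello Load S3",
  ContentType: "text/plain",
  Expires: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours from now
}));

// 3- Get the object back, restricted to a byte range (first 5 bytes).
const ranged = await s3.send(new GetObjectCommand({
  Bucket: "my-bucket",
  Key: "hello.txt",
  Range: "bytes=0-4",
}));
console.log(await ranged.Body?.transformToString());

// 5- Generate a presigned get_object URL; its base URL comes from public_endpoint in s3_device.config.
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: "my-bucket", Key: "hello.txt" }),
  { expiresIn: 3600 }, // seconds
);
console.log(url);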
Cache layer
The NIF implements an in-memory LRU cache with size-based eviction. The following cache endpoints are available under the HyperBEAM HTTP API (intentionally not compatible with the S3 API spec):
1- Get cached object
Note: This endpoint requires no authentication
# cache vs S3 API GetObjectCommand
curl "http://localhost:8734/~s3@1.0/BUCKET_NAME/OBJECT_KEY"
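For comparison, a sketch of the same object read through the S3-compatible API, reusing the s3 client from the earlier sketch (this path goes through the device's normal authorization):

import { GetObjectCommand } from "@aws-sdk/client-s3";

// S3 API counterpart of the unauthenticated cache read above.
const obj = await s3.send(new GetObjectCommand({ Bucket: "BUCKET_NAME", Key: "OBJECT_KEY" }));
console.log(await obj.Body?.transformToString());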
Hybrid gateway
The hybrid gateway is an extension to hb_gateway_client.erl that makes it possible for the HyperBEAM node to retrieve both onchain ANS-104 dataitems posted to Arweave and offchain temporary dataitems held in object storage (via dev_s3.erl).
Workflow
hb_store_gateway.erl -> calls hb_gateway_client:read() -> tries to read from the local cache, then from Arweave (onchain dataitem) -> if not found onchain, checks the offchain dataitems S3 bucket
Offchain dataitems retrieval route: hb_gateway_s3:read() -> calls dev_s3:handle_s3_request() -> retrieves the dataitem's .ans104 object from the dev_s3 bucket
Test it
1- Create & set the offchain bucket
Make HyperBEAM aware of the dev_s3 bucket that stores your ANS-104 offchain dataitems, here (s3_bucket in default_message).
2- Add test data
Make sure to create the ~s3@1.0 bucket with the name you defined in hb_opts.erl, then add a fake offchain dataitem. If you want to test with existing signed offchain ANS-104 dataitems, check out the test-dataitems directory and store one of them in your ~s3@1.0 bucket.
Otherwise, you can generate a valid signed ANS-104 dataitem using the HyperBEAM Erlang shell:
1> rr("src/ar_tx.erl"). % load tx record definition
2> TX = #tx{data = <<"Hello Load S3">>, tags = [{<<"Content-Type">>, <<"text/plain">>}], format = ans104}. % build an unsigned ANS-104 dataitem
3> SignedTX = ar_bundles:sign_item(TX, hb:wallet()). % sign it with the node's wallet
4> ANS104Binary = ar_bundles:serialize(SignedTX). % serialize to the ANS-104 binary format
5> DataItemID = hb_util:encode(hb_tx:id(SignedTX, signed)). % encoded signed dataitem ID
6> file:write_file("TheDataItemId.ans104", ANS104Binary). % write the binary to a .ans104 file (name it after DataItemID)
% After that, store the dataitem on S3 as we did previously, or using the S3 SDK/client of your choice.
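For example, a minimal sketch of that upload through the S3-compatible API with the AWS SDK for JavaScript v3, reusing the s3 client from the earlier sketch (the bucket name is a placeholder and should match the s3_bucket you set in hb_opts.erl; keying the object by its .ans104 file name is an assumption):

import { readFile } from "node:fs/promises";
import { PutObjectCommand } from "@aws-sdk/client-s3";

// Upload the serialized ANS-104 dataitem written by the shell session above.
const body = await readFile("TheDataItemId.ans104");
await s3.send(new PutObjectCommand({
  Bucket: "offchain-dataitems",            // placeholder; use the s3_bucket value from hb_opts.erl
  Key: "TheDataItemId.ans104",
  Body: body,
  ContentType: "application/octet-stream",
}));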