S3 client implementation for the unified storage-cli tool. This module provides S3-compatible blobstore operations through the main storage-cli binary.

Note: this is not a standalone CLI. Use the main storage-cli binary with the `-s s3` flag to access the S3 functionality.
For general usage and build instructions, see the main README.
The S3 client requires a JSON configuration file with the following structure:

```
{
  "bucket_name": "<string> (required)",
  "credentials_source": "<string> [static|env_or_profile|none]",
  "access_key_id": "<string> (required if credentials_source = 'static')",
  "secret_access_key": "<string> (required if credentials_source = 'static')",
  "region": "<string> (optional - default: 'us-east-1')",
  "host": "<string> (optional)",
  "port": <int> (optional),
  "ssl_verify_peer": <bool> (optional - default: true),
  "use_ssl": <bool> (optional - default: true),
  "signature_version": "<string> (optional)",
  "server_side_encryption": "<string> (optional)",
  "sse_kms_key_id": "<string> (optional)",
  "download_concurrency": <int> (optional - default: 5),
  "download_part_size": <int64> (optional - default: 5242880), # 5 MB
  "upload_concurrency": <int> (optional - default: 5),
  "upload_part_size": <int64> (optional - default: 5242880), # 5 MB
  "multipart_copy_threshold": <int64> (optional - default: 5368709120), # 5 GB - files larger than this use multipart copy
  "multipart_copy_part_size": <int64> (optional - default: 104857600), # 100 MB - must be at least 5 MB
  "single_upload_threshold": <int64> (optional - default: 0), # bytes; files <= this use a single PutObject call, larger files use multipart upload. 0 means always use multipart. Max 5 GB for AWS S3. GCS ignores this and always uses single upload.
  "request_checksum_calculation_enabled": <bool> (optional - default: true),
  "response_checksum_calculation_enabled": <bool> (optional - default: true),
  "uploader_request_checksum_calculation_enabled": <bool> (optional - default: true)
}
```

Usage examples:
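The commands that follow assume a configuration file named s3-config.json. As a sketch (the bucket name, region, and credentials below are placeholders, and `static` is only one of the supported `credentials_source` values), such a file could be written and syntax-checked like this:

```shell
# Write a minimal placeholder config. Every value below is an assumption
# for illustration only -- substitute your own bucket and credentials.
cat > s3-config.json <<'EOF'
{
  "bucket_name": "my-example-bucket",
  "credentials_source": "static",
  "access_key_id": "AKIA-PLACEHOLDER",
  "secret_access_key": "secret-placeholder",
  "region": "us-east-1"
}
EOF

# Sanity-check that the file is valid JSON before handing it to storage-cli.
python3 -m json.tool s3-config.json > /dev/null && echo "s3-config.json is valid JSON"
```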
```shell
# Upload a file to S3
storage-cli -s s3 -c s3-config.json put local-file.txt remote-object.txt

# Download a file from S3
storage-cli -s s3 -c s3-config.json get remote-object.txt downloaded-file.txt

# Check if an object exists
storage-cli -s s3 -c s3-config.json exists remote-object.txt

# List all objects
storage-cli -s s3 -c s3-config.json list

# Delete an object
storage-cli -s s3 -c s3-config.json delete remote-object.txt
```

Run unit tests from the repository root:
```shell
ginkgo --skip-package=integration --cover -v -r ./s3/...
```

- To run the integration tests, export the following variables into your environment:

```shell
export access_key_id=<YOUR_AWS_ACCESS_KEY>
export focus_regex="GENERAL AWS|AWS V2 REGION|AWS V4 REGION|AWS US-EAST-1"
export region_name=us-east-1
export s3_endpoint_host=s3.amazonaws.com
export secret_access_key=<YOUR_SECRET_ACCESS_KEY>
export stack_name=s3cli-iam
export bucket_name=s3cli-pipeline
```
- Set up the infrastructure with `./.github/scripts/s3/setup-aws-infrastructure.sh`.
- Run the desired tests by executing one or more of the `run-integration-*` scripts in `./.github/scripts/s3` (for `run-integration-s3-compat`, see the GCP or AliCloud setup below).
- Tear down the infrastructure with `./.github/scripts/s3/teardown-infrastructure.sh`.

All scripts should be run from the repository root: `setup-aws-infrastructure.sh` before, and `teardown-infrastructure.sh` after, the `run-integration-*` scripts.
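Before invoking the scripts, it can help to confirm that the expected variables are actually exported. A small helper sketch (`check_vars` is not part of the repository's scripts):

```shell
# check_vars: print any of the named environment variables that are unset
# or empty; returns non-zero if at least one is missing. Hypothetical
# helper for illustration only.
check_vars() {
  local missing=0 v
  for v in "$@"; do
    if [ -z "${!v:-}" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return "$missing"
}

# The AWS integration tests expect all of these to be set:
check_vars access_key_id secret_access_key region_name \
  s3_endpoint_host stack_name bucket_name focus_regex \
  || echo "export the variables above before running the integration scripts"
```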
Setup for GCP

- Create a bucket in GCP.
- Create access keys:
  - Navigate to IAM & Admin > Service Accounts.
  - Select your service account, or create a new one if needed.
  - Ensure your service account has the necessary permissions (such as Storage Object Creator, Storage Object Viewer, or Storage Admin), depending on what access you want.
  - Go to Cloud Storage and select Settings.
  - In the Interoperability section, create an HMAC key for your service account. This generates an "access key ID" and a "secret access key".
- Export the following variables into your environment:
```shell
export access_key_id=<YOUR_ACCESS_KEY>
export secret_access_key=<YOUR_SECRET_ACCESS_KEY>
export bucket_name=<YOUR_BUCKET_NAME>
export s3_endpoint_host=storage.googleapis.com
export s3_endpoint_port=443
```
- Run `run-integration-s3-compat.sh` in `./.github/scripts/s3`.
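If you also want to point storage-cli itself at GCS outside the integration scripts, the exported values above translate into a config file along these lines (a sketch; the bucket name and HMAC keys are placeholders):

```json
{
  "bucket_name": "<YOUR_BUCKET_NAME>",
  "credentials_source": "static",
  "access_key_id": "<YOUR_ACCESS_KEY>",
  "secret_access_key": "<YOUR_SECRET_ACCESS_KEY>",
  "host": "storage.googleapis.com",
  "port": 443
}
```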
Setup for AliCloud

- Create a bucket in AliCloud.
- Create access keys from RAM -> User -> Create AccessKey.
- Export the following variables into your environment:
```shell
export access_key_id=<YOUR_ACCESS_KEY>
export secret_access_key=<YOUR_SECRET_ACCESS_KEY>
export bucket_name=<YOUR_BUCKET_NAME>
export s3_endpoint_host="oss-<YOUR_REGION>.aliyuncs.com"
export s3_endpoint_port=443
```
- Run `run-integration-s3-compat.sh` in `./.github/scripts/s3`.
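As with GCS, the OSS endpoint can also be used from storage-cli directly; a config sketch under the same placeholder assumptions, differing only in the host:

```json
{
  "bucket_name": "<YOUR_BUCKET_NAME>",
  "credentials_source": "static",
  "access_key_id": "<YOUR_ACCESS_KEY>",
  "secret_access_key": "<YOUR_SECRET_ACCESS_KEY>",
  "host": "oss-<YOUR_REGION>.aliyuncs.com",
  "port": 443
}
```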