
s3tools / s3cmd

Official s3cmd repo -- Command line tool for managing S3 compatible storage services (including Amazon S3 and CloudFront).


Top Related Projects

  • aws/aws-cli: Universal Command Line Interface for Amazon Web Services
  • minio/mc: Simple | Fast tool to manage MinIO clusters :cloud:
  • S3Proxy: Access other storage backends via the S3 API
  • aws/aws-sdk-go: AWS SDK for the Go programming language

Quick Overview

s3tools/s3cmd is a command-line tool for managing Amazon S3 (Simple Storage Service) and other compatible storage services. It provides a user-friendly interface for performing various operations such as uploading, downloading, and managing files and buckets on S3.

Pros

  • Comprehensive Functionality: s3cmd supports a wide range of S3 operations, including creating and managing buckets, uploading and downloading files, and managing access permissions.
  • Cross-Platform Compatibility: The tool is available for multiple operating systems, including Windows, macOS, and Linux, making it accessible to a broad user base.
  • Scripting and Automation: s3cmd integrates easily into scripts and automated workflows, letting users streamline their S3 management tasks (see the sketch after this list).
  • Active Development and Community: The project has an active community of contributors and is regularly updated to address issues and add new features.
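
A minimal automation sketch in the spirit of the scripting point above; the bucket name and paths are illustrative placeholders, not from the project docs:

#!/bin/sh
# Upload each rotated log to S3; delete the local copy only if the upload succeeded.
for f in /var/log/*.gz; do
    s3cmd put "$f" "s3://my-log-archive/$(hostname)/" && rm "$f"
done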

Cons

  • Dated Interface: s3cmd's command-line interface can feel dated next to more modern S3 management tools with more polished, discoverable interfaces.
  • Limited Functionality for Advanced Use Cases: While s3cmd covers the common S3 operations, it may lack some advanced features or integrations required for more complex workflows.
  • Dependency on Python: s3cmd is written in Python, which may be a limitation for users who prefer other ecosystems.
  • Potential for Compatibility Issues: As the S3 API evolves, s3cmd may lag behind newer service features, so users need to keep the tool up to date.

Code Examples

s3cmd is a command-line tool rather than a code library, so there is no API to call from application code.
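
The shell commands themselves serve as the examples. A few representative invocations (bucket and file names are placeholders):

s3cmd ls                                             # list your buckets
s3cmd put photo.jpg s3://your-bucket-name/photo.jpg  # upload a file
s3cmd get s3://your-bucket-name/photo.jpg copy.jpg   # download it again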

Getting Started

To get started with s3cmd, follow these steps:

  1. Install s3cmd:

    • On Windows, you can download the tool from the s3tools website.
    • On macOS or Linux, you can install s3cmd with your system's package manager (e.g., apt install s3cmd or brew install s3cmd) or with pip (pip install s3cmd).
  2. Configure s3cmd:

    • Run the s3cmd --configure command to set up your AWS credentials and other settings.
    • Follow the prompts to enter your AWS Access Key ID, Secret Access Key, and other required information.
  3. Perform basic operations:

    • List the contents of an S3 bucket: s3cmd ls s3://your-bucket-name
    • Upload a file to an S3 bucket: s3cmd put local-file.txt s3://your-bucket-name/remote-file.txt
    • Download a file from an S3 bucket: s3cmd get s3://your-bucket-name/remote-file.txt local-file.txt
    • Create a new S3 bucket: s3cmd mb s3://your-new-bucket-name
  4. Explore advanced features:

    • Synchronize a local directory with an S3 bucket: s3cmd sync local-directory/ s3://your-bucket-name/ (a dry-run preview sketch follows this list)
    • Set access permissions on an S3 object: s3cmd setacl s3://your-bucket-name/file.txt --acl-public
    • Monitor the progress of file transfers: s3cmd -v put local-file.txt s3://your-bucket-name/remote-file.txt
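
When trying sync on real data, it is prudent to preview the transfer first. A hedged sketch, with placeholder paths and bucket name:

s3cmd sync --dry-run local-directory/ s3://your-bucket-name/   # only show what would be transferred
s3cmd sync local-directory/ s3://your-bucket-name/             # then actually do it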

For more detailed information and usage examples, refer to the s3cmd documentation.

Competitor Comparisons

aws/aws-cli: Universal Command Line Interface for Amazon Web Services

Pros of aws/aws-cli

  • Comprehensive support for a wide range of AWS services, not just S3
  • Actively maintained and updated by the AWS team
  • Integrates seamlessly with other AWS tools and services

Cons of aws/aws-cli

  • Larger and more complex than s3tools/s3cmd, with a steeper learning curve
  • May have a higher resource footprint compared to s3tools/s3cmd
  • Requires AWS credentials to be configured, which can be a barrier for some users

Code Comparison

aws/aws-cli:

aws s3 cp local_file.txt s3://my-bucket/remote_file.txt

s3tools/s3cmd:

s3cmd put local_file.txt s3://my-bucket/remote_file.txt

Both tools reduce the upload to a single shell command: aws s3 cp and s3cmd put are nearly interchangeable for this task. The real difference is scope, with aws-cli covering nearly every AWS service while s3cmd focuses on S3-compatible storage.

minio/mc: Simple | Fast tool to manage MinIO clusters :cloud:

Pros of minio/mc

  • Supports a wider range of cloud storage providers, including AWS S3, Google Cloud Storage, and Azure Blob Storage, in addition to MinIO.
  • Provides a more modern and user-friendly command-line interface with improved functionality and performance.
  • Offers better integration with other tools and services, making it more versatile in a broader range of use cases.

Cons of minio/mc

  • May have a steeper learning curve for users familiar with the s3tools/s3cmd interface.
  • Some advanced features or customizations available in s3tools/s3cmd may not be as readily accessible in minio/mc.
  • Depending on the specific use case, the additional features and support for multiple cloud providers in minio/mc may not be necessary, making s3tools/s3cmd a more lightweight and focused option.

Code Comparison

s3tools/s3cmd:

def get_bucket_location(self, bucket_name):
    """
    Get the location constraint of a bucket.
    """
    try:
        response = self.send_request('GET', '/%s?location' % bucket_name)
        return response.data.strip()
    except S3Error as e:
        if e.code == 'NoSuchBucket':
            return None
        else:
            raise

minio/mc:

func (c *S3Client) GetBucketLocation(ctx context.Context, bucketName string) (string, error) {
    resp, err := c.api.GetBucketLocation(ctx, bucketName)
    if err != nil {
        return "", err
    }
    return resp.LocationConstraint, nil
}

The code snippets above show the implementation of the get_bucket_location function in s3tools/s3cmd and the equivalent GetBucketLocation function in minio/mc. Both functions aim to retrieve the location constraint of an S3 bucket, but the minio/mc version is written in Go, while the s3tools/s3cmd version is in Python.
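
For day-to-day use both tools are shell-driven. A hedged side-by-side sketch; the alias, keys, and bucket name are placeholders:

# s3cmd: credentials come from ~/.s3cfg after `s3cmd --configure`
s3cmd put local_file.txt s3://my-bucket/local_file.txt

# mc: register the endpoint once as an alias, then copy
mc alias set mys3 https://s3.amazonaws.com ACCESS_KEY SECRET_KEY
mc cp local_file.txt mys3/my-bucket/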

S3Proxy: Access other storage backends via the S3 API

Pros of S3Proxy

  • S3Proxy is a lightweight and fast proxy that exposes the S3 API in front of other storage backends, making it a good fit for applications that need S3 compatibility without the full client functionality of S3cmd.
  • S3Proxy supports a wide range of authentication methods, including AWS Signature Version 4, making it more flexible than S3cmd.
  • S3Proxy can be easily integrated into other applications, as it provides a simple and straightforward API.

Cons of S3Proxy

  • S3Proxy has a smaller feature set compared to S3cmd, as it is focused on providing a basic set of S3 functionality.
  • S3Proxy may not be as well-documented or have as large a community as S3cmd, which could make it more difficult to troubleshoot issues.
  • S3Proxy may not be as actively maintained as S3cmd, which could lead to compatibility issues with newer versions of AWS S3.

Code Comparison

S3cmd:

s3cmd get s3://my-bucket/my-file.txt my-file.txt
cat my-file.txt

S3Proxy:

// Sketch of S3Proxy's embedded Java usage via its builder API; details may vary by version.
S3Proxy s3Proxy = S3Proxy.builder()
        .blobStore(blobStore)  // a jclouds BlobStore backing the proxy
        .endpoint(URI.create("http://127.0.0.1:8080"))
        .build();
s3Proxy.start();

aws/aws-sdk-go: AWS SDK for the Go programming language

Pros of aws/aws-sdk-go

  • Comprehensive support for AWS services: The aws/aws-sdk-go provides a wide range of APIs for interacting with various AWS services, making it a powerful tool for building AWS-based applications.
  • Actively maintained and updated: The aws/aws-sdk-go is actively maintained by the AWS team, ensuring that it stays up-to-date with the latest AWS features and services.
  • Consistent API design: The SDK follows a consistent API design, making it easier for developers to work with multiple AWS services within the same codebase.

Cons of aws/aws-sdk-go

  • Complexity: The aws/aws-sdk-go can be more complex to use compared to simpler tools like s3tools/s3cmd, especially for developers who are new to AWS.
  • Dependency on AWS: The aws/aws-sdk-go is tightly coupled with AWS, which means that it may not be the best choice for developers who need to work with multiple cloud providers.

Code Comparison

s3tools/s3cmd:

s3cmd put local_file.txt s3://my-bucket/remote_file.txt

aws/aws-sdk-go:

// Assumes the aws, session, and s3 packages from github.com/aws/aws-sdk-go are imported.
svc := s3.New(session.New())
_, err := svc.PutObject(&s3.PutObjectInput{
    Bucket: aws.String("my-bucket"),
    Key:    aws.String("remote_file.txt"),
    Body:   file, // an io.ReadSeeker, e.g. an *os.File opened earlier
})
if err != nil {
    // Handle error
}


README

S3cmd tool for Amazon Simple Storage Service (S3)


S3cmd requires Python 2.6 or newer. Python 3+ is also supported starting with S3cmd version 2.

See installation instructions.

What is S3cmd

S3cmd (s3cmd) is a free command line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol, such as Google Cloud Storage or DreamHost DreamObjects. It is best suited for power users who are familiar with command line programs. It is also ideal for batch scripts and automated backup to S3, triggered from cron, etc.
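
As an illustration of the cron use case, a single crontab entry is enough. A hedged sketch; the schedule, paths, and bucket name are placeholders:

# mirror ~/documents to S3 every night at 02:30
30 2 * * * /usr/bin/s3cmd sync --no-progress /home/me/documents/ s3://my-backup-bucket/documents/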

S3cmd is written in Python. It's an open source project available under GNU Public License v2 (GPLv2) and is free for both commercial and private use. You will only have to pay Amazon for using their storage.

Lots of features and options have been added to S3cmd since its very first release in 2008. We recently counted more than 60 command line options, including multipart uploads, encryption, incremental backup, s3 sync, ACL and Metadata management, S3 bucket size, bucket policies, and more!

What is Amazon S3

Amazon S3 provides a managed internet-accessible storage service where anyone can store any amount of data and retrieve it later.

S3 is a paid service operated by Amazon. Before storing anything into S3 you must sign up for an "AWS" account (where AWS = Amazon Web Services) to obtain a pair of identifiers: Access Key and Secret Key. You will need to give these keys to S3cmd. Think of them as if they were a username and password for your S3 account.

Amazon S3 pricing explained

At the time of this writing the costs of using S3 are (in USD):

$0.023 per GB per month of storage space used

plus

$0.00 per GB - all data uploaded

plus

$0.000 per GB - first 1GB / month data downloaded
$0.090 per GB - up to 10 TB / month data downloaded
$0.085 per GB - next 40 TB / month data downloaded
$0.070 per GB - next 100 TB / month data downloaded
$0.050 per GB - data downloaded / month over 150 TB

plus

$0.005 per 1,000 PUT or COPY or LIST requests
$0.004 per 10,000 GET and all other requests

If for instance on 1st of January you upload 2GB of photos in JPEG from your holiday in New Zealand, at the end of January you will be charged $0.05 for using 2GB of storage space for a month (2 GB × $0.023/GB ≈ $0.046), $0.00 for uploading 2GB of data, and a few cents for requests. That comes to slightly over $0.05 for a complete backup of your precious holiday pictures.

In February you don't touch it. Your data are still on S3 servers so you pay $0.05 for those two gigabytes, but not a single cent will be charged for any transfer. That comes to $0.05 as an ongoing monthly cost of your backup. Not too bad.

In March you allow anonymous read access to some of your pictures and your friends download, say, 1500MB of them. As the files are owned by you, you are responsible for the costs incurred. That means at the end of March you'll be charged $0.05 for storage plus roughly $0.13 for the download traffic generated by your friends (about 1.5 GB at $0.090 per GB).

There is no minimum monthly contract or a setup fee. What you use is what you pay for. At the beginning my bill used to be like US$0.03 or even nil.

That's the pricing model of Amazon S3 in a nutshell. Check the Amazon S3 homepage for more details.

Needless to say, all this money is charged by Amazon itself; there is obviously no payment for using S3cmd :-)

Amazon S3 basics

Files stored in S3 are called "objects" and their names are officially called "keys". Since this is sometimes confusing for the users we often refer to the objects as "files" or "remote files". Each object belongs to exactly one "bucket".

To describe objects in S3 storage we invented a URI-like schema in the following form:

s3://BUCKET

or

s3://BUCKET/OBJECT

Buckets

Buckets are sort of like directories or folders with some restrictions:

  1. each user can have 100 buckets at most,
  2. bucket names must be unique amongst all users of S3,
  3. buckets cannot be nested into a deeper hierarchy, and
  4. a bucket name can only consist of basic alphanumeric characters plus dot (.) and dash (-). No spaces, no accented or UTF-8 letters, etc.

It is a good idea to use DNS-compatible bucket names. That for instance means you should not use upper case characters. While DNS compliance is not strictly required, some features described below are not available for buckets with DNS-incompatible names. A step further is using a fully qualified domain name (FQDN) for a bucket; that has even more benefits.

  • For example "s3://--My-Bucket--" is not DNS compatible.
  • On the other hand "s3://my-bucket" is DNS compatible but is not FQDN.
  • Finally "s3://my-bucket.s3tools.org" is DNS compatible and FQDN, provided you own the s3tools.org domain and can create the domain record for "my-bucket.s3tools.org".

Look for "Virtual Hosts" later in this text for more details regarding FQDN named buckets.

Objects (files stored in Amazon S3)

Unlike buckets, objects have almost no naming restrictions: an object name can be any UTF-8 string up to 1024 bytes long. Interestingly, the name can contain the forward slash character (/), so my/funny/picture.jpg is a valid object name. Note that there are no directories or buckets called my and funny; it is really a single object named my/funny/picture.jpg, and S3 does not care at all that it looks like a directory structure.

The full URI of such an image could be, for example:

s3://my-bucket/my/funny/picture.jpg

Public vs Private files

The files stored in S3 can be either Private or Public. Private files are readable only by the user who uploaded them, while Public files can be read by anyone. Additionally, Public files can be accessed over plain HTTP, not only with s3cmd or a similar tool.

The ACL (Access Control List) of a file can be set at the time of upload using --acl-public or --acl-private options with s3cmd put or s3cmd sync commands (see below).

Alternatively the ACL can be altered for existing remote files with s3cmd setacl --acl-public (or --acl-private) command.
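
For instance, two hedged one-liners; bucket and object names are placeholders:

s3cmd put --acl-public index.html s3://my-bucket/index.html   # public from the moment of upload
s3cmd setacl --acl-private s3://my-bucket/secret.txt          # make an existing remote file private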

Simple s3cmd HowTo

  1. Register for Amazon AWS / S3

Go to https://aws.amazon.com/s3, click the "Sign up for web service" button in the right column and work through the registration. You will have to supply your Credit Card details in order to allow Amazon to charge you for S3 usage. At the end you should have your Access and Secret Keys.

If you set up a separate IAM user, that user's access key must have at least the following permissions to do anything:

  • s3:ListAllMyBuckets
  • s3:GetBucketLocation
  • s3:ListBucket

Other example policies can be found at https://docs.aws.amazon.com/AmazonS3/latest/dev/example-policies-s3.html

  2. Run s3cmd --configure

You will be asked for the two keys - copy and paste them from your confirmation email or from your Amazon account page. Be careful when copying them! They are case sensitive and must be entered accurately or you'll keep getting errors about invalid signatures or similar.

Remember to add s3:ListAllMyBuckets permissions to the keys or you will get an AccessDenied error while testing access.
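
A trimmed sketch of the interactive session; the exact prompt wording varies between s3cmd versions:

$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Access Key: AKIA................
Secret Key: ........................................
...
Test access with supplied credentials? [Y/n] y
Save settings? [y/N] y
Configuration saved to '/home/user/.s3cfg'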

  3. Run s3cmd ls to list all your buckets.

As you just started using S3 there are no buckets owned by you as of now. So the output will be empty.

  4. Make a bucket with s3cmd mb s3://my-new-bucket-name

As mentioned above the bucket names must be unique amongst all users of S3. That means the simple names like "test" or "asdf" are already taken and you must make up something more original. To demonstrate as many features as possible let's create a FQDN-named bucket s3://public.s3tools.org:

$ s3cmd mb s3://public.s3tools.org

Bucket 's3://public.s3tools.org' created
  5. List your buckets again with s3cmd ls

Now you should see your freshly created bucket:

$ s3cmd ls

2009-01-28 12:34  s3://public.s3tools.org
  6. List the contents of the bucket:
$ s3cmd ls s3://public.s3tools.org
$

It's empty, indeed.

  7. Upload a single file into the bucket:
$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml

some-file.xml -> s3://public.s3tools.org/somefile.xml  [1 of 1]
 123456 of 123456   100% in    2s    51.75 kB/s  done

Upload a two-directory tree into the bucket's virtual 'directory':

$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/

File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]

As you can see we didn't have to create the /somewhere 'directory'. In fact it's only a filename prefix, not a real directory and it doesn't have to be created in any way beforehand.

Instead of using put with the --recursive option, you could also use the sync command:

$ s3cmd sync dir1 dir2 s3://public.s3tools.org/somewhere/
  8. Now list the bucket's contents again:
$ s3cmd ls s3://public.s3tools.org

                       DIR   s3://public.s3tools.org/somewhere/
2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml

Use --recursive (or -r) to list all the remote files:

$ s3cmd ls --recursive s3://public.s3tools.org

2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
2009-02-10 05:13        18   s3://public.s3tools.org/somewhere/dir1/file1-1.txt
2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir1/file1-2.txt
2009-02-10 05:13        16   s3://public.s3tools.org/somewhere/dir1/file1-3.log
2009-02-10 05:13        11   s3://public.s3tools.org/somewhere/dir2/file2-1.bin
2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir2/file2-2.txt
  9. Retrieve one of the files back and verify that it hasn't been corrupted:
$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml

s3://public.s3tools.org/somefile.xml -> some-file-2.xml  [1 of 1]
 123456 of 123456   100% in    3s    35.75 kB/s  done
$ md5sum some-file.xml some-file-2.xml

39bcb6992e461b269b95b3bda303addf  some-file.xml
39bcb6992e461b269b95b3bda303addf  some-file-2.xml

The checksum of the original file matches that of the retrieved one. Looks like it worked :-)

To retrieve a whole 'directory tree' from S3 use recursive get:

$ s3cmd get --recursive s3://public.s3tools.org/somewhere

File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'

Since the destination directory wasn't specified, s3cmd saved the directory structure in the current working directory ('.').

There is an important difference between:

get s3://public.s3tools.org/somewhere

and

get s3://public.s3tools.org/somewhere/

(note the trailing slash)

s3cmd always uses the last path part, i.e. the word after the last slash, for naming files.

In the case of s3://.../somewhere the last path part is 'somewhere' and therefore the recursive get names the local files as somewhere/dir1, somewhere/dir2, etc.

On the other hand in s3://.../somewhere/ the last path part is empty and s3cmd will only create 'dir1' and 'dir2' without the 'somewhere/' prefix:

$ s3cmd get --recursive s3://public.s3tools.org/somewhere/ ~/

File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '~/dir1/file1-1.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '~/dir1/file1-2.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '~/dir1/file1-3.log'
File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '~/dir2/file2-1.bin'

See? It's ~/dir1 and not ~/somewhere/dir1 as it was in the previous example.

  10. Clean up - delete the remote files and remove the bucket:

Remove everything under s3://public.s3tools.org/somewhere/

$ s3cmd del --recursive s3://public.s3tools.org/somewhere/

File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
...

Now try to remove the bucket:

$ s3cmd rb s3://public.s3tools.org

ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty

Ouch, we forgot about s3://public.s3tools.org/somefile.xml. We can force the bucket removal anyway:

$ s3cmd rb --force s3://public.s3tools.org/

WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
File s3://public.s3tools.org/somefile.xml deleted
Bucket 's3://public.s3tools.org/' removed

Hints

The basic usage is as simple as described in the previous section.

You can increase the level of verbosity with the -v option, and if you're really keen to know what the program does under its bonnet, run it with -d to see all 'debugging' output.

After configuring it with --configure, all available options are written to your ~/.s3cfg file. It's a text file ready to be modified in your favourite text editor.

The transfer commands (put, get, cp, mv, and sync) continue transferring even if an object fails. If a failure occurs, it is reported on stderr and the exit status will be EX_PARTIAL (2). If the --stop-on-error option is specified, or the config option stop_on_error is true, the transfer stops and an appropriate error code is returned.
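
In scripts this exit status can be checked directly; a minimal sketch with placeholder file and bucket names:

s3cmd put local-file.txt s3://your-bucket-name/ || {
    status=$?
    echo "transfer failed or was partial (exit status $status)" >&2
    exit $status
}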

For more information refer to the S3cmd / S3tools homepage.

License

Copyright (C) 2007-2023 TGRMN Software (https://www.tgrmn.com), Sodria SAS (https://www.sodria.com/) and contributors

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
