I recently had a project with a lot of images, and since I use a DigitalOcean droplet and they recently launched a service similar to Amazon S3 with a 2-month free trial, I wanted to give it a try.
For this project I needed to upload all the existing images to DO Spaces at once. After a quick search I found out that the [Spaces API aims to be interoperable with Amazon's AWS S3 API](https://developers.digitalocean.com/documentation/spaces/), which is just awesome.
I ended up using s3cmd, and it worked on my first try. Here is how I set up the tool.
Install s3cmd
There are many ways to install it; check the official website to pick the one that suits you. Here is mine:
After SSH-ing into the server, I cloned the repository and ran the installer:

```bash
git clone https://github.com/s3tools/s3cmd.git
cd s3cmd
sudo python setup.py install
```
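To confirm the install worked, you can print the version (the exact output depends on the commit you cloned):

```bash
# Quick check that s3cmd is installed and on the PATH
s3cmd --version
```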
Create our DO Space
Next we need to create the bucket and generate the keys. Go to the Spaces product page (affiliate link) and create your Space (you can also do this from the command line, as sketched after the list below).
Next go to API page https://cloud.digitalocean.com/settings/api and generate new key under Spaces access keys section.
So now we have:
- the bucket name, or Space name (unikapps in my case)
- the region (nyc3 in my case)
- an access key
- an access secret
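As a side note, once s3cmd is configured (next section) you can also create a Space straight from the terminal instead of the control panel; a quick sketch with my bucket name:

```bash
# Create the Space (bucket) from the CLI
s3cmd mb s3://unikapps
```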
Configure s3cmd
Next run `s3cmd --configure` and follow the prompts:
> Access Key: [Your Space Access Key]
> Secret Key: [Your Space Secret Key]
> Default Region: nyc3, or your chosen region
> S3 Endpoint: nyc3.digitaloceanspaces.com, where nyc3 is the region
> DNS-style for accessing a bucket: unikapps.nyc3.digitaloceanspaces.com, where unikapps is the Space name and nyc3 is the region
Just hit Enter for the remaining options. That's all I needed; I don't worry much about security since the files I'm hosting are intended to be public.
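For reference, the answers get written to `~/.s3cfg`. A rough sketch of the relevant lines, assuming my `nyc3` region and Space name (keys redacted):

```ini
# ~/.s3cfg (excerpt), values produced by s3cmd --configure
access_key = YOUR_SPACES_ACCESS_KEY
secret_key = YOUR_SPACES_SECRET_KEY
host_base = nyc3.digitaloceanspaces.com
host_bucket = unikapps.nyc3.digitaloceanspaces.com
```

A quick sanity check: `s3cmd ls` should now list the Space as `s3://unikapps`.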
And finally we get to the fun part:
```bash
s3cmd sync --acl-public --skip-existing --verbose /full/path/to/directory s3://unikapps/
```
Let me break it down:
- `s3cmd` is the tool itself.
- `sync`: I used sync instead of put so that existing files are skipped when the command is restarted.
- `--acl-public` makes the files public (the default ACL is private).
- `--skip-existing` skips files that already exist at the destination.
- `--verbose` (or `-v`) is optional; I use supervisor, so I wanted to log what's happening behind the scenes.
For the full list of options, visit the official website: http://s3tools.org/usage.
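One tip before running it for real: s3cmd has a `--dry-run` flag, so you can preview what sync would transfer without uploading anything:

```bash
# Preview the transfer list; nothing is actually uploaded
s3cmd sync --dry-run --acl-public --skip-existing /full/path/to/directory s3://unikapps/
```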
That's all.
If you're familiar with supervisor, this is my configuration:
```ini
# /etc/supervisor/conf.d/s3cmd-worker.conf
[program:s3cmd-worker]
process_name=%(program_name)s_%(process_num)02d
command=s3cmd sync --acl-public --skip-existing --verbose /full/path/to/directory s3://unikapps/
autostart=true
autorestart=true
user=root
numprocs=1
redirect_stderr=true
stdout_logfile=/path/to/logs/s3cmd.log
```
And start the worker:
```bash
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start s3cmd-worker:*
```
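Since the `--verbose` output is redirected to the log file, you can watch the uploads from another terminal (using the `stdout_logfile` path from the config above):

```bash
# Follow the worker's log as files are synced
tail -f /path/to/logs/s3cmd.log
```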