Eucalyptus Object Storage (S3) via ceph-radosgw


Eucalyptus object storage has typically been managed by our Walrus software component. Recently, support for RiakCS was added, which also provides an S3 interface.

In this blog entry we’ll explore the settings needed to configure ceph-radosgw with Eucalyptus, and what’s needed to configure an S3 client to talk to Eucalyptus.

Tested versions: Eucalyptus 4.2.1 (general release) interfacing against Ceph 0.94.5-9.el7cp.

Ceph storage servers \
                      <--> Ceph radosgw <--> Eucalyptus OSG <--> S3 client
Ceph monitor servers /

Prior to this installation, follow the setup steps detailed in:

Ceph configuration

As the radosgw is REST based, it’s relatively easy to set up either active or passive load balancing. In this particular setup we’ll be using round-robin DNS across two radosgw servers on the backend. We won’t be going into how to interface the Ceph gateways with the Ceph cluster, as that’s out of scope.
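For the round-robin DNS piece, one option is simply publishing two A records for the same name. A hypothetical BIND zone fragment (the name and addresses are placeholders, not taken from this deployment) might look like:

```text
; "cephgw" resolves to either gateway host; resolvers rotate
; through the records, giving crude round-robin load balancing.
cephgw    IN  A    192.0.2.11
cephgw    IN  A    192.0.2.12
```

An internal load balancer in front of the gateways would achieve the same thing with health checking added.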

1. Configure each gateway server with the rados gateway configuration in /etc/ceph/ceph.conf:



[global]
fsid = aaaa00a0-a0aa-0a0a-a000-00000a0000a0
mon_initial_members = host1, host2, host3
mon_host =,,
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
max_open_files = 150000

[client.radosgw.gateway1]
rgw print continue = false
rgw_frontends = "civetweb port=80"

[client.radosgw.gateway2]
rgw print continue = false
rgw_frontends = "civetweb port=80"

2. Start the rados gateway and check it’s responding on port 80:

service ceph-radosgw restart

curl http://localhost:80

This leaves us with two servers running civetweb on port 80, which in turn communicate with the Ceph cluster using librados.
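An unauthenticated GET / against radosgw normally returns an S3 ListAllMyBucketsResult XML document for the anonymous user, which makes a handy health check to script against both gateways. A minimal sketch (the `looks_like_radosgw` helper is my own naming, not part of any tool mentioned here):

```python
import xml.etree.ElementTree as ET

def looks_like_radosgw(body):
    """Heuristic health check: radosgw answers an anonymous GET /
    with an S3 ListAllMyBucketsResult XML document."""
    try:
        root = ET.fromstring(body)
    except ET.ParseError:
        return False
    # The root tag comes back namespace-qualified, so match on the suffix.
    return root.tag.endswith("ListAllMyBucketsResult")

# Feed it the body returned by `curl http://localhost:80` on each gateway.
```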

For Eucalyptus to authenticate to the rados gateways, we need to create a set of credentials.

3. To do this run the following and take note of the access and secret keys:

radosgw-admin user create --uid=eucalyptus --display-name="Eucalyptus"

radosgw-admin user info --uid=eucalyptus | grep 'access\|secret'
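radosgw-admin prints the user record as JSON, so the keys can also be pulled out programmatically rather than grepped. A small sketch, where the JSON below is a trimmed, made-up sample of the real `user info` output:

```python
import json

# Trimmed, made-up sample of `radosgw-admin user info --uid=eucalyptus` output.
user_info_json = '''
{
  "user_id": "eucalyptus",
  "display_name": "Eucalyptus",
  "keys": [
    {"user": "eucalyptus",
     "access_key": "EXAMPLEKEY123",
     "secret_key": "examplesecret456"}
  ]
}
'''

def extract_s3_keys(raw):
    """Return (access_key, secret_key) from radosgw-admin user JSON."""
    info = json.loads(raw)
    first = info["keys"][0]
    return first["access_key"], first["secret_key"]

access, secret = extract_s3_keys(user_info_json)
print(access, secret)
```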

Eucalyptus configuration

If you’re currently running Walrus as your walrusbackend, please migrate all data off of it first. You’ll also need to run euserv-deregister-service <walrusname> and delete /var/lib/eucalyptus/bukkits to clear storage space.

Once this is completed, please run ‘euserv-register-service -t objectstorage -h <UFS_IP>’.

You can run ‘euserv-describe-services --filter service-type=objectstorage’, which should show the service as broken at this point, since no provider is configured yet.

euctl settings:

euctl objectstorage.providerclient=riakcs

euctl objectstorage.s3provider.s3endpoint=http://cephgw:80

euctl objectstorage.s3provider.s3accesskey=<RADOSGW_ACCESS_KEY>

euctl objectstorage.s3provider.s3secretkey=<RADOSGW_SECRET_KEY>

euctl objectstorage.s3provider.s3usehttps=false

Please check that your object storage gateways can resolve cephgw to one of the ceph gateways you’ve set up. You can do this either with round-robin DNS or with an internal load balancer (look into eulb-create-lb).

Finally, if you run ‘euserv-describe-services --filter service-type=objectstorage’ you should see the objectstorage services showing as enabled.

Client configuration

At this point you’ll want to test out the s3 functionality and make sure you can create buckets and files.

Before starting, please ensure you can connect to the objectstorage provider host IP on port 8773. If you’ve set up your resolv.conf correctly, the resolver will contact one of the UFS servers to get an answer for the s3 prefix.

e.g. curl
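That reachability check can also be scripted. A small sketch (the hostname in the example call is a placeholder; substitute your own UFS address):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("ufs.mycloud.example.com", 8773)
```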

1. The python-boto library is needed to test out the functionality from Python:

yum install -y python-boto

2. Create a file with the following script, populating the values to match your environment:

#!/bin/env python
import boto
import boto.s3.connection

# Fill these in with the credentials for your Eucalyptus account
access_key = 'EUCA_ACCESS_KEY'
secret_key = 'EUCA_SECRET_KEY'

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 's3.mycloud.example.com',  # your UFS/OSG endpoint
    port = 8773,
    is_secure = False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

print "* Creating bucket"
bucket = conn.create_bucket('my-new-bucket')

print "* Creating files in bucket"
for x in range(1, 20):
    key = bucket.new_key('hello_' + str(x) + '.txt')
    key.set_contents_from_string('Hello World!')

print "* Displaying buckets and files"
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )
    for file_key in bucket.list():
        print "\t" + file_key.name
        print file_key.get_contents_as_string()

print "* Deleting all files in bucket"
bucket = conn.get_bucket('my-new-bucket')
for key in bucket.list():
    print "Deleting - " + key.name.encode('utf-8')
    key.delete()

print "* Deleting bucket"
conn.delete_bucket('my-new-bucket')

If this runs as expected, you may also want to install s3cmd or configure the awscli to use the S3 functionality.
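If you go the s3cmd route, a hypothetical ~/.s3cfg fragment might look like the following (the endpoint and keys are placeholders for your own deployment; note that s3cmd talks to the Eucalyptus OSG on port 8773, not to radosgw directly):

```text
access_key = <EUCA_ACCESS_KEY>
secret_key = <EUCA_SECRET_KEY>
host_base = s3.mycloud.example.com:8773
host_bucket = %(bucket)s.s3.mycloud.example.com:8773
use_https = False
```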

Now would be a good time to update your esi image using ‘esi-install-image --install-default --region’ and install some images using ‘bash <(curl -Ls’.

If you’ve any questions please get in touch!

