CloudStack S3

CloudStack S3 configuration Tech Preview Sebastien Goasguen August 23rd

description

A tutorial on how to set up CloudStack to expose an S3 interface. S3 is the Amazon Web Services Simple Storage Service: it is used to create containers (buckets) on a backend storage system and to store objects in them. S3 is one of the most successful AWS services (if not the most successful), scaling to billions of objects and serving millions of users. In this talk we show how to enable an S3 service with the CloudStack management server. This is a tech preview to show the compatibility between CloudStack and AWS services. CloudStack does not implement a distributed data store behind this S3-compatible service; instead it uses a traditional file system such as NFS to store the objects. This has the advantage of giving users an S3-compatible interface to their CloudStack-based cloud. In future Apache CloudStack releases a true S3 service will be available via storage systems such as Riak CS, GlusterFS and Ceph.

Transcript of CloudStack S3

Page 1: CloudStack S3

CloudStack S3 configuration Tech Preview

Sebastien Goasguen, August 23rd

Page 2: CloudStack S3

Introduction

• CloudStack provides an S3-compatible interface
• In Apache CloudStack 4.0 (soon out), Cloudbridge is now an integral part of the management server and not a separate server.

• This is not to say that CloudStack provides its own S3 implementation. CloudStack supports object stores (e.g. Swift, GlusterFS…) but is not itself an object store.

Page 3: CloudStack S3

Steps to use S3 in CloudStack

• Specify the mount point where you want to store the objects

• Enable the service via global configuration settings

• Generate API keys for the user(s)
• Register the user and associate a certificate
• Use boto or other S3 clients

Page 4: CloudStack S3

S3 mount point

• S3 properties are set in /path/to/source/awsapi/conf/cloud-bridge.properties or on the mgt server at $CATALINA_HOME/conf/cloud-bridge.properties

host=http://localhost:8080/awsapi

storage.root=/Users/john1/S3-Mount

storage.multipartDir=__multipart__uploads__

bucket.dns=false

serviceEndpoint=localhost:8080

Edit the storage.root to point to a file system mount point on the management server.
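
• As a quick sanity check, the sketch below (an illustration, not part of the official setup) reads the properties file above and verifies that storage.root exists and is writable by the management server:

# Sketch: parse cloud-bridge.properties and check the storage.root mount point.
# The $CATALINA_HOME location is taken from the slide above; adjust to your install.
import os

propfile = os.path.join(os.environ.get('CATALINA_HOME', '.'), 'conf', 'cloud-bridge.properties')
props = {}
with open(propfile) as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith('#') and '=' in line:
            key, value = line.split('=', 1)
            props[key.strip()] = value.strip()

root = props.get('storage.root')
print('storage.root = %s' % root)
print('exists: %s, writable: %s' % (os.path.isdir(root), os.access(root, os.W_OK)))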

Page 5: CloudStack S3

Enabling S3

• Via the GUI

• Via an API call on the integration API port 8096:

http://localhost:8096/client/api?command=updateConfiguration&name=enable.s3.api&value=true
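
• The same call can be scripted; a minimal Python 2 sketch using only the standard library (this assumes the unauthenticated integration port is open, as on DevCloud):

# Sketch: enable the S3 API over the unauthenticated integration port (8096).
import urllib

query = urllib.urlencode({'command': 'updateConfiguration',
                          'name': 'enable.s3.api',
                          'value': 'true'})
response = urllib.urlopen('http://localhost:8096/client/api?' + query).read()
print(response)  # XML response from the management server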

Page 6: CloudStack S3

Enabling S3

• Via an authenticated API call on port 8080 (e.g. using a Python client)

apiurl = 'http://localhost:8080/client/api'

cloudstack = CloudStack.Client(apiurl,apikey,secretkey)

cloudstack.updateConfiguration({'name':'enable.s3.api','value':'true'})
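
• The client library above is just a convenience; under the hood an authenticated call is a plain HTTP request carrying an HMAC-SHA1 signature. A rough Python 2 sketch of the documented CloudStack signing scheme (the key values are placeholders):

# Sketch: build and send a signed CloudStack API request by hand (Python 2).
import base64, hashlib, hmac, urllib

apiurl = 'http://localhost:8080/client/api'
apikey = '<your api key>'        # placeholder
secretkey = '<your secret key>'  # placeholder

params = {'command': 'updateConfiguration',
          'name': 'enable.s3.api',
          'value': 'true',
          'apikey': apikey,
          'response': 'json'}

# Sort the parameters, URL-encode the values, lowercase the whole string...
tosign = '&'.join('%s=%s' % (k, urllib.quote_plus(params[k])) for k in sorted(params)).lower()
# ...then sign it with HMAC-SHA1 and base64-encode the digest.
signature = base64.b64encode(hmac.new(secretkey, tosign, hashlib.sha1).digest())

url = apiurl + '?' + urllib.urlencode(params) + '&signature=' + urllib.quote_plus(signature)
print(urllib.urlopen(url).read())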

Page 7: CloudStack S3

Generate Keys

• Via the GUI

Page 8: CloudStack S3

Generate Keys

• Via the API:

http://localhost:8096/client/api?command=registerUserKeys&id=<id of the user>
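
• Scripted over the integration port this is two requests: list the users to find the id, then register keys for it (a sketch; the id is a placeholder you read out of the listUsers XML):

# Sketch: find a user id and generate API keys for it over port 8096.
import urllib

base = 'http://localhost:8096/client/api?'
print(urllib.urlopen(base + 'command=listUsers').read())   # pick the user id from the XML
user_id = '<id of the user>'                                # placeholder
print(urllib.urlopen(base + urllib.urlencode({'command': 'registerUserKeys',
                                              'id': user_id})).read())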

Page 9: CloudStack S3

Register the user

• Get the script from the source at /path/to/source/awsapi-setup/setup/cloudstack-aws-api-register

cloud-bridge-register --apikey=<User's CloudStack API key> --secretkey=<User's CloudStack Secret key> --cert=</path/to/cert.pem> --url=http://<cloudstack-server-ip>:8080/awsapi

Page 10: CloudStack S3

S3 Boto example 1/4

• Import the boto S3 modules:

>>> from boto.s3.key import Key

>>> from boto.s3.connection import S3Connection

>>> from boto.s3.connection import OrdinaryCallingFormat

• Set your API keys, calling format and create the connection to the S3 endpoint:

>>> apikey='ChOw-pwdcCFy6fpeyv6kUaR0NnhzmG3tE7HLN2z3OB_s-ogF5HjZtN4rnzKnq2UjtnHeg_RjeDgdDAPyLA5gOw'
>>> secretkey='IMY8R7CJQiSGFk4cHwfXXN3DUFXz07cCiU80eM3MCmfLs7kusgyOfm0g9qzXRXhoAPCOllGt637cWH-IRxXc3w'

>>> cf=OrdinaryCallingFormat()

>>> conn=S3Connection(aws_access_key_id=apikey,aws_secret_access_key=secretkey,is_secure=False,host='localhost',port=8080,calling_format=cf,path='/awsapi/rest/AmazonS3')

Page 11: CloudStack S3

S3 boto example 2/4

• Note the path of the connection: /awsapi/rest/AmazonS3. This is not consistent with the EC2 endpoint and will probably be fixed soon; it is also not consistent with the information in the configuration file. That's why it's a Tech Preview.

• Help welcome !!!

Page 12: CloudStack S3

S3 Boto example 3/4

• Once you have the connection, start by creating a bucket, getting a key, and storing a value for that key in the bucket.

>>> conn.create_bucket('test')

<Bucket: test>

>>> b=conn.get_bucket('test')

>>> k=Key(b)

>>> k.set_contents_from_string('This is a test')

>>> k.get_contents_as_string()

'This is a test'
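
• To check what is now in the bucket you can list its keys, continuing the same boto session (a small addition to the slide, not in the original demo):

>>> for key in b.list():
...     print key.name, key.size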

Page 13: CloudStack S3

S3 boto example 4/4

• Same thing with a file:

>>> conn.create_bucket('cloud')

<Bucket: cloud>

>>> b=conn.get_bucket('cloud')

>>> k=Key(b)

>>> k.set_contents_from_filename('/Users/runseb/Desktop/code/s3cs.py')

>>> k.get_contents_to_filename('/Users/runseb/Desktop/code/foobar')

>>> conn.get_all_buckets()

[<Bucket: test>, <Bucket: cloud>]
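
• To clean up after the demo, boto can also delete keys and buckets (a bucket must be empty before it can be deleted); for example, still in the same session:

>>> for key in b.list():          # b is the 'cloud' bucket from above
...     b.delete_key(key)
>>> conn.delete_bucket('cloud')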

Page 14: CloudStack S3

Example of S3 database tables

• The cloudbridge database on the mgt server contains information about the registered users:

mysql> select * from usercredentials;
| ID | AccessKey | SecretKey | CertUniqueId |
| 1 | ChOw-pwdcCFy6fpeyv6kUaR0NnhzmG3tE7HLN2z3OB_s-ogF5HjZtN4rnzKnq2UjtnHeg_RjeDgdDAPyLA5gOw | IMY8R7CJQiSGFk4cHwfXXN3DUFXz07cCiU80eM3MCmfLs7kusgyOfm0g9qzXRXhoAPCOllGt637cWH-IRxXc3w | CN=AWS Limited-Assurance CA, OU=AWS, O=Amazon.com, C=US, serial=570614354026 |

• As well as the buckets (snippet cut):

mysql> select * from sbucket;
| ID | Name | OwnerCanonicalID | SHostID | CreateTime |
| 1 | test | ChOw-pwdcCFy6fpeyv6kUaR0NnhzmG3tE7HLN2z… | … 23:42:21 | |
| 2 | cloud | ChOw-pwdcCFy6fpeyv6kUaR0NnhzmG3tE7HLN2z3OB_s-ogF5HjZtN4rnzKnq2UjtnHeg_RjeDgdDAPyLA5gOw | 2 | 2012-08-23 23:42:29 | 0 |

Page 15: CloudStack S3

Mount Point

• The mount point now contains a flat directory structure with two buckets, and in each bucket a file containing the value for that key

root@devcloud:/tmp/s3mount# ls -l

total 8

drwxr-xr-x 2 root root 4096 Aug 23 16:45 cloud

drwxr-xr-x 2 root root 4096 Aug 23 16:47 test

root@devcloud:/tmp/s3mount# cat test/2

This is a test

Page 16: CloudStack S3

Conclusions

• This was all tested with DevCloud
• Join the discussion on the future of the EC2/S3 compatibility of CloudStack

[email protected]
#cloudstack on irc.freenode.net
@CloudStack on Twitter