This is my first post on ansible-devel so, first of all, I would like to
say 'hello everybody!' Great project!
A few days ago I had a look at the code to check Ansible's support for
Ceph, a distributed object store and file system with a focus on
performance and scalability [1], through its S3 API. I realized Ansible
supports S3 on AWS, Walrus and FakeS3, but you run into issues when you
try to use Ansible with Ceph. I would like to contribute some code to
fix those issues if possible.
I have a first PR waiting for review on ansible-modules-extras [2], but
I would like to get some feedback on the best way to go with the module
at ansible-modules-core/cloud/amazon/s3.py.
I implemented a patch at [3] that adds a boolean flag called 'ceph' plus
some bits fixing the connection issues. It doesn't look intrusive with
respect to the current s3.py code, and running the documented use cases
against Ceph works fine.
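
For reference, the connection part of that patch boils down to pointing
boto at the RGW endpoint with the ordinary (path-style) calling format
instead of the AWS defaults. A minimal sketch with boto 2, where the
endpoint, port and credentials are placeholders rather than anything
taken from the patch:

import boto
import boto.s3.connection

# Sketch only: talk to a Ceph RGW endpoint instead of s3.amazonaws.com.
# Host, port and credentials below are placeholders.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    port=7480,
    is_secure=False,
    # RGW generally expects path-style addressing
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

for bucket in conn.get_all_buckets():
    print(bucket.name)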
Should I open a PR for [3] to start the review process?
With both patches in place I guess we would have initial Ceph support
in Ansible. The main use cases, working with buckets and keys, are
covered now.
No, the code in the s3 and s3_bucket modules is not common beyond the expected part where it connects to S3 [1].
I would say the code in s3 is in good shape.
In the case of s3_bucket, the new code cannot be shared with FakeS3, Walrus or AWS in a clean and extensible way that supports the different features across S3 implementations. If you have a look at create_bucket [2], it does a lot more than creating a bucket: it also handles versioning, policy, request payment, etc. Ceph is still catching up on some of these features; request payment, for example, will ship with the next Ceph version.
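To illustrate why that method is hard to share as-is, the AWS path relies on per-bucket boto calls roughly like the following (a rough sketch, not the module's actual code; the policy document is a placeholder), and several of them have no Ceph counterpart yet:

import json
import boto

conn = boto.connect_s3()                      # assumes AWS credentials in the environment
bucket = conn.create_bucket('example-bucket') # plain creation, which Ceph handles fine

# The extras are where implementations diverge:
bucket.configure_versioning(True)                                          # versioning
bucket.set_policy(json.dumps({'Version': '2012-10-17', 'Statement': []}))  # policy (placeholder document)
bucket.set_request_payment(payment='Requester')                            # request payment, not in Ceph yet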
I took the approach of splitting create_bucket and destroy_bucket into several methods. Ideally, those methods should dispatch to simple, specific methods supporting Walrus, FakeS3, AWS and Ceph. This way, every implementation can progress independently and converge in the future if possible. Currently, the Ceph-specific methods only create and delete a bucket.
My proposal would be to go with the code as it is in the short term, and later decouple the FakeS3 and Walrus bucket creation/deletion in the same way as Ceph. We would use create_bucket and destroy_bucket as facade methods that dispatch to FakeS3, Walrus, AWS and Ceph specific methods, with AWS as the default path; any implementation fully compatible with AWS would end up dispatching to the AWS implementation.
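To make the facade idea concrete, here is a sketch of the dispatch I have in mind. The helper names and the 'flavour' argument are hypothetical, not what the patch currently does:

# create_bucket as a facade dispatching to implementation-specific helpers.
def create_bucket(module, connection, flavour='aws', **kwargs):
    if flavour == 'ceph':
        return _create_bucket_ceph(module, connection, **kwargs)
    if flavour in ('fakes3', 'walrus'):
        # still the AWS path for now; to be decoupled later like ceph
        return _create_bucket_aws(module, connection, **kwargs)
    # default path; anything fully AWS-compatible ends up here
    return _create_bucket_aws(module, connection, **kwargs)

def _create_bucket_ceph(module, connection, name=None, **kwargs):
    # Ceph path: plain bucket creation only for now
    return connection.create_bucket(name)

def _create_bucket_aws(module, connection, name=None, **kwargs):
    bucket = connection.create_bucket(name)
    # versioning, policy, request payment, etc. stay in this path
    return bucket

destroy_bucket would follow the same pattern.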
I think this approach would let us include and test code more easily.