Systems Engineer at a marketing services firm with 51-200 employees
Apr 22, 2018
Ceph has simplified my storage integration. I no longer need two or three storage systems, as Ceph can support all my storage needs. I no longer need OpenStack Swift for REST object storage access, I no longer need NFS or GlusterFS for filesystem sharing, and most importantly, I no longer need LVM or DRBD for my virtual machines in OpenStack.
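On the object-storage point: Ceph's RADOS Gateway exposes an S3-compatible REST API, which is why a separate Swift deployment becomes unnecessary. A minimal sketch of uploading and reading back an object through that gateway, assuming a hypothetical RGW endpoint and placeholder credentials:

    import boto3

    # Talk to the Ceph RADOS Gateway through its S3-compatible API.
    # The endpoint URL and credentials are placeholders for illustration;
    # substitute the values from your own RGW deployment.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.local:8080",  # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",                # placeholder credential
        aws_secret_access_key="SECRET_KEY",            # placeholder credential
    )

    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"stored in Ceph")
    print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())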
What I found most valuable in Red Hat Ceph Storage is the integration: if you are running a solution that consists purely of Red Hat products, this is where the integration benefits come in. In particular, Red Hat Ceph Storage becomes a single solution for managing the entire environment, whether the containers, the infrastructure, or the worker nodes, because it all comes from a single place.
Infrastructure Architect & CEO at Tirzok Private Limited
Feb 17, 2022
It's a very performance-intensive, brilliant storage system, and I always recommend it to customers based on its benefits, performance, and scalability.
Systems Engineer at a marketing services firm with 51-200 employees
Apr 22, 2018
I have encountered stability issues when the replication factor was not 3, which is the default and recommended value. Go below 3 and problems will arise.
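A quick way to audit this is to ask the cluster for each pool's replication size through the ceph CLI. A minimal sketch, assuming the ceph CLI is installed and the node has admin access to the cluster:

    import subprocess

    # Warn about any pool whose replication size is below the
    # recommended value of 3.
    pools = subprocess.run(
        ["ceph", "osd", "pool", "ls"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    for pool in pools:
        out = subprocess.run(
            ["ceph", "osd", "pool", "get", pool, "size"],
            capture_output=True, text=True, check=True,
        ).stdout  # prints e.g. "size: 3"
        size = int(out.split(":")[1])
        if size < 3:
            print(f"WARNING: pool {pool} has replication size {size} (< 3)")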
Senior Information Technology Specialist at Logicalis
Apr 11, 2018
In the deployment step, we need to create some config files to add Ceph functions to the OpenStack modules (Nova, Cinder, Glance). It would be useful to have a tool that validates the format of the data in those files before deployment, rather than generating a deployment that fails.
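A validator along those lines does not need to be elaborate: the Nova, Cinder, and Glance config files are INI format, so even a short script can catch unparseable syntax and missing Ceph options before a deployment is generated. A minimal sketch; the file paths and required keys below are illustrative and vary by OpenStack release:

    import configparser
    import sys

    # Confirm each OpenStack config file parses as INI and contains the
    # Ceph/RBD options we expect. Paths, section names, and option names
    # here are assumptions; adjust them for your release and backend names.
    REQUIRED = {
        "/etc/cinder/cinder.conf": [("ceph", "rbd_pool"), ("ceph", "rbd_user")],
        "/etc/glance/glance-api.conf": [("glance_store", "rbd_store_pool")],
    }

    ok = True
    for path, keys in REQUIRED.items():
        cfg = configparser.ConfigParser()
        try:
            if not cfg.read(path):
                raise FileNotFoundError(path)
            for section, option in keys:
                if not cfg.has_option(section, option):
                    print(f"{path}: missing [{section}] {option}")
                    ok = False
        except (configparser.Error, FileNotFoundError) as err:
            print(f"{path}: cannot parse ({err})")
            ok = False

    sys.exit(0 if ok else 1)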
Enterprise Solutions Architect at a tech services company with 1,001-5,000 employees
Apr 11, 2018
Ceph is not a mature product at this time. The guides are misleading and incomplete, and you will meet all kinds of bugs and errors when trying to install the system for the first time. It requires very experienced personnel to support the system, keep it in working condition, and install all the necessary packages.
It took me a long time to get the storage drivers for communication with Kubernetes up and running. The documentation could improve; it is lacking information. I'm not sure if this is a Ceph problem or whether Ceph should address it, but it was something I ran into. Additionally, there is a performance issue I am looking into, but overall I am satisfied with the performance.
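For anyone hitting the same Kubernetes issue: the usual route today is the ceph-csi driver, and once it is deployed you can confirm its StorageClass is registered using the official Kubernetes Python client. A minimal sketch, assuming ceph-csi's standard RBD provisioner name and a reachable kubeconfig:

    from kubernetes import client, config

    # List StorageClasses and flag the ones backed by the ceph-csi RBD
    # driver. Assumes ceph-csi is deployed and kubeconfig is available.
    config.load_kube_config()
    for sc in client.StorageV1Api().list_storage_class().items:
        if sc.provisioner == "rbd.csi.ceph.com":
            print(f"Found Ceph RBD StorageClass: {sc.metadata.name}")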