Chief Software Engineer at a manufacturing company with 5,001-10,000 employees
Real User
Top 10
Nov 9, 2023
Red Hat Ceph Storage is difficult to maintain. We use CLI tools for maintenance, and the concepts involved are challenging. It is also difficult to expand the product because of rebalancing errors, and it takes some time to rebalance the storage after a server failure. They could improve the speed of that process.
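For context, rebalance progress can be watched programmatically from the same data the CLI exposes. Below is a minimal sketch using the python-rados bindings that ship with Ceph; it assumes the default config and keyring paths, and the pgmap keys it reads only appear while recovery or backfill is active. Rebalance speed itself is usually throttled by options such as osd_max_backfills and osd_recovery_max_active.

```python
import json
import rados

# Connect with the usual defaults; both paths are assumptions for this sketch.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Ask the monitors for the same data `ceph status` prints, as JSON.
    ret, out, err = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json"}), b'')
    if ret != 0:
        raise RuntimeError(err)
    pgmap = json.loads(out).get("pgmap", {})
    # These keys are only present while recovery/backfill is running.
    print("degraded objects:", pgmap.get("degraded_objects", 0))
    print("recovery rate (obj/s):", pgmap.get("recovering_objects_per_sec", 0))
finally:
    cluster.shutdown()
```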
Software Engineer at a retailer with 10,001+ employees
Real User
Top 5
Jan 12, 2023
The operational overhead is higher compared to Azure because we own the hardware. It would be nice to have a notification feature whenever an important action is completed.
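As far as I know there is no general "action completed" notifier outside the dashboard's Prometheus/Alertmanager integration, but a small polling script makes a workable stopgap. In the sketch below, the helper name wait_for_health_ok is hypothetical; it uses the python-rados bindings to poll cluster health and returns once the monitors report HEALTH_OK, for example after a rebalance finishes. Swap the final print for whatever notification channel you already use.

```python
import json
import time
import rados

def wait_for_health_ok(conffile='/etc/ceph/ceph.conf', interval=30):
    """Poll cluster health until the monitors report HEALTH_OK."""
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        while True:
            ret, out, _ = cluster.mon_command(
                json.dumps({"prefix": "health", "format": "json"}), b'')
            if ret == 0 and json.loads(out).get("status") == "HEALTH_OK":
                return
            time.sleep(interval)
    finally:
        cluster.shutdown()

wait_for_health_ok()
print("Cluster is healthy again")  # replace with your own notifier
```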
UNIX Security Consultant at a retailer with 1,001-5,000 employees
Real User
Top 20
Jul 26, 2022
Ceph Storage lacks RDMA support for inter-OSD communication. That is a huge loss in terms of performance. It is also very intensive on the backend network and consumes a lot of its resources. I'd like to see a higher-performing CephFS, which is the file system part of Ceph.
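For context, upstream Ceph does carry experimental RDMA support in its async messenger, though as far as I know it is not a supported Red Hat Ceph Storage feature. A sketch of the upstream ceph.conf settings, with the NIC name as a placeholder:

```ini
# Experimental upstream settings only; shown as a sketch, not a supported
# Red Hat Ceph Storage configuration.
[global]
# Use the RDMA-capable async messenger on the cluster (inter-OSD) network
# only; the public network stays on plain TCP.
ms_cluster_type = async+rdma
# RDMA NIC to bind; the device name is a placeholder (see `ibv_devices`).
ms_async_rdma_device_name = mlx5_0
```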
It took me a long time to get the storage drivers for communication with Kubernetes up and running. The documentation could be improved; it is lacking information. I'm not sure if this is a Ceph problem or if Ceph should address it, but it was something I ran into. Additionally, there is a performance issue I am looking into, but overall I am satisfied with the performance.
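The "storage drivers" in question are typically the ceph-csi drivers. As a rough illustration of what a working setup involves, here is a minimal RBD StorageClass sketch modeled on the ceph-csi documentation; the clusterID, pool, secret names, and namespace are all placeholders for a specific deployment.

```yaml
# Minimal ceph-csi RBD StorageClass sketch; clusterID, pool, secret names,
# and namespace are placeholders for a specific deployment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <ceph-cluster-fsid>
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
```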
Infrastructure Architect & CEO at Tirzok Private Limited
Real User
Feb 17, 2022
An area for improvement is that it's pretty difficult to manage synchronous replication across multiple regions. I also don't like the containerized deployment method used by cephadm in recent releases. In the next release, I'd like to see reports for security and performance.
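Part of that management burden is that replication state is checked through separate per-subsystem CLI tools rather than in one place. For example (pool names are placeholders):

```sh
# Replication state lives in separate per-subsystem tools; the pool name
# below is a placeholder.
radosgw-admin sync status                  # RGW multisite: per-zone sync state
rbd mirror pool status --verbose rbdpool   # RBD mirroring: per-image health
```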
Data Storage Specialist at a tech services company with 1,001-5,000 employees
MSP
Dec 12, 2019
The management features are pretty good, but they still have room for improvement. The solution needs to offer support for Fibre Channel as a protocol.
Some documentation is very hard to find; it needs to be readily available.
We have encountered slight integration issues.
The storage capacity of the solution can be improved.
What could be improved in Red Hat Ceph Storage is its user interface or GUI.