My advice would be to thoroughly review the documentation and understand whether Spring Cloud Data Flow is the right solution for your application. For applications with only one or two microservices, it may not be beneficial. It is more suitable for those with more than ten microservices that need synchronization. I'd rate the solution seven out of ten.
Our experience with Spring Cloud Data Flow has been phenomenal. The best thing I like about it is that you can plug and play any data source, including MongoDB, Elasticsearch, and MySQL. We had a use case where we had to get data from different sources and databases. You just define the configuration, whether it's a MySQL or Mongo database, and you start getting the data automatically. You need not worry about how it handles the failure mechanism. You just define the configuration for your queuing mechanism, be it RabbitMQ or Kafka, and it takes care of things on its own. Earlier, we used to have our own pipeline. Then we explored two options, namely Airflow and Spring Cloud Data Flow. Most of our applications are built around the Java Spring Boot ecosystem, so it was easier for us. Since our applications are based on Java J2EE Spring Boot architecture, we decided to go with SCDF. This made our integration easier than any other pipeline workflow. The documentation also suggested that we'd get a robust platform where we wouldn't face any challenges with respect to scalability. That's the reason our organization chose Spring Cloud Data Flow. Choosing Spring Cloud Data Flow depends on the use case. Suppose you have a big pipeline where you need to churn out data from different sources; in that case, you need different processing layers that filter the data on different conditions, route it here and there, and finally dump it somewhere. Spring Cloud Data Flow can be used when the use case revolves around a processing pipeline with multiple layers and proper failure handling at each layer. If you follow the documentation, Spring Cloud Data Flow is easier than any other platform. While building the pipeline, the area where you feel it is a bit lacking comes up when you have different layers and your data flows through ten different components; the code where you put the business logic is just one part of it.
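The multi-layer pipeline the reviewer describes can be sketched in plain Java as function composition (class and stage names here are illustrative only, not SCDF APIs — a real stream chains independently deployed source | processor | sink apps over Kafka or RabbitMQ):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class PipelineSketch {
    // Each "layer" is a plain Function, mirroring how a stream chains
    // processing stages. Stage names and logic are made up for illustration.
    static final Function<String, String> normalize = s -> s.trim().toLowerCase();
    static final Function<String, String> redactDigits = s -> s.replaceAll("\\d", "*");

    // Compose the layers into one pipeline and apply it to a batch of records.
    static List<String> run(List<String> records) {
        Function<String, String> pipeline = normalize.andThen(redactDigits);
        return records.stream().map(pipeline).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("  Order-42  ", "USER-7 ")));
    }
}
```

In SCDF itself, each stage would be its own Spring Boot app, and the platform would handle the wiring between them.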
The biggest area of concern is when data is passing through ten different layers and a failure happens at a certain layer. You should not be in a position where you are tracking all those aspects and handling them on your own. Spring Cloud Data Flow provides built-in capabilities so that you do not need to reprocess the entire data set if a message gets lost or you cannot process the data; once you bring your system back, you start processing from where you left off. The solution provides good robustness and scalability. We have been using it for years, and I've seen how it works in production in a scalable way. However, I had a few concerns about the documentation while integrating with Kubernetes. Overall, I rate the solution an eight out of ten.
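The resume-from-where-you-left-off behavior the reviewer praises can be illustrated with a minimal offset-commit sketch in plain Java. This is an assumption-laden toy, not SCDF's actual mechanics — in a real deployment the committed offset lives in the middleware (e.g. Kafka consumer offsets), and the failure condition here is simulated:

```java
import java.util.List;

public class ResumableConsumer {
    // Stand-in for an offset store that would really live in Kafka or a DB.
    private int committedOffset = 0;

    // Process records from the last committed offset. On a failure
    // (simulated by an empty record) processing stops, but the offset
    // stays at the last successful record, so a restarted run resumes
    // there instead of reprocessing everything from the beginning.
    public int processFrom(List<String> records) {
        int handled = 0;
        for (int i = committedOffset; i < records.size(); i++) {
            if (records.get(i).isEmpty()) break; // simulated processing failure
            committedOffset = i + 1;             // commit only after success
            handled++;
        }
        return handled;
    }
}
```

The key design point is committing only after successful processing: a crash between layers loses at most the in-flight record, never the whole batch.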
As it is a tool meant for programming, we need to write the code. I won't say it is easy to use, because there are tools like Apache Sling and KSQL that simplify the actual work. Spring Cloud Data Flow is still in its early days, and we use it to process data from Kafka. Around three years ago, it was a good tool, and there weren't many downsides linked to it. For now, I prefer Apache Sling over Spring Cloud Data Flow, and I won't recommend the tool to others. I rate the solution a seven out of ten.
Senior Software Engineer at QBE Regional Insurance
Real User
Top 20
Mar 26, 2024
Spring Cloud Data Flow is a useful product considering that my company had to deal with different providers, most of whom offer cloud-based products. I can't point to any crucial circumstances where the product's integration capabilities were helpful, but the aforementioned details explain the scenario for which I used the solution. I was only involved with the development of the product and not with the data pipeline configuration phase. The use of Spring Cloud Data Flow greatly impacted projects' time to market, since our company's intention was to deploy and ensure that the payment platform integrated with it, which was an easy process. The product's user interface was very intuitive. The tool was deployed in multiple environments, but I am not sure about production. From the time I started the job at my current organization, I saw that we had deployed the tool in multiple environments, and users extensively used the product in the UAT environment, which is one of the most stable environments. There were 20 different methods to test the tool. I wouldn't be able to tell you the production details of the tool, as I was not part of the production deployment, but I can say that it was deployed with the intent of making it available to 10,000 users. Those who plan to use the product should enjoy the flexibility of the solution. I rate the tool a nine out of ten.
The solution requires little maintenance. My advice to others is to follow the documentation. The solution is very well designed, and the developers deliver on their promises. I rate Spring Cloud Data Flow a seven out of ten.
While the deployment is on-premises, the data center is not on-premises; it's in a different geographical location, though it was the client's own data center. We deployed there, installing the SCDF server, then the Skipper server, and everything else, including all the microservices. We used the Pivotal Cloud Foundry (PCF) platform, and for the bank, we deployed on Kubernetes. The Spring Cloud Data Flow server is pretty standard to implement. The year before, it was a new project; now, it is already implemented in many, many projects. I think developers should start using it if they are not using it yet. In the future, there could be some more improvements in the area of the data pipeline ETL process. That said, I'm happy with the Spring Cloud Data Flow server right now. Our biggest takeaway has been to design the pipeline depending on the customer's needs. We cannot just think about everything as developers; sometimes we need to think about what the customer needs instead. Everything needs to be based on the customer's flow, which helps us design a proper data pipeline. The task mechanism is also helpful, since we can run some tasks instead of keeping the application live 24 hours a day. Overall, I'd rate the solution nine out of ten. It's a really good solution and a lot cheaper than much of the infrastructure provided by big companies like Google or Amazon.
Senior Platform Associate L2 at a tech services company with 10,001+ employees
Real User
Oct 19, 2020
We used this product with Kubernetes, which had recently been introduced, and we liked it. It was very good compared to Maven. We did try it with Maven; however, the server took 15 or 16 minutes to start. This is when we switched to Kubernetes, and it was very good. They provide a lot of different configurations and environment types. We use Kafka on Kubernetes as well; the configuration was provided by SCDF. I would rate this solution a seven out of ten.
I would rate this product (or set of technologies) a solid eight out of 10. What keeps me from giving it a full 10 is that the graphical user interface portion of the toolset still needs some polishing and performs somewhat slowly. However, I have not had an opportunity to run this toolset on higher-performing machines, and have been limited to simply running it within a set of virtual machines on my own workstation.
Spring Cloud Data Flow is a toolkit for building data integration and real-time data processing pipelines. Pipelines consist of Spring Boot apps, built using the Spring Cloud Stream or Spring Cloud Task microservice frameworks. This makes Spring Cloud Data Flow suitable for a range of data processing use cases, from import/export to event streaming and predictive analytics. Use Spring Cloud Data Flow to connect your enterprise to the Internet of Anything: mobile devices, sensors, wearables,...
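To make the "pipelines consist of Spring Boot apps" model concrete, here is the shape of a processor stage as plain Java. In a real Spring Cloud Stream application this Function would be exposed as a @Bean and bound to Kafka or RabbitMQ by the framework; this standalone sketch (names mine) omits that wiring so only the core idea remains:

```java
import java.util.function.Function;

public class UppercaseProcessor {
    // A processor stage is conceptually just a function from input payload
    // to output payload; the messaging framework supplies everything else
    // (binding, serialization, retries) around it.
    public static Function<String, String> uppercase() {
        return payload -> payload.toUpperCase();
    }
}
```

Because each stage is this small, a pipeline is assembled by composing and deploying such apps rather than writing plumbing code.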