Ingestion Solutions Head of Data at a financial services firm with 11-50 employees
Consultant
Oct 31, 2022
I've been demonstrating the latest version of Equalum. Usually, customers have a hybrid deployment of the product. In terms of deployment and maintenance of Equalum, customers ask for help from my company. Equalum is not a huge organization and doesn't have vast operations, so you have to take care of every customer. Each customer has an account manager who makes sure that everything works perfectly. Otherwise, you won't get the contract in the end. At this point, I'm rating Equalum as nine out of ten because it's good at what it does. I help Equalum get customers, and I'm a consultant in my company.
I would encourage clients to start testing it. There is a very good way to get pilots set up, and they very quickly show the benefits of Equalum, and this is an area that is very strategic to most large clients' operations. If you're now in batch processing and you need to move to real-time, this is the product that does it, and it does it very fast. Clients that take this area seriously need to test Equalum. Whether they decide in the end to choose it or not is a different story, but it cannot be ignored as a real solution, one that also creates quite good cost savings through its efficiency. Technically, there are other options, but the fact that this one can be used with much less coding knowledge makes a huge difference in the marketplace. In terms of Equalum improving data quality and accuracy, we haven't used it for that in a working environment, but in demos and proofs of concept we can show data quality. That's definitely one of the reasons to use it. We are happy with the product.
Senior Software Engineer at a retailer with 201-500 employees
Real User
Mar 14, 2021
Know your use cases, e.g., will you be doing a lot of micro-batching, database work, or pulling data straight off of Kafka topics? The user interfaces are pretty good for data products. There is nothing amazing about them, but there is nothing that really detracts from them either. We don't do any data testing inside of Equalum. That doesn't mean we couldn't; we just don't at the moment. Eventually, when the new data features using Jupyter Notebooks come out, we will start incorporating those into our data science. The biggest lesson learnt: how to operate a Kafka cluster with Spark and do it well. I would rate it as a nine out of 10. Their customer support is phenomenal. Most companies sell it to you and then disappear; Equalum is very interested in customer feedback.
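To make the micro-batching point concrete, here is a minimal sketch in plain Spark Structured Streaming (not Equalum's own interface; the broker address, topic name, and checkpoint path are invented for the example) of pulling data straight off a Kafka topic in fixed-interval micro-batches:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-microbatch").getOrCreate()

# Kafka delivers key/value as binary; cast to string before use.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "inventory-events")
          .option("startingOffsets", "latest")
          .load()
          .select(col("key").cast("string"), col("value").cast("string")))

# trigger(processingTime=...) is what makes this micro-batching:
# Spark drains the topic in small, fixed-interval batches.
(events.writeStream
       .format("console")
       .trigger(processingTime="30 seconds")
       .option("checkpointLocation", "/tmp/chk/inventory")
       .start()
       .awaitTermination())
```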
Software Engineer Specialist at an energy/utilities company with 1,001-5,000 employees
Real User
Mar 10, 2021
Go for it. It is a good tool. They are growing. Hop in now and take advantage of their pricing. The tool is worth it.

It is a very intuitive tool. You don't have to do much in it; just map things and it works. I can use Equalum's API in other tools. They have implemented MongoDB. Initially, they had Oracle CDC using Oracle LogMiner. Then, they came out with Oracle Binary Log Parser, which was super expensive, the same as Informatica: they were charging for everything, even the PoC. At that time, we were saying, "Equalum, you should have binary reads too." However, when you have the scope of a project for a growing product, you have to prioritize things. They are now coming out with the latest version of Oracle Binary Log Parser, which they installed for us. The next version will be even better.

They collaborate very well with their clients. It is not just Oracle Database features that they are putting in. Pretty much every client gives them requests; they put those on the priority list and keep on growing with them.

They just introduced Oracle Binary Log Parser. Two to three months back, we tested it. It is faster than LogMiner by 30 to 40 percent, which is an improvement time-wise. We have not implemented it yet. I would like to implement it for any new project; I just have to find time to do that. I haven't worked on changing the existing pipelines from LogMiner to Binary Log Parser. I have to work with Equalum on how to redo all of them, or how we can switch over to Binary Log Parser. It is not the highest priority, but if tomorrow I had to do a new project, I would go with Oracle Binary Log Parser. There is a lot of promise in the future, which is something to think about.

We do plan to increase usage. We have a lot of projects coming up. I want to experiment with it in more ways, like with Kafka as a source and as a target, where we can distribute data to multiple applications (see the sketch below). There are quite a few things in our pipeline; it is just finding time to figure out how we can do them. We have not fully explored this tool yet. It has a lot of potential, especially the transformations and workflow creation, alongside very simple data replication to the target. Overall, I would rate Equalum as a nine out of 10.
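For the Kafka-as-source-and-target experiment mentioned above, here is a hedged sketch of what that fan-out could look like in plain Spark Structured Streaming; this is not Equalum's API, and the broker, topic, and checkpoint names are invented:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-fanout").getOrCreate()

# Source: a stream of change events arriving on one Kafka topic.
cdc = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "oracle-cdc-orders")
       .load())

# Target: re-publish to a per-application topic. The Kafka sink only
# needs string/binary key and value columns; each downstream
# application subscribes to its own re-published topic.
(cdc.select(col("key").cast("string"), col("value").cast("string"))
    .writeStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("topic", "orders-for-billing")
    .option("checkpointLocation", "/tmp/chk/fanout")
    .start()
    .awaitTermination())
```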
Director of Enterprise Architecture at a pharma/biotech company with 10,001+ employees
Real User
Mar 9, 2021
Make sure that your use cases for Equalum include change data capture. This is the sweet spot for Equalum and where it will save you time and money. We use the solution’s Oracle Binary Log Parser, which is one of our primary use cases. We have a lot of databases on shared Oracle servers, and configuring the Log Parser on those servers is often brought up as a security issue, because the owner of one database can potentially see the data of another database. Therefore, we have had to make some adjustments to how we organize our Oracle Databases because of it. However, I think this would be an issue with any tool using this methodology. I would rate this solution as a nine out of 10.
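To illustrate the isolation concern in the simplest possible terms: a redo-log-based parser sees changes for every schema on a shared instance, so something has to scope events before each team consumes them. The event shape below is hypothetical, invented for this illustration; it is not how Equalum exposes changes.

```python
ALLOWED_SCHEMAS = {"FINANCE"}  # schemas this pipeline's owner may read

def scope_events(change_events):
    """Yield only the change events belonging to permitted schemas."""
    for event in change_events:
        if event["schema"] in ALLOWED_SCHEMAS:
            yield event

events = [
    {"schema": "FINANCE", "table": "INVOICES", "op": "INSERT"},
    {"schema": "HR", "table": "SALARIES", "op": "UPDATE"},  # must not leak
]
for e in scope_events(events):
    print(e)  # only the FINANCE event is emitted
```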
Managing Director at a consultancy with 11-50 employees
Reseller
Mar 8, 2021
It's a great tool to use in any data transformation opportunity, especially focusing on real-time. The word "batch" should never be used in an organization, going forward. I know batch is useful, it has its use cases, and there are situations where it's helpful. Batch is a form of history, and it will probably always be there until that legacy finally disappears. But, overall, if anybody wants to look at migrating and transforming their overall data into a real-time enterprise, there's not a better tool in the market today in terms of its performance, price, usability, and support. Those four things are the reasons we're selling it.

The biggest thing I have learned from working with Equalum is how difficult it is to actually manage your own Spark and Kafka clusters and to process data at speed. It's difficult to have the infrastructure and all of the other software to combine everything. In some organizations the effort takes hundreds of people, depending on the size of the enterprise, how much data is involved, and the overall system architecture. What opened my eyes was the fact that, with this tool, you have the ability to alleviate all of the potential headaches associated with developing or maintaining your own clusters of these open source products. Large, well-known Asian companies literally have over 1,000 engineers dedicated to managing open source clusters. Those companies are wasting so much money, effort, and brain power by having their engineers focused on managing these really basic things, when they could be deploying a third-party tool like Equalum. They could be letting their engineers drive larger revenue opportunities with more value added around things like what to do with the data, and how to manage the data and the data flow. They could create real value from integrating data from disparate data sources, instead of focusing on the minutiae, such as maintaining clusters. The mindset of some of these Japanese and Korean companies is back in 1995. That's the reality. It's a challenge because getting them to change their older business approach and ideology is always difficult. But that is what opened my eyes: the fact that this tool can literally alleviate thousands of people doing a job that they don't need to be doing.

As for the data quality that results from using the tool, it's dependent upon the person who's using it. If you are able to transform the data and take the information you want out of it, then it can help your data quality. You can clean up the data and land it in whatever type of structure you would like. If you know what you're doing, you can create really high-quality data that is specific to the needs of the organization. If you don't know what you're doing, and you don't know how to use the tool, you can create more problems. But for the most part, it does allow for data cleansing and the ability to create higher-quality data.

When it comes to Oracle Binary Log Parser as opposed to LogMiner, my understanding is that it has higher performance capabilities in terms of transacting data. It allows data migration and/or replication, as well as the transaction processing that takes place in the database, to happen in a much more performant way. The ability to handle those types of binary log requirements is really important to get the performance necessary. Equalum has a partnership with Oracle, and Oracle seems to be very open and very positive about the partnership.
Although we are replacing a lot of the Oracle GoldenGate legacy products, they don't seem to be too worried, because we're still connecting to their databases and, in some cases, landing data into their cloud-based data infrastructure as well. There's definitely a lot of power in the relationship between Equalum and Oracle. It's a very strong benefit for both the product and the company.

From a business point of view, as a partner, we are continuing to educate resellers and bring them up to speed about the new kid on the block. It's always difficult being a new product in these markets because these markets are very risk-averse and they don't really like small companies. They prefer to deal with large companies, even though the large companies' technologies are somewhat outdated. It's a challenge for us to educate them and make them aware that their risk is going to be low. That's what's important at the end of the day: it's about lowering risk for partners. If adopting a new technology increases their risk, even if the performance of the technology is better, they won't go along with it, which is a very different mindset from North America, in my opinion.

Overall, we sell a combination of multiple products in the real-time data and big data/AI space, and Equalum is the core of our offering. It literally provides the plumbing to the house. And once you get the plumbing installed, it's generally going to be there for a long time. As long as it fits performance requirements and continues to evolve and adapt, most companies are really happy to keep it.
Database Administrator at an energy/utilities company with 1,001-5,000 employees
Real User
Feb 24, 2021
We didn't use the transformation part much at first; we didn't initially know about it. For example, if I have a number column in the source and I want to round up the figures, or do some string transformation like find and replace, I can do that directly with the transformation operators. We obviously used it for replication before; now, we are using it for transformation as well. If you want strong replication between any source and target with JDBC, go for Equalum. It's simple, easy to use, and requires less maintenance and fewer tasks. The tool takes care of all your requirements, so you don't need to do daily backup and restore tasks. It is a straightforward tool. So, if you're using ETL, try Equalum. It is the best bet. I would rate the solution as 10 out of 10. I have no issues so far.
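Here is a minimal sketch of the two transformations described, expressed in plain Spark DataFrame operations rather than Equalum's transformation operators; the column names and data are made up for the example:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import round as sql_round, regexp_replace, col

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

df = spark.createDataFrame(
    [(1, 19.987, "ACME Corp."), (2, 5.344, "ACME Inc.")],
    ["id", "amount", "customer"],
)

transformed = (df
    # Round the numeric column to two decimal places.
    .withColumn("amount", sql_round(col("amount"), 2))
    # String find/replace: strip periods from customer names.
    .withColumn("customer", regexp_replace("customer", r"\.", "")))

transformed.show()  # amounts rounded; '.' removed from customer values
```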
Equalum is a fully managed, end-to-end data integration and real-time data streaming platform, powered by industry-leading change data capture (CDC) technology and modern data transformation capabilities (streaming ETL and ELT). Equalum's enterprise-grade platform features an intuitive UI that lets you build robust, real-time data pipelines in minutes.