The primary use case is using Qlik Replicate to interface with DB2, allowing for efficient data replication from various sources into a common destination or other targets. It is particularly beneficial when dealing with legacy systems.
Qlik Compose is essentially an integration tool that Qlik acquired from an Israeli IT company so that it could move into the data integration space. There are two tools: Qlik Replicate, which replicates the data itself, and Qlik Compose, which comes in after the replication is done. For example, if users have ERP data in an entity-relationship model, they can offload that reporting load by pulling the data through Replicate instead of building ETL jobs on the same source system, but that alone does not give them a data warehouse. That is where Compose comes into the picture in the Qlik stack: it helps users automate the warehouse build quickly. Users define the relationships between the tables that exist in the OLTP system, and based on that, Compose automatically designs the dimensions, the facts, and the relationships, and creates the tables, much like what users do in Erwin. In Erwin, users define the relationships and an SQL script is generated; here it is about 60% automation. Users define the relationships, and Compose identifies the dimensions, works out which attributes belong in each, and generates the code for them. That is the data modeling part, so the advantage is that data modeling is automated. Then there are the dimension and fact ETL processes users would otherwise need to develop, such as slowly changing dimensions and late-arriving dimensions, which follow similar patterns and are time-consuming to build. Those can usually be automated in Compose as well. So overall, the claim is that about 60% of the process can be automated: users put in some manual effort to define relationships for the data modeling, and in the ETL process they manually define some connections, map the attributes together, and specify what they need; after that, the rest is automated. That is where the 60% figure comes from.
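To make that automation concrete, the dimension-load logic the reviewer says Compose generates is the standard slowly changing dimension pattern. Below is a minimal, hand-written Python sketch of a Type 2 SCD merge, purely for illustration; it is not output from Qlik Compose, and the customer_id key and column names are hypothetical.

```python
from datetime import date

# Existing dimension rows: one "current" row per business key (customer_id),
# plus historical rows that have been closed off with an end_date.
dimension = [
    {"customer_id": 1, "name": "Acme", "city": "Austin",
     "start_date": date(2020, 1, 1), "end_date": None, "is_current": True},
]

def apply_scd2(dimension, incoming, tracked_cols, today):
    """Apply a batch of source rows to a Type 2 slowly changing dimension."""
    for row in incoming:
        current = next(
            (d for d in dimension
             if d["customer_id"] == row["customer_id"] and d["is_current"]),
            None,
        )
        if current is None:
            # New business key: insert a fresh current row.
            dimension.append({**row, "start_date": today,
                              "end_date": None, "is_current": True})
        elif any(current[c] != row[c] for c in tracked_cols):
            # A tracked attribute changed: close the old row, open a new one.
            current["end_date"] = today
            current["is_current"] = False
            dimension.append({**row, "start_date": today,
                              "end_date": None, "is_current": True})
        # Otherwise the row is unchanged and nothing is written.
    return dimension

# Example: the customer moved, so history is preserved as a closed-off row.
incoming = [{"customer_id": 1, "name": "Acme", "city": "Dallas"}]
apply_scd2(dimension, incoming, tracked_cols=["name", "city"], today=date(2024, 1, 1))
```

In practice a tool like Compose emits the equivalent SQL against the target warehouse; the reviewer's point is that this kind of per-table pattern, along with late-arriving dimension handling, does not have to be hand-coded.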
We basically use it for our data analytics purposes. Our client is a major global semiconductor company. We use the tool to prepare data for analysis.
Senior Vice President at Polestar Solutions & Services India Pvt Ltd
Reseller
Top 10
Sep 29, 2023
Qlik Replicate is used when a company wants to build some sort of universal bus, a data chain where you have a continuous feed of data flowing from source to target. The solution helps you hook into that data over the bridge at any time or granularity.
Our main use case is replicating a source database to another database in order to have the information ready for our analytic users close to real-time and for use in different business intelligence software. I'm a solutions architect.
Business Intelligence Architect at a manufacturing company with 10,001+ employees
Real User
Aug 30, 2022
Most of the use cases involve change data capture and data replication. The main use case is data replication. All real-time data migrations happen through Qlik.
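For context, change data capture (CDC) means the replication tool picks up inserts, updates, and deletes from the source, typically from its transaction log, and applies them to the target continuously. The following is a minimal, hypothetical Python sketch of applying a CDC change stream to a target keyed by primary key; it is illustrative only, not Qlik Replicate's implementation, and the event format is invented for the example.

```python
# Each change event carries an operation type and the row's primary key.
changes = [
    {"op": "insert", "key": 101, "row": {"id": 101, "status": "new"}},
    {"op": "update", "key": 101, "row": {"id": 101, "status": "shipped"}},
    {"op": "delete", "key": 7,   "row": None},
]

def apply_changes(target, changes):
    """Replay a CDC change stream onto a target table keyed by primary key."""
    for event in changes:
        if event["op"] in ("insert", "update"):
            target[event["key"]] = event["row"]   # upsert keeps the target in sync
        elif event["op"] == "delete":
            target.pop(event["key"], None)        # remove the deleted row
    return target

target_table = {7: {"id": 7, "status": "old"}}
apply_changes(target_table, changes)
# target_table now reflects the source: {101: {"id": 101, "status": "shipped"}}
```

This apply step is what a replication tool runs continuously after the initial full load, which is what makes near-real-time replication possible.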
We use Qlik Sense for consolidating and bringing together data across multiple sources, so our business users can get access to that data more easily, as it's sometimes difficult to access because of where it sits.
We use EC2 as the cloud service for Qlik Replicate. We generally use Qlik Replicate from a SQL Server source to a SQL Server destination or to an AWS RDS instance, either a single on-premises server or AWS RDS Aurora PostgreSQL.
Solution Architect at Larsen & Toubro Infotech Ltd.
Real User
Dec 19, 2019
I am a consultant who is using Qlik Replicate for one of our customers. It is primarily used for the historical load, as well as the incremental load. Our current project involves a client that has their legacy data in SAP HANA. They want to transfer all of the data from SAP into Snowflake. It involves terabytes of data and Qlik is being extensively used.
Qlik Replicate is a data replication solution for replicating data from one source database to another for business intelligence software. It offers data manipulation and transformations, replication without impacting source databases, and ease of use without needing ETL. The solution is stable and user-friendly, with detailed logging and support.
Qlik Replicate has improved the organization by allowing each team to replicate their data into a single-source data location. The most...
We use the tool as a plugin for CDC.
We are transferring our source data through Qlik Replicate.
There were a variety of sources which we needed to replicate data from, such as databases and file systems.
We use this solution for data warehouse replication. It is used by three network developers in our company.
Qlik Replicate is a product used to replicate data.
Our primary use case is to transform and modernize our analytics environment and modernize the data pipelines.