It's a good log management platform, and in terms of infrastructure management it works well. When I review the logs, I can easily create filters for them across different data sources, so my experience has been good. We mainly use it for application logs, for example from EC2 instances and Kubernetes clusters running in AWS. We push those logs to Stack Logs and to Elasticsearch, and for Elasticsearch we view them in Kibana, where we mostly use filters to select and understand the logs.
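The filtering described above can be sketched as an Elasticsearch query DSL body of the kind Kibana generates. This is a minimal illustration only; the field names (`source`, `log.level`, `@timestamp`) are assumed examples, not details from the review.

```python
# Build a bool-filter query of the kind used to narrow application logs
# by source, severity, and time window. Field names are hypothetical.

def build_log_filter(source: str, level: str, since: str) -> dict:
    """Return an Elasticsearch query body matching logs from one source."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"source": source}},        # e.g. "kubernetes"
                    {"term": {"log.level": level}},      # e.g. "error"
                    {"range": {"@timestamp": {"gte": since}}},  # e.g. "now-1h"
                ]
            }
        }
    }

query = build_log_filter("kubernetes", "error", "now-1h")
```

A body like this can be passed to the search API or pasted into Kibana's query editor; the `filter` clauses skip relevance scoring, which is usually what you want for log triage.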
The purpose of the solution for us is data streaming: Amazon Kinesis receives the data, and from there it goes two ways, one copy into an S3 bucket and the other into Elasticsearch, where we analyze it. In the Kibana dashboard, the team checks whether the data is relevant or not.

Another place we use Elasticsearch is for data coming from the source. We receive data from web services, it lands in different buckets, and it finally loads into Snowflake. Before entering Snowflake, it passes through other applications, and we are on that application side. For the incoming data, we use DynamoDB and Elasticsearch to implement a retry mechanism: if any message fails, the retry is handled through Elasticsearch and DynamoDB. In DynamoDB we store some SQL-like data, similar to PostgreSQL, and the mechanism works based on that. Once the data is processed, we do payload confirmation, so if a new user comes in, we use a UUID and build the payload in the proper format for the main application, which is Braze. Braze is a US-based company whose software is used to send notifications and run campaigns; we sit at the other end of the pipeline and have to fulfill whatever requests the Braze application sends. Regarding Redshift, that came up in the migration project because of PostgreSQL.
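The retry bookkeeping and payload preparation described above can be sketched roughly as below. This is a minimal sketch under stated assumptions: a plain dict stands in for the DynamoDB table, and the attempt limit, field names, and `build_payload` format are all hypothetical, not details from the review.

```python
import uuid

MAX_RETRIES = 3          # assumed policy, not from the review
retry_table: dict = {}   # message_id -> attempt count (DynamoDB stand-in)

def handle_message(message_id: str, deliver) -> str:
    """Try to deliver a message, recording failures for a later retry."""
    attempts = retry_table.get(message_id, 0)
    if attempts >= MAX_RETRIES:
        return "dead-letter"           # give up after too many failures
    try:
        deliver()
    except Exception:
        retry_table[message_id] = attempts + 1
        return "retry"                 # failure recorded for reprocessing
    retry_table.pop(message_id, None)  # success: clear any retry record
    return "delivered"

def build_payload(user_id, body: dict) -> dict:
    """Assign a UUID to new users and wrap the body in an assumed format."""
    return {"user_id": user_id or str(uuid.uuid4()), "payload": body}
```

In the real pipeline the retry state would live in DynamoDB (e.g. via `boto3`) and the formatted payload would be forwarded to the downstream Braze-facing application.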
IT Solutions Architect / Computer Scientist at Practical Semantics
Real User
Top 5
2023-03-08T19:41:00Z
Mar 8, 2023
Amazon Elasticsearch is a search engine. We set up an index, moved the content into it to get it indexed, and provided end users with a fairly full-featured search, including the ability to write their own Boolean queries.
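User-written Boolean expressions like those mentioned above are typically supported through Elasticsearch's `query_string` query, which parses AND/OR/NOT operators and quoted phrases. A minimal sketch, assuming the user's raw expression is passed straight through:

```python
# Wrap a user-authored Boolean expression in a query_string query.
# Elasticsearch parses the operators itself; default_operator controls
# how bare terms are combined when the user writes no operator.

def boolean_search(user_query: str) -> dict:
    """Return a search body for a user-written Boolean expression."""
    return {
        "query": {
            "query_string": {
                "query": user_query,
                "default_operator": "AND",
            }
        }
    }

body = boolean_search('(error AND timeout) OR "connection refused"')
```

A production version would usually restrict the searchable fields and handle syntax errors from malformed user input, since `query_string` rejects unbalanced expressions.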
Senior Software Engineer at a financial services firm with 10,001+ employees
Real User
Top 20
2020-11-04T20:18:24Z
Nov 4, 2020
We are using Elasticsearch for embedded management and infrastructure management. Our environment includes instances with different tools and components from different companies.
We use the Amazon Elasticsearch systems for Java programming. It charges according to usage, so if the solution is not being used, the client is not charged. In addition, we can resize the instances up or down, for example between the micro and larger tiers, at any time.
We use it to store and search for events that occur within our application, rather than traditional application logs.
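Indexing application events rather than raw log lines, as described above, usually means shaping each occurrence into a small structured document. A minimal sketch; the field set here is an assumed example, not the reviewer's actual schema.

```python
from datetime import datetime, timezone

# Shape an application event into a document suitable for indexing.
# The field names (event.type, actor, details) are hypothetical.

def make_event(event_type: str, actor: str, details: dict) -> dict:
    """Return a timestamped event document for indexing into Elasticsearch."""
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "event.type": event_type,   # e.g. "login", "order.created"
        "actor": actor,             # who or what triggered the event
        "details": details,         # event-specific structured data
    }

doc = make_event("login", "alice", {"ip": "10.0.0.1"})
```

Because each document carries its own structured fields, searching for "all failed logins by one actor" becomes a simple filter query instead of a log-text grep.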