The distributed processing is excellent.
As part of the solution, Spark is very good.
The performance is pretty good.
When it comes to visualization tools, Apache Hadoop is very slow.
It lacks a proper query language. We have to use Apache Hive. Even so, the query language still has limitations and only a bit of documentation, and many of the visualization tools do not have direct connectivity. They need something like BigQuery, which is very fast. We need those capabilities to be available in the cloud and to be scalable.
The solution needs to be more powerful and to offer better availability for handling queries.
The solution is very expensive.
I've been using the solution for about five years now.
The solution is stable and offers good performance. It doesn't crash or freeze. It's not buggy at all.
You can scale the solution if you need to. We find that it's pretty easy to expand it out.
There were about 13-20 people using it at any given time.
The technical support was pretty good. It's my understanding that the company was pretty satisfied with the level of support they received. They were knowledgeable and responsive.
I've also worked with MySQL and Postgres. Hadoop is more for analytical processing. While the others claim to offer distributed processing, Hadoop is far better in that regard. It's excellent compared to other options.
The initial setup was pretty straightforward. It was not overly complex for our team.
The solution isn't cheap. It's quite costly.
The solution is perfect for those dealing with a huge amount of data. Still, you need to check to make sure it meets your company's requirements. You need to understand them before actually choosing the technology you'll ultimately use.
Overall, I would rate the solution at a seven out of ten.
The primary use is as a data lake.
Using this solution has allowed us to consolidate the data. It has made it such that data science-based algorithms can be written for predictive analytics.
The most valuable features are powerful tools for ingestion, as data is in multiple systems.
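As a rough illustration of the multi-source ingestion this review describes (the source names and record shapes here are hypothetical, not from any specific tool), a landing-zone loader might tag each record with lineage metadata before it enters the lake:

```python
from datetime import datetime, timezone

def ingest(records, source):
    """Tag raw records from one upstream system with lineage metadata
    before landing them in the lake (hypothetical field names)."""
    return [
        {
            "source": source,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "payload": rec,
        }
        for rec in records
    ]

# Two hypothetical upstream systems with different record shapes.
crm_rows = [{"customer": "A-100", "email": "a@example.com"}]
erp_rows = [{"order_id": 7, "total": 129.50}]

# The lake accepts both shapes; the "source" tag preserves origin.
landing_zone = ingest(crm_rows, "crm") + ingest(erp_rows, "erp")
print(len(landing_zone), landing_zone[0]["source"])
```

Keeping the payload untouched and adding metadata around it is what lets heterogeneous systems land in one lake without a shared schema up front.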
It would be helpful to have more information on how to best apply this solution to smaller organizations, with less data, and grow the data lake.
I have been using Apache Hadoop for two years.
We are primarily dumping all the prior payment transaction data into a Hadoop system, and then we use some plug-and-play analytics tools to translate it.
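To sketch the kind of rollup an analytics tool would perform over such a transaction dump (the field names and values below are invented for illustration):

```python
from collections import defaultdict

# Hypothetical payment transactions as they might land in the store.
transactions = [
    {"merchant": "acme", "amount": 120.00, "status": "settled"},
    {"merchant": "acme", "amount": 35.50,  "status": "settled"},
    {"merchant": "globex", "amount": 99.99, "status": "declined"},
]

def settled_totals(rows):
    """Sum settled amounts per merchant, the kind of aggregation an
    analytics layer translates the raw dump into."""
    totals = defaultdict(float)
    for row in rows:
        if row["status"] == "settled":
            totals[row["merchant"]] += row["amount"]
    return dict(totals)

print(settled_totals(transactions))  # {'acme': 155.5}
```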
The most valuable feature is the database.
We're finding vulnerabilities in running it 24/7. We're experiencing some downtime that affects the data.
It would be good to have more advanced analytics tools.
I've been using the solution for five years.
The solution is scalable. From a payments perspective, we're using the solution on a large scale.
We've never contacted technical support.
We didn't previously use a different solution.
The initial setup was complex. There was a lot of data that we had to bring over from various sources and it was quite a long process.
We did have some assistance with the implementation.
We use the on-premises deployment model.
We're more inclined towards an operational data source to fill our customer's needs. Hadoop is good for analytics and some reporting requirements.
It's a good solution for those needing something for the purposes of management reporting.
I'd rate the solution eight out of ten.
The solution is perfect for when you have big data. It's good for managing and replication.
It's good for storing historical data and handling analytics on a huge amount of data.
It could be because the solution is open source, and therefore not funded like bigger companies, but we find the solution runs slow.
The solution isn't as mature as SQL or Oracle and therefore lacks many features.
The solution could use a better user interface. It needs a more effective GUI in order to create a better user environment.
I've been using the solution for seven years.
The solution is stable.
I've used the solution under cloud, hybrid and on-premises deployment models.
I'd recommend the solution, but it depends on the company's requirements. If you don't have huge amounts of data, you probably don't need Hadoop. If you need a completely private environment, and you have lots of big data, consider Hadoop. You don't even need to invest in the infrastructure as you can just use a cloud deployment.
I'd rate the solution seven out of ten. I'd rate it higher if it had a better user interface.
We primarily use the solution for the enterprise data hub and big data warehouse extension.
The ability to add multiple nodes without any restriction is the solution's most valuable aspect.
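As a rough sketch of why unrestricted node addition matters, the toy placement below uses simple round-robin assignment (a simplification; real HDFS uses a rack-aware placement policy and replicates each block) to show how the same blocks spread thinner as nodes join:

```python
def place_blocks(block_ids, nodes):
    """Assign each block to a node round-robin, a simplified stand-in
    for a distributed file system's placement policy (no replication)."""
    return {b: nodes[i % len(nodes)] for i, b in enumerate(block_ids)}

blocks = list(range(12))
layout3 = place_blocks(blocks, ["n1", "n2", "n3"])

# Adding a fourth node spreads the same blocks thinner per node,
# which is the "add nodes without restriction" scaling model.
layout4 = place_blocks(blocks, ["n1", "n2", "n3", "n4"])

per_node3 = sum(1 for n in layout3.values() if n == "n1")  # 4 blocks on n1
per_node4 = sum(1 for n in layout4.values() if n == "n1")  # 3 blocks on n1
print(per_node3, per_node4)
```

Each new node lowers the per-node storage and processing load without reorganizing the data model, which is what makes horizontal scaling cheap here.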
What needs improvement depends on the customer and the use case. Classical Hadoop, for example, we consider an old variant; most customers now work with flash data.
There is a very wide application for this solution, but in enterprise companies, if you work with classical BI systems, it would be good to include an additional presentation layer for BI solutions.
There is a lack of visualization and presentation layers, so you can't take it and implement it as a ready-made solution.
We've been working with the solution for three to four years.
The solution is stable. It has very good disaster stability and multi-rack configuration.
It is possible to scale the solution. We work with companies that have hundreds of users.
The initial setup might not be straightforward for our customers, but it's easy enough for us to handle. However, if we don't build a proof of concept for the company first it may take some time and be quite complex. Pilot projects take about three months to deploy and full spec projects take up to a year because we have to work in all requirements in data governance, security, etc.
We originally built on Hortonworks tech which didn't require any licensing, but that is getting discontinued in 2022, so it's been proposed we move to Cloudera which will have licensing costs associated with it.
We use the on-premises deployment model. It's a requirement for the company we work with, which is a bank. Often customers demand we work with on-premises deployment models.
I'd rate the solution seven out of ten. In terms of the ability to build middleware and offer scalability, it would be ten out of ten from me. However, if you take into account only the visualization, I'd only rate it at three or four out of ten.
The primary use case of this solution is data engineering and data files.
The deployment model we are using is private, on-premises.
We don't use many of the Hadoop features, like Pig or Sqoop, but what I like most is the Ambari feature. You have to use Ambari; otherwise, it is very difficult to configure.
What comes with the standard setup is what we mostly use, but Ambari is the most important.
Hadoop itself is quite complex, especially if you want it running on a single machine, so to get it set up is a big mission.
It seems that Hadoop is on its way out and Spark is the way to go. You can run Spark on a single machine and it's easier to set up.
In the next release, I would like to see Hive become more responsive for smaller queries, with reduced latency. I don't think this is viable, but if it is possible, then lower latency on smaller queries for analysis and analytics would be welcome.
I would like a smaller version that can be run on a local machine. There are installations that do that but are quite difficult, so I would say a smaller version that is easy to install and explore would be an improvement.
This solution is stable, but sometimes starting up can be quite a mission. With a full, proper setup it's fine, but it's a lot of work to look after, and to start up and shut down.
This solution is scalable, and I can scale it almost indefinitely.
We have approximately two thousand users, half of the users are using it directly and another thousand using the products and systems running on it. Fifty are data engineers, fifteen direct appliances, and the rest are business users.
There are several forums on the web, and a Google search works fine. There is a lot of information available, and it usually helps.
They also have good support in regards to the implementation.
I am satisfied with the support. Generally, there is good support.
We used more traditional database solutions such as SAP IQ and data marts, but now it's changing more towards data science and big data.
We are a smaller infrastructure, so that's how we are set up.
The initial setup is quite complex if you have to set it up yourself. Ambari makes it much easier, but on the cloud or local machines, it's quite a process.
It took at least a day to set it up.
I did not use a vendor. I implemented it myself on the cloud with my local machine.
There was an evaluation, but the decision was to implement a data lake on the Hortonworks Data Platform.
It's good for what it is meant to do, which is handling a lot of big data, but it's not as good for low-latency applications.
If you have to perform quick interactive queries or analytics, it can be frustrating.
It is useful for what it was intended to be used for.
I would rate this solution a seven out of ten.
We primarily use this product to integrate legacy systems.
It helps us work with older products and more easily create solutions.
The most valuable thing about this program for us is that it is very powerful and very cheap. We're using a lot of the program's modules and features because we're using software and hardware that can be difficult to integrate. For example, we're using Superset and a lot of old products from difficult systems. We love having the various options and features that allow us to work with flexibility.
We are using HDTM circuit boards, and I worry about the future of this product and compatibility with future releases. It's a concern because, for now, we do not have a clear path to upgrade. Hadoop is now at version three, and we'd like to upgrade to that version, but as far as I know, it's not a simple thing.
There are a lot of features in this product that are open-source. If something isn't included with the distribution we are not limited. We can take things from the internet and integrate them. As far as I know, we are using Presto which isn't included in HDP (Hortonworks Data Platform) and it works fine. Not everything has to be included in the release. If something is outside of HDP and it works, that is good enough for me. We have the flexibility to incorporate it ourselves.
The product is well tested and very stable. We have no problems with the stability of it at all. Really we just install it and forget about fussing with it. We just use the features it offers to be productive.
This is a scalable solution and we like what it does. It is currently serving about 100 users at our organization and it seems like it can handle more easily.
We actually have not used technical support. Everything we needed a solution for we just use Google and it's enough for us. Sometimes we do have issues, but not often. The issues are mainly to do with the terminals because it's a bit complicated to integrate these other systems. We have managed to solve all the problems up till now.
We had a very old version of Hadoop which was already installed by another company and we upgraded it. We didn't really switch we just upgraded what was here.
The initial setup wasn't very easy because of the security involved, but we managed to get past that. It's sort of simple, in my opinion, once you get past that part. I think, in all, it took about half a year. But it wasn't a new deployment; it was an upgrade, and the bigger challenge was moving the data. We pretty much just supported the existing product and moved to HDP.
We have everything on-premises and we did the deployment and maintenance.
It took four people. We want to increase usage of Hadoop and we are thinking about it very heavily. We're actually in the process of doing it. At the same time, we are integrating things from other systems to Hadoop.
I would give this product a rating of eight out of ten. It would not be a ten out of ten because of some problems we are having with the upgrade to the newer version. It would have been better for us if these problems were not holding us back. I think eight is good enough.
We use this solution for our Enterprise Data Lake.
Using this solution has reduced the overall TCO. It has also improved processing time for machine data and provides greater insight into our unstructured data.
The most valuable features are the ability to process the machine data at a high speed, and to add structure to our data so that we can generate relevant analytics.
We would like to have more dynamics in merging this machine data with other internal data to make more meaning out of it.
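A minimal sketch of what "adding structure to machine data and merging it with internal data" can look like in practice (the log format, sensor IDs, and site table here are all hypothetical):

```python
import re

# Hypothetical raw machine output: unstructured text lines.
raw = [
    "2024-01-05 12:00:01 sensor=42 temp=71.3",
    "2024-01-05 12:00:02 sensor=17 temp=68.9",
]

# Hypothetical internal reference data to merge in.
sensor_sites = {42: "line-A", 17: "line-B"}

pattern = re.compile(r"sensor=(\d+) temp=([\d.]+)")

def structure(lines):
    """Extract structured records from raw machine data and enrich
    them with internal reference data."""
    out = []
    for line in lines:
        m = pattern.search(line)
        if m:
            sensor = int(m.group(1))
            out.append({
                "sensor": sensor,
                "temp": float(m.group(2)),
                "site": sensor_sites.get(sensor, "unknown"),
            })
    return out

records = structure(raw)
print(records[0]["site"])  # line-A
```

Once the records are structured and joined to internal data, standard analytics (aggregation by site, trend detection) become straightforward.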
