The solution can be deployed in the cloud or on-premises.
Co-Founder at FORMCEPT Technologies
Handles large data volumes and both cloud and on-premises deployments, but is difficult to use
Pros and Cons
- "Apache Spark can do large volume interactive data analysis."
- "Apache Spark is very difficult to use; it requires a data engineer. It is not accessible to every engineer, because they need to understand the different concepts of Spark, which are very difficult and not easy to learn."
What is our primary use case?
We are using Apache Spark for large-volume interactive data analysis.
How has it helped my organization?
MechBot is an enterprise, one-click installation, trusted data excellence platform. Underneath, I am using Apache Spark, Kafka, Hadoop HDFS, and Elasticsearch.
What is most valuable?
Apache Spark can do large volume interactive data analysis.
What needs improvement?
Apache Spark is very difficult to use; it requires a data engineer. It is not accessible to every engineer, because they need to understand the different concepts of Spark, which are very difficult and not easy to learn.
Buyer's Guide
Apache Spark
December 2024
Learn what your peers think about Apache Spark. Get advice and tips from experienced pros sharing their opinions. Updated: December 2024.
824,067 professionals have used our research since 2012.
For how long have I used the solution?
I have been using Apache Spark for approximately 11 years.
What do I think about the stability of the solution?
The solution is stable.
What do I think about the scalability of the solution?
Apache Spark is scalable. However, it needs enormous technical skills to make it scalable. It is not a simple task.
We have approximately 20 people using this solution.
How was the initial setup?
If you want to distribute Apache Spark in a certain way, it is simple, but not every engineer can do it; specialized DevOps skills with Spark are required.
If we are going to deploy the solution as a single-node laptop installation, it is very straightforward, but that is not what someone is going to deploy at a production site.
What's my experience with pricing, setup cost, and licensing?
Since we are using the Apache version of Spark, not the Databricks version, it is under the Apache license, and support and bug resolution are actually late or delayed. The Apache license is free.
What other advice do I have?
We are well versed in Spark: the version we use, its internal structure, and exactly what Spark is doing.
The solution cannot be made easier. Not everything can be simplified, because it involves core data science, computer science, and engineering, and not many people are actually aware of these.
I rate Apache Spark a six out of ten.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Co-Founder at FORMCEPT Technologies
Offers good machine learning, data learning, and Spark Analytics features
Pros and Cons
- "The features we find most valuable are the machine learning, data learning, and Spark Analytics."
- "We've had problems using a Python process to try to access something in a large volume of data. It crashes if somebody gives me the wrong code because it cannot handle a large volume of data."
What is our primary use case?
We have built a product called "NetBot." We take any form of data (large email data, images, videos, or transactional data), transform unstructured textual and video data into structured form, read it into transactional data, and create an enterprise-wide smart data grid. That smart data grid is then used by downstream analytics tools. We also provide model building so people can get faster insight into their data.
What is most valuable?
We use all the features. We use it for end-to-end. All of our data analysis and execution happens through Spark.
The features we find most valuable are the:
- Machine learning
- Data learning
- Spark Analytics.
What needs improvement?
We've had problems using a Python process to try to access something in a large volume of data. It crashes if somebody gives me the wrong code because it cannot handle a large volume of data.
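Crashes like this often happen when a large distributed dataset is pulled back into a single Python process all at once (for example via an unguarded `collect()` in PySpark). As a plain-Python illustration of the safer pattern, not the Spark API itself, the sketch below aggregates in bounded chunks so memory stays limited regardless of input size (the function names here are illustrative):

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

def streaming_sum(records, chunk_size=10_000):
    """Aggregate incrementally so peak memory is bounded by chunk_size,
    instead of materializing every record in one list first."""
    total = 0
    for chunk in chunked(records, chunk_size):
        total += sum(chunk)  # per-chunk work; Spark does this per partition
    return total

# A generator stands in for a large dataset: nothing is held in memory at once.
print(streaming_sum(x for x in range(1_000_000)))  # 499999500000
```

In Spark terms, the same idea is to keep aggregations on the executors (e.g. `reduce` or DataFrame aggregates) and bring back only the small result, rather than collecting raw rows to the driver.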
For how long have I used the solution?
I have been using Apache Spark for more than five years.
What do I think about the stability of the solution?
We haven't had any issues with stability so far.
What do I think about the scalability of the solution?
As long as you do it correctly, it is scalable.
Our users mostly consist of data analysts, engineers, data scientists, and DB admins.
Which solution did I use previously and why did I switch?
Before using this solution we used Apache Storm.
How was the initial setup?
The initial setup is complex.
What about the implementation team?
We installed it ourselves.
What other advice do I have?
I would rate it a nine out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
Vice President at Goldman Sachs at a computer software company with 10,001+ employees
Stable product with a valuable SQL tool
Pros and Cons
- "The product’s most valuable feature is the SQL tool. It enables us to create a database and publish it."
- "At the initial stage, the product provides no container logs to check the activity."
What is our primary use case?
We use the product for extensive data analysis. It helps us analyze a huge amount of data and transfer it to data scientists in our organization.
What is most valuable?
The product’s most valuable feature is the SQL tool. It enables us to create a database and publish it. It is a useful feature for us.
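The workflow described, loading data and then exposing it to SQL queries, looks roughly like the following. Since a Spark cluster is not assumed here, this runnable sketch uses Python's built-in sqlite3 as a stand-in; the comments note the corresponding PySpark calls (`createOrReplaceTempView` and `spark.sql`), and the table name and data are made up for illustration:

```python
import sqlite3

# In Spark: df = spark.createDataFrame(rows, ["name", "revenue"])
rows = [("alpha", 120), ("beta", 340), ("alpha", 80)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (name TEXT, revenue INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

# In Spark: df.createOrReplaceTempView("sales")
#           result = spark.sql("SELECT ... FROM sales ...").collect()
result = conn.execute(
    "SELECT name, SUM(revenue) FROM sales GROUP BY name ORDER BY name"
).fetchall()
print(result)  # [('alpha', 200), ('beta', 340)]
```

The appeal the reviewer points to is that once data is registered this way, anyone who knows SQL can query it without touching the underlying processing code.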
What needs improvement?
At the initial stage, the product provides no container logs to check activity. It remains inactive for a long time without giving us any information. The containers could start quickly, similar to Jupyter Notebook.
For how long have I used the solution?
We have been using Apache Spark for eight months to one year.
What do I think about the stability of the solution?
It is a stable product. I rate its stability an eight out of ten.
What do I think about the scalability of the solution?
We have 45 Apache Spark users. I rate its scalability a nine out of ten.
How was the initial setup?
The complexity of the initial setup depends on the kind of environment an organization is working with. It requires one executive for deployment. I rate the process an eight out of ten.
What's my experience with pricing, setup cost, and licensing?
The product is expensive, considering the setup. However, from a standalone perspective, it is inexpensive.
What other advice do I have?
I advise others to analyze data and understand your business requirements before purchasing the product. I rate it an eight out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Lead Data Scientist at International School of Engineering
A flexible solution that can be used for storage and processing
Pros and Cons
- "The most valuable feature of Apache Spark is its flexibility."
- "Apache Spark's GUI and scalability could be improved."
What is our primary use case?
We use Apache Spark for storage and processing.
What is most valuable?
The most valuable feature of Apache Spark is its flexibility.
What needs improvement?
Apache Spark's GUI and scalability could be improved.
For how long have I used the solution?
I have been using Apache Spark for four to five years.
What do I think about the scalability of the solution?
Around 15 data scientists are using Apache Spark in our organization.
How was the initial setup?
Apache Spark's initial setup is slightly complex compared to other solutions. Data scientists could install our previous tools with minimal supervision, whereas Apache Spark requires some IT support. Apache Spark's installation is a time-consuming process because it requires ensuring that all the ports are accessible, following certain guidelines.
What about the implementation team?
While installing Apache Spark, I must look at the documentation and be very specific about the configuration settings. Only then am I able to install it.
What's my experience with pricing, setup cost, and licensing?
Apache Spark is an expensive solution.
What other advice do I have?
I would recommend Apache Spark to other users.
Overall, I rate Apache Spark an eight out of ten.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Data Engineer at Berief Food GmbH
A useful and easy-to-deploy product that has an excellent data processing framework
Pros and Cons
- "The data processing framework is good."
- "The solution must improve its performance."
What is our primary use case?
Our customers configure their software applications, and I use Apache Spark to check them. We use it for data processing.
What is most valuable?
The data processing framework is good. The product is very useful.
What needs improvement?
The solution must improve its performance.
For how long have I used the solution?
I have been using the solution for four to five years.
What do I think about the stability of the solution?
The tool is stable. I rate the stability more than nine out of ten.
What do I think about the scalability of the solution?
We have a small business. Around four people in my organization use the solution.
How was the initial setup?
The deployment was easy.
What about the implementation team?
The solution was deployed with the help of third-party consultants.
What other advice do I have?
Overall, I rate the product more than eight out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Director - Data Management, Governance and Quality at Hilton Worldwide
Powerful language but complicated coding
What is our primary use case?
Ingesting billions of rows of data all day.
How has it helped my organization?
Spark on AWS is not that cost-effective as memory is expensive and you cannot customize hardware in AWS. If you want more memory, you have to pay for more CPUs too in AWS.
What is most valuable?
Powerful language.
What needs improvement?
It is like going back to the '80s, given the complicated coding that is required to write efficient programs.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Quantitative Developer at a marketing services firm with 11-50 employees
Seamless in distributing tasks, including its impressive map-reduce functionality
Pros and Cons
- "The distribution of tasks, like the seamless map-reduce functionality, is quite impressive."
- "When using Spark, users may need to write their own parallelization logic, which requires additional effort and expertise."
What is our primary use case?
Predominantly, I use Spark for data analysis on top of datasets containing tens of millions of records.
How has it helped my organization?
I have an example. We had a single-threaded application that used to run for about four to five hours, but with Spark, it got reduced to under one hour.
What is most valuable?
The distribution of tasks, like the seamless map-reduce functionality, is quite impressive. For the user, it appears as simple single-line data manipulations, but behind the scenes, the executor pool intelligently distributes the map and reduce functions.
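What looks like a single-line manipulation decomposes into a map step applied to each partition independently and a reduce step that merges the partial results. A plain-Python sketch of that decomposition (sequential here, and with illustrative data; Spark's contribution is running the map calls concurrently on the executor pool):

```python
from collections import Counter
from functools import reduce

# Each inner list stands in for one partition of a distributed dataset.
partitions = [
    ["spark", "map", "reduce"],
    ["map", "map", "spark"],
    ["reduce", "spark"],
]

# Map: each partition is counted independently (the parallelizable work).
partial_counts = [Counter(p) for p in partitions]

# Reduce: partial results are merged pairwise into the final answer.
totals = reduce(lambda a, b: a + b, partial_counts)
print(dict(totals))  # {'spark': 3, 'map': 3, 'reduce': 2}
```

The user-facing API hides both steps behind a single call; the executor pool handles where each map runs and how the merges are scheduled.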
What needs improvement?
The visualization could be improved.
For how long have I used the solution?
I have been working with Apache Spark for only a few months, not too long.
What do I think about the stability of the solution?
I haven't faced any stability issues. It has been stable in my experience.
What do I think about the scalability of the solution?
When it comes to the scalability of Spark, it's primarily a processing engine, not a database engine. I haven't tested it extensively with large record sizes.
In my organization, quite a few people are using Spark. In my smaller team, there are only two users.
What about the implementation team?
In terms of maintenance, when the load hits around 95%, we need to prioritize scripts and analysis within the team.
We coordinate and prioritize based on the available resources. If there were self-service tools or better hand-holding for such situations, it would make things easier.
Which other solutions did I evaluate?
Currently, we extensively use pandas and Polars. We are leveraging Docker and Kubernetes as a framework, along with AWS Batch, for distribution. This is the closest substitute we have for Spark's distribution.
Both Docker and Kubernetes are more general-purpose solutions. If someone is already using Kubernetes, and it's provided as a service, it can also be put to special-purpose use.
In such cases, users may need to write the parallelization logic themselves, but it's relatively easy to onboard and start with a distributed load. Spark, on the other hand, is primarily a special-purpose tool; users typically choose Spark when they have data-intensive tasks.
Another significant issue with Spark is its syntax. For instance, with libraries like pandas or Polars, we can run the same code single-threaded on a single core, or we can distribute it by leveraging Kubernetes.
We don't need to rewrite that code base. However, if we write code specifically for Spark executors, it will not be amenable to running locally.
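The rewrite point can be illustrated in plain Python: when the transformation is an ordinary function over a chunk of data, the execution layer underneath can be swapped (a local `map` here, a thread pool, or in principle a cluster scheduler) without touching the logic, whereas code written directly against a specific executor API is tied to that runtime. A sketch with illustrative names and data:

```python
from concurrent.futures import ThreadPoolExecutor

def transform(chunk):
    """Ordinary, runtime-agnostic logic: square values, keep even results."""
    return [x * x for x in chunk if (x * x) % 2 == 0]

chunks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Single-threaded, local: plain map over the chunks.
local = [r for c in map(transform, chunks) for r in c]

# "Distributed" style: the same function, a different execution layer.
with ThreadPoolExecutor(max_workers=3) as pool:
    pooled = [r for c in pool.map(transform, chunks) for r in c]

assert local == pooled  # identical results, no rewrite of transform()
print(local)  # [4, 16, 36, 64]
```

This is the property the reviewer misses in Spark: pandas- or Polars-style code keeps its shape as it scales out, while Spark-executor code generally has to be authored for Spark from the start.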
What other advice do I have?
I would recommend understanding the use case better. Only if it fits your use case, then go for it. But it is a great tool.
Overall, I would rate Apache Spark an eight out of ten.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Software Consultant at a tech services company with 10,001+ employees
It provides large-scale data processing with negligible latency at the cost of commodity hardware.
Valuable Features:
The most important feature of Apache Spark is that it provides large-scale data processing with negligible latency at the cost of commodity hardware. The Spark framework is a blessing compared to Hadoop, as the latter does not allow fast processing of data, which Spark accomplishes through in-memory data processing.
Improvements to My Organization:
Apache Spark is a framework that allows an organization to perform business and data analytics at a very low cost compared to Ab-Initio or Informatica. Thus, by using Apache Spark in place of those tools, an organization can achieve a huge reduction in cost without compromising on data security or other data-related issues, provided it is controlled by an expert Scala programmer. Apache Spark also does not bear Hadoop's overhead of high latency. My organization benefits from all of these points as well.
Room for Improvement:
The question of improvement always comes to developers' minds. Like the most common request from developers, if a user-friendly GUI with a drag-and-drop feature could be attached to this framework, it would be easier to use.
Another thing to mention: there is always room for improvement in terms of memory usage. If, in the future, it becomes possible to use less memory for processing, that would obviously be better.
Deployment Issues:
We've had no issues with deployment.
Stability Issues:
See above regarding memory usage.
Scalability Issues:
We've had no issues with scalability.
Other Advice:
My advice to others would be to use Apache Spark for large-scale data processing, as it provides good performance at low cost, unlike Ab-Initio or Informatica. The main problem is that there are currently not many people on the market certified in Apache Spark.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Buyer's Guide
Download our free Apache Spark Report and get advice and tips from experienced pros
sharing their opinions.
Updated: December 2024
The drag and drop GUI comment is very true. We developed such a GUI for spatial and time series data in Spark. But there are other tools out there. Maybe you should do a review of such tools.