it_user746673 - PeerSpot reviewer
Sr. Software Engineer at a tech vendor with 1-10 employees
Real User
Helped us reduce 3TB Google Ngrams in hours instead of days
Pros and Cons
  • "The most valuable feature is the Fault Tolerance and easy binding with other processes like Machine Learning, graph analytics."
  • "More ML based algorithms should be added to it, to make it algorithmic-rich for developers."

What is most valuable?

The most valuable features are the fault tolerance and the easy integration with other workloads such as machine learning and graph analytics. The community is growing, so executing ML in a distributed fashion works quite well.
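
As a hedged illustration of the kind of integration described here, the sketch below runs an MLlib clustering step and a GraphX PageRank from the same SparkSession on one cluster; the input path, column names, and graph data are hypothetical.

```scala
// Minimal sketch: one SparkSession serving both MLlib and GraphX workloads.
// The input path, column names, and graph data are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.graphx.{Edge, Graph}

object MlAndGraphDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ml-and-graph").getOrCreate()

    // MLlib: cluster rows of a (hypothetical) feature table.
    val df = spark.read.parquet("hdfs:///data/features")   // hypothetical path
    val features = new VectorAssembler()
      .setInputCols(Array("f1", "f2", "f3"))               // hypothetical columns
      .setOutputCol("features")
      .transform(df)
    val model = new KMeans().setK(5).setFeaturesCol("features").fit(features)

    // GraphX: build a tiny graph on the same cluster and run PageRank.
    val sc = spark.sparkContext
    val vertices = sc.parallelize(Seq((1L, "a"), (2L, "b"), (3L, "c")))
    val edges = sc.parallelize(Seq(Edge(1L, 2L, 1.0), Edge(2L, 3L, 1.0)))
    val ranks = Graph(vertices, edges).pageRank(0.001).vertices

    ranks.collect().foreach(println)
    spark.stop()
  }
}
```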

How has it helped my organization?

Previously, we were using Hadoop MapReduce to reduce the Google Ngrams dataset (3TB), which took approximately five days on our cluster. After switching to Spark, we were able to accomplish the same task within hours.
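
A rough sketch of this kind of aggregation is shown below, assuming the Google Ngrams TSV layout (ngram, year, match_count, volume_count); the HDFS paths are hypothetical.

```scala
// Sketch of reducing the Ngrams data with Spark's RDD API.
// Assumes the TSV layout (ngram, year, match_count, volume_count); paths are hypothetical.
import org.apache.spark.sql.SparkSession

object NgramReduce {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ngram-reduce").getOrCreate()
    val sc = spark.sparkContext

    val lines = sc.textFile("hdfs:///data/google-ngrams/*")   // hypothetical input path
    val totals = lines
      .map(_.split("\t"))
      .filter(_.length >= 3)
      .map(fields => (fields(0), fields(2).toLong))           // (ngram, match_count)
      .reduceByKey(_ + _)                                     // total count per ngram

    totals.saveAsTextFile("hdfs:///data/ngram-totals")        // hypothetical output path
    spark.stop()
  }
}
```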

What needs improvement?

This product is already improving, as the community is developing it rapidly. More ML-based algorithms should be added to make it algorithmically richer for developers.

For how long have I used the solution?

Two and a half years.


What do I think about the stability of the solution?

No, I did not encounter any problems with stability. It is also quite backward compatible.

What do I think about the scalability of the solution?

No, I have not encountered any scalability issues so far; it is quite scalable. Using simple scripts, you can add as many workers as you want.

What other advice do I have?

This is a very good product for big data analytics, and it integrates well with other components such as machine learning and graph analytics.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user374040 - PeerSpot reviewer
Systems Engineering Lead, Mid-Atlantic at a tech company with 10,001+ employees
Vendor
It allows you to construct event-driven information systems.

Valuable Features

Spark Streaming, which allows you to construct event-driven information systems and respond to events in near-real time.
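
A hedged sketch of such an event-driven pipeline with the DStream API is shown below, using one-second micro-batches; the host, port, and alert logic are hypothetical.

```scala
// Sketch of an event-driven Spark Streaming job with one-second micro-batches.
// The source host/port and the "event of interest" filter are hypothetical.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object EventDrivenApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("event-driven")
    val ssc = new StreamingContext(conf, Seconds(1))           // one-second batch interval

    val events = ssc.socketTextStream("events.internal", 9999) // hypothetical source
    events
      .filter(_.contains("ERROR"))                             // hypothetical event filter
      .foreachRDD { rdd =>
        // React to the events of each micro-batch in near-real time.
        rdd.take(10).foreach(e => println(s"alert: $e"))
      }

    ssc.start()
    ssc.awaitTermination()
  }
}
```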

Improvements to My Organization

Apache Spark’s ability to perform batch processing at one-second or shorter intervals is the most transformative and least disruptive capability for any data processing application. The ingested data can also be validated and verified for quality early in the data pipeline.
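
The sketch below illustrates the second point, validating ingested records early in the pipeline; the landing paths, schema, and quality rules are hypothetical.

```scala
// Minimal sketch of early data validation: split a raw feed into valid and
// rejected records. Paths, columns, and rules are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object EarlyValidation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("early-validation").getOrCreate()

    val raw = spark.read
      .option("header", "true")
      .csv("hdfs:///ingest/incoming")                          // hypothetical landing area

    // Simple quality rules: required field present and amount is non-negative.
    val valid = raw.filter(col("id").isNotNull && col("amount").cast("double") >= 0)
    val rejected = raw.except(valid)

    valid.write.mode("overwrite").parquet("hdfs:///ingest/validated")    // hypothetical
    rejected.write.mode("overwrite").parquet("hdfs:///ingest/rejected")  // hypothetical
    spark.stop()
  }
}
```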

Room for Improvement

Apache Spark as a data processing engine has come a long way since its inception. Although you can perform complex transformations using the Spark libraries, the support for SQL-based transformations is still limited. You can alleviate some of these limitations by running Spark within the Hadoop ecosystem and leveraging the fairly mature HiveQL.
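
As a hedged sketch of that workaround, current Spark versions can run with Hive support enabled so transformations can be expressed in HiveQL against the Hive metastore; the table and column names below are hypothetical.

```scala
// Sketch of falling back to (Hive-flavoured) SQL for a transformation by enabling
// Hive support. Table and column names are hypothetical.
import org.apache.spark.sql.SparkSession

object HiveQlTransform {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hiveql-transform")
      .enableHiveSupport()          // use the Hive metastore and HiveQL
      .getOrCreate()

    // The transformation expressed in SQL instead of the DataFrame API.
    val result = spark.sql(
      """SELECT customer_id, SUM(amount) AS total
        |FROM sales
        |GROUP BY customer_id
        |HAVING SUM(amount) > 1000""".stripMargin)

    result.write.mode("overwrite").saveAsTable("sales_summary")
    spark.stop()
  }
}
```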

Use of Solution

I've used it for 16 months.

Deployment Issues

Enterprise-scale deployment of Apache Spark is somewhat involved if you want to realize its full potential for stability, scalability, and security. However, some Hadoop vendors, such as Cloudera, have integrated the Spark data processing engine into their Hadoop platforms and have made it easier to deploy, scale, and secure.

Customer Service and Technical Support

This is an open-source technology and depends on community support. The Apache Spark community is vibrant, and it is easy to find answers to questions. Enterprises can also get commercial support from Hadoop vendors such as Cloudera. I recommend that enterprises inspect a Hadoop vendor's commitment to open source, as well as its ability to curate the Apache Spark technology into the rest of its ecosystem, before signing up for commercial support or a subscription.

Initial Setup

The initial setup is straightforward as long as you have picked the right Hadoop distribution.

Implementation Team

I recommend engaging an experienced Hadoop vendor during the planning and initial implementation phases of the project. You will be able to avoid potential pitfalls and reduce overall project time by having a Hadoop expert guide you during the initial stages.

Other Solutions Considered

I evaluated some other technologies, such as Samza, but the community backing for Apache Spark stood out.

Other Advice

I also suggest having a Chief Technologist who has extensive experience architecting Big Data solutions. They should be able to communicate in business as well as technology terms, and their expertise should range from infrastructure to application development, with a command of Hadoop technologies.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user1059558 - PeerSpot reviewer
Portfolio Manager, Enterprise Solutions Architect at Capgemini
Real User
Supports streaming and micro-batch

What is our primary use case?

Streaming telematics data.

How has it helped my organization?

It's a better MapReduce: it supports streaming and micro-batching, as well as Spark ML and Spark SQL.

What is most valuable?

It supports streaming and micro-batch processing.
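
As a hedged sketch of the telematics use case above, the Structured Streaming job below consumes vehicle events in micro-batches; the Kafka broker, topic, and schema are hypothetical, and the Kafka source requires the spark-sql-kafka connector on the classpath.

```scala
// Sketch of a micro-batch telematics stream: average speed per vehicle per minute.
// Broker, topic, and schema are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.functions.{avg, col, from_json, window}
import org.apache.spark.sql.types.{DoubleType, StringType, StructType, TimestampType}

object TelematicsStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("telematics-stream").getOrCreate()

    val schema = new StructType()
      .add("vehicleId", StringType)
      .add("speed", DoubleType)
      .add("eventTime", TimestampType)

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")   // hypothetical broker
      .option("subscribe", "telematics")                  // hypothetical topic
      .load()
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    val avgSpeed = events
      .groupBy(window(col("eventTime"), "1 minute"), col("vehicleId"))
      .agg(avg(col("speed")).as("avgSpeed"))

    avgSpeed.writeStream
      .outputMode("update")
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))      // micro-batch trigger
      .start()
      .awaitTermination()
  }
}
```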

What needs improvement?

It needs better data lineage support.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user374028 - PeerSpot reviewer
Core Engine Engineer at a computer software company with 51-200 employees
Real User
It makes web-based queries for plotting data easier. It needs to be simpler to use the machine learning algorithms supported by Octave.

Valuable Features

  • RDDs
  • DataFrames
  • Machine learning libraries

Improvements to My Organization

It gives us a faster time to parse and compute data, and it makes web-based queries for plotting data easier.
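
A small sketch of that "parse once, query for plots" pattern is shown below; the input path and fields are hypothetical.

```scala
// Sketch: parse raw data into a DataFrame, then serve the aggregate a plotting
// front end might request. Path and column names are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{avg, col}

object PlotQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("plot-query").getOrCreate()

    // Parse raw CSV once into a typed DataFrame.
    val measurements = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/measurements")                   // hypothetical path

    // The kind of aggregate a web UI might chart: average value per device.
    val series = measurements
      .groupBy(col("device"))
      .agg(avg(col("value")).as("avgValue"))
      .orderBy(col("device"))

    series.show()
    spark.stop()
  }
}
```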

Room for Improvement

It needs to be simpler to use the machine learning algorithms that Octave supports (for example, polynomial regression and polynomial interpolation).
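
For comparison, a hedged sketch of how polynomial regression can be approximated today with Spark ML, by expanding features to polynomial terms and fitting a linear model; the sample data is made up.

```scala
// Sketch of polynomial regression in Spark ML: PolynomialExpansion + LinearRegression.
// The (x, y) samples are made up, roughly following y = x^2.
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.PolynomialExpansion
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.ml.linalg.Vectors

object PolyFit {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("poly-fit").getOrCreate()
    import spark.implicits._

    val data = Seq((1.0, 1.1), (2.0, 3.9), (3.0, 9.2), (4.0, 15.8))
      .map { case (x, y) => (Vectors.dense(x), y) }
      .toDF("features", "label")

    val expand = new PolynomialExpansion()
      .setInputCol("features").setOutputCol("polyFeatures").setDegree(2)
    val lr = new LinearRegression().setFeaturesCol("polyFeatures").setLabelCol("label")

    val model = new Pipeline().setStages(Array(expand, lr)).fit(data)
    model.transform(data).select("features", "label", "prediction").show()
    spark.stop()
  }
}
```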

Use of Solution

I've been using it for one year.

Deployment Issues

There have been no issues with the deployment.

Stability Issues

There have been no issues with the stability.

Scalability Issues

There have been no issues with the scalability.

Customer Service and Technical Support

We still rely on user forums for answers; we do not use a commercial distribution yet.

Initial Setup

The initial setup was easy. I have not explored using this on AWS clusters.

Implementation Team

We did an in-house implementation and development for our regression tool.

ROI

The ROI will be an in-house product that does machine learning analytics on data obtained from customers.

Other Solutions Considered

We did not evaluate any other products.

Other Advice

It's easy to use, though it does have a learning curve.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
reviewer1904019 - PeerSpot reviewer
Chief Technology Officer at a tech services company with 11-50 employees
Real User
Helpful support, easy to use, and high availability
Pros and Cons
  • "The most valuable feature of Apache Spark is its ease of use."
  • "Apache Spark can improve the use case scenarios from the website. There is not any information on how you can use the solution across the relational databases toward multiple databases."

What is our primary use case?

I am using Apache Spark for data migration from databases. We have customers who use one database as a data lake.

What is most valuable?

The most valuable feature of Apache Spark is its ease of use.

What needs improvement?

Apache Spark could improve the use case documentation on its website. There is no information on how to use the solution across relational databases or with multiple databases.
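
For reference, a hedged sketch of moving data between two relational databases with Spark's built-in JDBC source is shown below; the connection URLs, table names, and credentials are hypothetical, and the matching JDBC drivers must be on the classpath.

```scala
// Sketch of copying a table from one relational database to another via JDBC.
// URLs, tables, and credentials are hypothetical.
import org.apache.spark.sql.SparkSession

object JdbcTransfer {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("jdbc-transfer").getOrCreate()

    val orders = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://source-db:5432/shop")  // hypothetical source
      .option("dbtable", "public.orders")
      .option("user", "reader")
      .option("password", sys.env.getOrElse("SRC_DB_PASSWORD", ""))
      .load()

    orders.write
      .format("jdbc")
      .option("url", "jdbc:mysql://lake-db:3306/lake")          // hypothetical target
      .option("dbtable", "orders")
      .option("user", "writer")
      .option("password", sys.env.getOrElse("DST_DB_PASSWORD", ""))
      .mode("append")
      .save()

    spark.stop()
  }
}
```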

For how long have I used the solution?

I have been using Apache Spark for approximately 18 months.

What do I think about the stability of the solution?

Apache Spark is stable.

What do I think about the scalability of the solution?

We are using Apache Spark across multiple nodes and it is scalable.

We have approximately five people using this solution.

How are customer service and support?

The technical support from Apache Spark is very good.

What other advice do I have?

I rate Apache Spark an eight out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user746943 - PeerSpot reviewer
Big Data and Cloud Solution Consultant at a financial services firm with 10,001+ employees
Real User
Provides flexibility for application creation with less coding effort
Pros and Cons
  • "DataFrame: Spark SQL gives the leverage to create applications more easily and with less coding effort."
  • "Dynamic DataFrame options are not yet available."

What is most valuable?

DataFrames: Spark SQL gives you the leverage to create applications more easily and with less coding effort.
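
A brief sketch of that style is shown below: register a DataFrame as a temporary view and express the logic in one SQL statement instead of hand-written RDD code; the paths and columns are hypothetical.

```scala
// Sketch of the DataFrame/Spark SQL style: load, register a view, query with SQL.
// Paths and column names are hypothetical.
import org.apache.spark.sql.SparkSession

object SqlOnDataFrames {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sql-on-dataframes").getOrCreate()

    val events = spark.read.json("hdfs:///raw/events")          // hypothetical path
    events.createOrReplaceTempView("events")

    // One SQL statement instead of hand-written RDD transformations.
    val daily = spark.sql(
      """SELECT to_date(eventTime) AS day, count(*) AS events
        |FROM events
        |GROUP BY to_date(eventTime)
        |ORDER BY day""".stripMargin)

    daily.write.mode("overwrite").parquet("hdfs:///l1/daily_event_counts")  // hypothetical
    spark.stop()
  }
}
```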

How has it helped my organization?

We developed a tool for data ingestion from HDFS through the Raw and L1 layers, with data quality checks, loading data into Elasticsearch, and performing CDC (change data capture).

What needs improvement?

Dynamic DataFrame options are not yet available.

For how long have I used the solution?

One and a half years.

What do I think about the stability of the solution?

No.

What do I think about the scalability of the solution?

No.

What other advice do I have?

Spark gives you the flexibility to develop custom applications.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user371325 - PeerSpot reviewer
Data Scientist at a tech vendor with 10,001+ employees
Vendor
It allows the loading and investigation of very large data sets, has MLlib for machine learning, Spark Streaming, and both the new and old DataFrame APIs.

What is most valuable?

It allows the loading and investigation of very large data sets, and it has MLlib for machine learning, Spark Streaming, and both the new and old DataFrame APIs.

How has it helped my organization?

We're able to perform data discovery on large datasets without too much difficulty.

What needs improvement?

It needs better documentation as well as examples for all the Spark libraries. That would be very helpful in maximizing its capabilities and results.

For how long have I used the solution?

I've used it for over nine months now.

What was my experience with deployment of the solution?

I haven't encountered any issues with deployment.

What do I think about the stability of the solution?

There have been no stability issues.

What do I think about the scalability of the solution?

I haven't had any scalability issues. It scales better than Python and R.

How are customer service and technical support?

Customer Service:

I haven't had to use customer service.

Technical Support:

I haven't had to use technical support.

Which solution did I use previously and why did I switch?

I previously used Python and R, but neither of these scaled particularly well.

How was the initial setup?

The initial setup was complex. It was not easy getting the correct version and dependencies set up.

What about the implementation team?

I implemented it in-house on my own!

What was our ROI?

It's open-source, so ROI is inapplicable.

What other advice do I have?

Learn Scala, as this will greatly reduce the pain of getting started with Spark.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user365301 - PeerSpot reviewer
Software Developer (Product Engineering) at a computer software company with 501-1,000 employees
Vendor
We have been using Spark to do a lot of batch and stream processing of inbound data from Apache Kafka. Scaling Spark on YARN is still an issue but we are getting acceptable performance.

Valuable Features:

Spark Streaming, Spark SQL, and MLlib, in that order.

Improvements to My Organization:

We have been using Spark to do a lot of batch and stream processing of inbound data from Apache Kafka. Scaling Spark on YARN is still an issue, but we are getting acceptable performance.

Room for Improvement:

As I said, scalability is still an issue, and so is stability. Spark on YARN still doesn't seem to have a programmatic submission API, so we have to rely on the spark-submit script to run jobs on YARN. The Scala and Java APIs also have performance differences, which sometimes requires coding in Scala.
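
For what it's worth, newer Spark releases ship org.apache.spark.launcher.SparkLauncher (in the spark-launcher module), which wraps spark-submit so jobs can be started from application code; a hedged sketch follows, where the jar path, main class, and configuration values are hypothetical.

```scala
// Sketch of submitting a job to YARN programmatically via SparkLauncher,
// which wraps spark-submit. Jar path, main class, and settings are hypothetical.
import org.apache.spark.launcher.SparkLauncher

object SubmitToYarn {
  def main(args: Array[String]): Unit = {
    val process = new SparkLauncher()
      .setAppResource("/opt/jobs/my-streaming-job.jar")   // hypothetical jar
      .setMainClass("com.example.StreamingJob")           // hypothetical main class
      .setMaster("yarn")
      .setDeployMode("cluster")
      .setConf("spark.executor.memory", "4g")
      .launch()                                           // returns a java.lang.Process

    process.waitFor()
  }
}
```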

Other Advice:

Have Scala developers on hand. Basic Java competency will not be enough during optimization rounds.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user