Software Architect at Akbank
Real User
Provides fast aggregations, AI libraries, and a lot of connectors
Pros and Cons
  • "AI libraries are the most valuable. They provide extensibility and usability. Spark has a lot of connectors, which is a very important and useful feature for AI. You need to connect a lot of points for AI, and you have to get data from those systems. Connectors are very wide in Spark. With a Spark cluster, you can get fast results, especially for AI."
  • "Stream processing needs to be developed more in Spark. I have used Flink previously. Flink is better than Spark at stream processing."

What is our primary use case?

We just finished a central front project called MFY for our in-house fraud team. In this project, we are using Spark along with Cloudera. In front of Spark, we are using Couchbase. 

Spark is mainly used for aggregations and AI (for future usage). It gathers data from Couchbase and does the calculations. We are not actively using Spark's AI libraries at this time, but we are going to use them.
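As a rough sketch of the aggregation side, the snippet below reads transaction documents into Spark and computes per-customer aggregates. The connector format string, option names, bucket name, and field names are assumptions for illustration, not the project's actual configuration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fraud-aggregations").getOrCreate()

# Read transactions through the Couchbase Spark connector.
# The "couchbase.query" format and the bucket name are assumptions here;
# check the connector version you deploy for the exact spelling.
transactions = (
    spark.read.format("couchbase.query")
    .option("bucket", "transactions")  # hypothetical bucket name
    .load()
)

# Aggregate per customer and channel, e.g. counts and totals that the
# fraud rules can later be evaluated against.
aggregates = (
    transactions.groupBy("customerId", "channel")
    .agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("txn_amount"),
    )
)

aggregates.show()
```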

This project is for classifying transactions and finding suspicious activities, especially those coming from internet channels such as internet banking and mobile banking. It tries to detect suspicious activities and executes rules developed by our business team. An example of a rule is: if the transaction count or transaction amount is greater than 10 million Turkish Liras and the user's device is new, then raise an exception. The system sends an SMS to the user, and the user can choose whether or not to continue with the transaction.
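A minimal sketch of how such a rule could be expressed in PySpark is shown below. The column names, the new-device field, and the SMS stub are all hypothetical; the real rules are authored by the business team, and the SMS challenge is handled outside Spark.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fraud-rules").getOrCreate()

# Hypothetical schema for illustration; the real one comes from the project.
txns = spark.createDataFrame(
    [("c1", 12_000_000.0, True), ("c2", 500.0, False)],
    ["customerId", "amount", "is_new_device"],
)

# Rule: amount above 10 million TRY from a new device -> flag the transaction.
flagged = txns.filter((F.col("amount") > 10_000_000) & F.col("is_new_device"))

# Each flagged customer would then receive an SMS challenge to confirm or
# cancel the transaction; printing stands in for that notification step.
for row in flagged.collect():
    print(f"challenge customer {row['customerId']} via SMS")
```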

How has it helped my organization?

Aggregations have been very fast in our project since we started to use Spark. We can return results in around 300 milliseconds. Before using Spark, the time was around 700 milliseconds.

Before using Spark, we only used Couchbase. We needed fast results for this project because transactions come from various channels, and we need to evaluate and resolve them as quickly as possible while users are performing the transaction. If our process takes longer, users might stop or cancel their transactions, which means losing money. Therefore, fast response time is very important for us.

What is most valuable?

AI libraries are the most valuable. They provide extensibility and usability. Spark has a lot of connectors, which is a very important and useful feature for AI. You need to connect a lot of points for AI, and you have to get data from those systems. Connectors are very wide in Spark. With a Spark cluster, you can get fast results, especially for AI. 

What needs improvement?

Stream processing needs to be developed more in Spark. I have used Flink previously. Flink is better than Spark at stream processing.


For how long have I used the solution?

I am a Java developer. I have been interested in Spark for around five years. We have been actively using it in our organization for almost a year.

What do I think about the stability of the solution?

It is the most stable platform. Compared to Flink, Spark is good, especially in terms of clusters and architecture. My colleagues who set up these clusters say that Spark is the easiest.

What do I think about the scalability of the solution?

It is scalable, but we don't have the need to scale it. 

It is mainly used for reporting big data in our organization. All teams, especially the VR team, are using Spark for job execution and remote execution. I can say that 70% of users use Spark for reporting, calculations, and real-time operations. We are a very big company, and we have around a thousand people in IT.

We will continue using it and develop it further. We have only just started; we finished this project just three months ago. We are now trying to find the bottlenecks in our systems, and then we are ready to go.

How are customer service and support?

We have not used Apache support. We have only used Cloudera support for this project, and they helped us a lot during the development cycle of this project. 

How was the initial setup?

I don't have any idea about it. We are a big company, and we have another group for setting up Spark.

What other advice do I have?

I would advise planning well before implementing this solution. In enterprise corporations like ours, there are a lot of policies. You should first find out your needs, and after that, you or your team should set it up based on your needs. If your needs change during development because of the business requirements, it will be very difficult. 

If you are clear about your needs, it is easier to set it up. Once you know how Spark will be used in your project, you have to define firewall rules and cluster needs. When you set up Spark, it should be ready for people's usage, especially for remote job execution.

I would rate Apache Spark a nine out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
reviewer2534727 - PeerSpot reviewer
Manager Data Analytics at a consultancy with 10,001+ employees
Real User
A flexible solution with real-time processing capabilities
Pros and Cons
  • "I like Apache Spark's flexibility the most. Before, we had one server that would choke up. With the solution, we can easily add more nodes when needed. The machine learning models are also really helpful. We use them to predict energy theft and find infrastructure problems."
  • "For improvement, I think the tool could make things easier for people who aren't very technical. There's a significant learning curve, and I've seen organizations give up because of it. Making it quicker or easier for non-technical people would be beneficial."

What is our primary use case?

We use the solution to extract data from our sensors. We have lots of data streaming into our system, which used to get overwhelmed. We use Apache Spark to handle real-time streaming and do machine learning to predict supply and demand in the market and adjust operations.

What is most valuable?

I like Apache Spark's flexibility the most. Before, we had one server that would choke up. With the solution, we can easily add more nodes when needed. The machine learning models are also really helpful. We use them to predict energy theft and find infrastructure problems.

The tool's real-time processing has had a big impact. We used to get data from sensors after a month. Now we get it in less than 10 minutes, which helps us take quick action.

We use Apache Spark to map our data pipelines using MapReduce technology. We're also working on integrating tools like Hive with Apache Spark to distribute our data processing. We can also integrate other tools like Apache Kafka and Hadoop.
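A minimal Structured Streaming sketch of the kind of pipeline described, reading the sensor feed from Kafka and landing it as files that tools such as Hive can query, is shown below. The broker address, topic name, and paths are placeholders, not the actual deployment.

```python
from pyspark.sql import SparkSession, functions as F

# Requires the spark-sql-kafka connector package on the classpath.
spark = SparkSession.builder.appName("sensor-stream").getOrCreate()

# Read the sensor feed from Kafka; broker and topic are placeholders.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sensor-readings")
    .load()
)

# Kafka delivers the payload as bytes; cast it to a string here (a real job
# would parse JSON/Avro into typed columns).
readings = raw.select(F.col("value").cast("string").alias("reading"))

# Write micro-batches to a Parquet location that Hive or reporting tools can
# query; the checkpoint location is required for streaming jobs.
query = (
    readings.writeStream.format("parquet")
    .option("path", "/data/sensor_readings")            # hypothetical path
    .option("checkpointLocation", "/data/checkpoints")  # hypothetical path
    .trigger(processingTime="1 minute")
    .start()
)
```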

We faced some challenges when integrating the solution into our existing system, but good documentation helped solve them.

What needs improvement?

For improvement, I think the tool could make things easier for people who aren't very technical. There's a significant learning curve, and I've seen organizations give up because of it. Making it quicker or easier for non-technical people would be beneficial.

For how long have I used the solution?

I have been working with the product for five years. 

What do I think about the stability of the solution?

Apache Spark is stable. 

What do I think about the scalability of the solution?

We're a big company with about 4 million consumers. We handle huge amounts of data—around 30,000 sensors send data every 15 minutes, which adds up to 5-10 terabytes per day.

Which solution did I use previously and why did I switch?

Before Apache Spark, we had a different solution - a traditional system with one server handling everything, more like a data warehouse. We switched to Apache Spark because we needed real-time visibility in our operations.

How was the initial setup?

The initial setup process was challenging. We tried to do it ourselves at first, but we weren't used to distributed computing systems, creating nodes, and distributing data. Later, we engaged consulting groups that specialized in it. This is why there's a specific learning curve—it would be challenging for a company to start alone.

The initial deployment took us about six to eight months. We started with three people involved in the deployment process and later increased to five. From a maintenance point of view, it's pretty smooth now. It's not difficult to maintain and doesn't require much maintenance.

What was our ROI?

The tool has helped us reduce costs that run into billions of dollars yearly. The ROI is very significant for us.

Which other solutions did I evaluate?

We did evaluate other options. We started by looking at open-source Hadoop deployment, thinking we'd bring data into HDFS and do machine learning separately. But that would have been a hassle, so Apache Spark was a better fit.

What other advice do I have?

I rate the overall solution a seven out of ten. I would recommend Apache Spark to other users, but it depends on their use cases. I advise new users to get an expert involved from the start.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Suresh_Srinivasan - PeerSpot reviewer
Co-Founder at FORMCEPT Technologies
Real User
Top 10
Enables us to process data from different data sources
Pros and Cons
  • "We use Spark to process data from different data sources."
  • "In data analysis, you need to take real-time data from different data sources. You need to process this in a subsecond, do the transformation in a subsecond, and all that."

What is our primary use case?

Our primary use case is interactively processing large volumes of data.

What is most valuable?

We use Spark to process data from different data sources. 

What needs improvement?

In data analysis, you need to take real-time data from different data sources. You need to process this in a subsecond and do the transformation in a subsecond.

For how long have I used the solution?

I have been using Apache Spark for eight to nine years. 

What do I think about the stability of the solution?

It is a stable solution. The solution is ten out of ten on stability. 

What do I think about the scalability of the solution?

The solution is highly scalable. All of the technical guys use Spark. Our product is used by many people within our customers' companies.

How was the initial setup?

The initial setup is straightforward. 

What's my experience with pricing, setup cost, and licensing?

The solution is moderately priced. 

What other advice do I have?

I rate the overall solution a ten out of ten. 

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
CTO at Hammerknife
Real User
Top 5
Provides a valuable implementation of distributed data processing with a simple setup process
Pros and Cons
  • "Apache Spark provides a very high-quality implementation of distributed data processing."
  • "There were some problems related to the product's compatibility with a few Python libraries."

What is our primary use case?

We use the product for real-time data analysis.

What is most valuable?

Apache Spark provides a very high-quality implementation of distributed data processing. On a scale of one to ten, I would rate it a twenty.

What needs improvement?

There were some problems related to the product's compatibility with a few Python libraries. But I suppose they are fixed.

For how long have I used the solution?

We have been using Apache Spark for the last two to three years.

What do I think about the stability of the solution?

I rate the product's stability a ten out of ten.

What do I think about the scalability of the solution?

The product is enormously scalable.

How was the initial setup?

The initial setup process is simple if you are a good professional. You have to select a few parameters and press enter. It is already integrated into the Databricks platform. One person is enough to manage small and medium implementations.

What's my experience with pricing, setup cost, and licensing?

It is an open-source platform. We do not pay for its subscription.

Which other solutions did I evaluate?

We are evaluating a few analytics engineering solutions, such as dbt. For now, Spark is in the secondary position.

What other advice do I have?

I recommend Apache Spark for batch analytics features.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
PLC Programmer at Alzero
Real User
Top 20
Highly-recommended robust solution for data processing
Pros and Cons
  • "I appreciate everything about the solution, not just one or two specific features. The solution is highly stable. I rate it a perfect ten. The solution is highly scalable. I rate it a perfect ten. The initial setup was straightforward. I recommend using the solution. Overall, I rate the solution a perfect ten."
  • "The solution’s integration with other platforms should be improved."

What is our primary use case?

We are a software solutions company that serves a variety of industries, including banking, insurance, and industrial sectors. The product is specifically employed for managing data platforms for our customers.


What is most valuable?

The solution, as a package, excels across the board. I appreciate everything, not just one or two specific features.


What needs improvement?

The solution’s integration with other platforms should be improved.


For how long have I used the solution?

I have been using the solution for the past eight years. Currently, I’m using the latest version of the solution.


What do I think about the stability of the solution?

The solution is highly stable. I rate it a perfect ten.


What do I think about the scalability of the solution?

The solution is highly scalable. I rate it a perfect ten.


How was the initial setup?

The initial setup was straightforward and was conducted on the cloud. The entire deployment process took just 15 minutes. The deployment involves provisioning the compute resources using Terraform.


What's my experience with pricing, setup cost, and licensing?

The solution is affordable and there are no additional licensing costs.


What other advice do I have?

I recommend using the solution. Overall, I rate the solution a perfect ten.


Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
PeerSpot user
reviewer2150616 - PeerSpot reviewer
Lead Data Scientist at a transportation company with 51-200 employees
Real User
Top 5
Offers user-friendliness, clarity and flexibility
Pros and Cons
  • "The product's initial setup phase was easy."
  • "From my perspective, the only thing that needs improvement is the interface, as it was not easily understandable."

What needs improvement?

The only issue I faced with the tool was that I had to choose the compute size to support parallel processing, and scaling should work more horizontally. The tool should be more scalable, not in terms of increasing the CPU of a single machine, but in terms of units: if two units are not enough, a third or fourth unit should be able to come into the picture.

From my perspective, the only thing that needs improvement is the interface, as it was not easily understandable. Sometimes I get an error saying it is an RDD-related error, and it becomes difficult to understand where it went wrong. When I deal with datasets using the Pandas library in Python, I can apply a function to each column and get a transformation of that column. When I try to do the same thing with Apache Spark, it works, but it is not as straightforward; I need to approach it a little differently, and even then it sometimes throws an error saying that it is looping back on itself, which I never got with Pandas.
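To illustrate the difference being described, here is a small sketch of the same column transformation expressed first in Pandas and then in PySpark. The column name and the scaling function are made up for the example.

```python
import pandas as pd
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("column-transforms").getOrCreate()

pdf = pd.DataFrame({"reading": [1.0, 2.0, 3.0]})

# Pandas: apply a function directly to a column, row by row.
pdf["reading_scaled"] = pdf["reading"].apply(lambda x: x * 10)

# PySpark: the same transformation is expressed over the whole column as an
# expression, which is what lets Spark distribute it across the cluster.
sdf = spark.createDataFrame(pdf[["reading"]])
sdf = sdf.withColumn("reading_scaled", F.col("reading") * 10)
sdf.show()
```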

In future updates, the tool should be made more user-friendly. I want to run fifty parallel processes rather than one, and I want to pick particular columns to split the data by partition, so if the tool offers that kind of clarity and flexibility, that will be good.
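For reference, the partition control that exists today looks roughly like the sketch below; the column name and the partition count of fifty follow the wish above and are otherwise arbitrary.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning").getOrCreate()

# Toy DataFrame standing in for the real table.
df = spark.range(1_000_000).withColumnRenamed("id", "unit_id")

# Ask for 50 partitions split by a chosen column, so up to 50 tasks can run
# in parallel on the cluster.
df = df.repartition(50, "unit_id")

print(df.rdd.getNumPartitions())  # -> 50
```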

For how long have I used the solution?

I have been using Apache Spark for four years.

What do I think about the stability of the solution?

Stability-wise, I rate the solution a nine out of ten. The only issues with the tool revolve around user interaction and user flexibility.

What do I think about the scalability of the solution?

It is a scalable solution. Scalability-wise, I rate the solution an eight out of ten.

Around five people in my company use the tool.

How are customer service and support?

The solution's technical support is helpful, but some of the problems I faced were generic issues. For generic issues, I get answers mainly from forums where the problem has already been resolved; for non-generic issues, I get help from the tool's team. When it comes to an unknown or very specific problem with my work, the support takes time. I rate the technical support a seven out of ten.

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

I only work with Apache Spark.

How was the initial setup?

The product's initial setup phase was easy.

I managed the product's installation phase, both locally and on the cloud.

The solution is deployed on-premises.

The solution can be deployed in two to three hours.

What was our ROI?

Apache Spark has helped save 50 percent of the operational costs. The time spent was reduced with the use of the tool, though the compute costs increased. Overall, I can see that the tool's use has led to a 50 percent reduction in costs.

What's my experience with pricing, setup cost, and licensing?

I did not pay anything when using the tool on cloud services, but I had to pay on the compute side. The tool is not expensive compared with the benefits it offers. I rate the price as an eight out of ten.

Which other solutions did I evaluate?

Previously, I was more of a Python full-stack developer, and I was happy dealing with PySpark libraries, which gave me an edge in continuing the work with Apache Spark.

What other advice do I have?

Speaking about Apache Spark's use in our company's data processing workflows: when we deal with large datasets without Spark, working with a data frame containing one year of data used to take me 45 minutes to an hour, and sometimes I would get out-of-memory errors. Such issues were avoided the moment I started using Apache Spark, as I was able to get the whole processing done in less than five minutes, and there were no memory issues.

For big data processing, the tool's parallel processing and the time it saves have been helpful. When I apply a function, I can write the code once and Spark applies it in parallel. Basically, I used Apache Spark to forecast multiple units at the same time; without Apache Spark, I would be doing that one by one, a serial process that used to take me around five hours. With Apache Spark, the computation happens in parallel, and these computations are cut down by at least 90 percent. It helps me significantly reduce the time needed for operations.
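A minimal sketch of that fan-out pattern, forecasting each unit independently with PySpark's groupBy plus applyInPandas, is shown below. The naive mean "model", the column names, and the toy data are placeholders; the actual forecasting logic is whatever model the team uses.

```python
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parallel-forecast").getOrCreate()

# Toy history per unit; the real table holds a year of readings per unit.
history = spark.createDataFrame(
    [("unit_a", 1, 10.0), ("unit_a", 2, 12.0),
     ("unit_b", 1, 7.0), ("unit_b", 2, 9.0)],
    ["unit_id", "period", "demand"],
)

def forecast(pdf: pd.DataFrame) -> pd.DataFrame:
    # Placeholder model: a naive mean forecast. A real job would fit a proper
    # time-series model here; the point is that each unit runs independently.
    return pd.DataFrame({"unit_id": [pdf["unit_id"].iloc[0]],
                         "forecast": [pdf["demand"].mean()]})

# groupBy + applyInPandas fans the per-unit forecasts out across the cluster,
# replacing the serial, unit-by-unit runs described above.
result = history.groupBy("unit_id").applyInPandas(
    forecast, schema="unit_id string, forecast double"
)
result.show()
```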

The tool's real-time processing is an area that I have not tried to use much. When it comes to real-time processing of my data, I use Kafka.

I am handling data governance using Databricks Unity Catalog.

When I apply an ML model, I am unable to run it on a table partitioned by a particular column; I have to get the job done with a reduced number of partitions. If I go with five partitions, I get at least three to four times the benefit in less time.

Regular maintenance exists, but it is not as if I have to apply a patch week by week or anything like that. The maintenance is mostly done every six months to a year.

I take care of the tool's maintenance.

I recommend the tool to others.

I rate the tool an eight out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Lokesh Jayanna - PeerSpot reviewer
Vice President at Goldman Sachs at a computer software company with 10,001+ employees
Real User
Top 5
Stable product with a valuable SQL tool
Pros and Cons
  • "The product’s most valuable feature is the SQL tool. It enables us to create a database and publish it."
  • "At the initial stage, the product provides no container logs to check the activity."

What is our primary use case?

We use the product for extensive data analysis. It helps us analyze a huge amount of data and transfer it to data scientists in our organization.

What is most valuable?

The product’s most valuable feature is the SQL tool. It enables us to create a database and publish it. It is a useful feature for us.

What needs improvement?

At the initial stage, the product provides no container logs to check the activity. It remains inactive for a long time without giving us any information. The containers should start quickly, similar to Jupyter Notebook.

For how long have I used the solution?

We have been using Apache Spark for eight months to one year.

What do I think about the stability of the solution?

It is a stable product. I rate its stability an eight out of ten.

What do I think about the scalability of the solution?

We have 45 Apache Spark users. I rate its scalability a nine out of ten.

How was the initial setup?

The complexity of the initial setup depends on the kind of environment an organization is working with. It requires one executive for deployment. I rate the process an eight out of ten.

What's my experience with pricing, setup cost, and licensing?

The product is expensive, considering the setup. However, from a standalone perspective, it is inexpensive.

What other advice do I have?

I advise others to analyze data and understand your business requirements before purchasing the product. I rate it an eight out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Jagannadha Rao - PeerSpot reviewer
Lead Data Scientist at International School of Engineering
Real User
Top 10
A flexible solution that can be used for storage and processing
Pros and Cons
  • "The most valuable feature of Apache Spark is its flexibility."
  • "Apache Spark's GUI and scalability could be improved."

What is our primary use case?

We use Apache Spark for storage and processing.

What is most valuable?

The most valuable feature of Apache Spark is its flexibility.

What needs improvement?

Apache Spark's GUI and scalability could be improved.

For how long have I used the solution?

I have been using Apache Spark for four to five years.

What do I think about the scalability of the solution?

Around 15 data scientists are using Apache Spark in our organization.

How was the initial setup?

Apache Spark's initial setup is slightly complex compared to other solutions. Data scientists could install our previous tools with minimal supervision, whereas Apache Spark requires some IT support. Apache Spark's installation is a time-consuming process because it requires ensuring that all the ports are opened and configured properly, following certain guidelines.

What about the implementation team?

While installing Apache Spark, I must look at the documentation and be very specific about the configuration settings. Only then am I able to install it.
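As an illustration of the kind of settings involved, the sketch below pins a few of Spark's network-related properties so they can be matched against firewall rules. The property names are real Spark settings, but the port values are arbitrary assumptions, not recommended defaults.

```python
from pyspark.sql import SparkSession

# Pin the network ports the driver and block manager use so that firewall
# rules can be defined against fixed values (port numbers are illustrative).
spark = (
    SparkSession.builder.appName("configured-session")
    .config("spark.driver.port", "7078")
    .config("spark.blockManager.port", "7079")
    .config("spark.port.maxRetries", "4")
    .getOrCreate()
)

print(spark.sparkContext.getConf().get("spark.driver.port"))
```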

What's my experience with pricing, setup cost, and licensing?

Apache Spark is an expensive solution.

What other advice do I have?

I would recommend Apache Spark to other users.

Overall, I rate Apache Spark an eight out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user