There are several valuable features.
- Interactive data access (low latency)
- Batch ETL-style processing
- Schema-free data models
- Built-in machine learning algorithms
We have seen a 1000x performance improvement over other techniques. It has enabled interactive, self-service access to data.
Better integration with BI tools would be a much-appreciated improvement.
I've used it for about 14 months.
I haven't had any issues with deployment.
It's been stable for us.
It's scaled without issue.
Customer service is excellent.
Technical support is excellent.
Yes, we previously used Oracle, from which we ported our data.
The initial setup was simple.
We implemented it with our in-house team.
Be sure to use the Apache versions and avoid vendor-specific extensions.
We use Apache Spark to prepare data for transformation and encryption, depending on the columns. We use AES-256 encryption. We're building a proof of concept at the moment. We prepare patches for Spark on Kubernetes, on-premises and on Google Cloud Platform.
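As a rough illustration of the column-level approach, here is a minimal PySpark sketch, assuming the `cryptography` package is available on the executors; the key handling, column name, and paths are hypothetical:

```python
# Sketch: encrypt selected columns with AES-256 (GCM mode) via a PySpark UDF.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("column-encryption-poc").getOrCreate()

key = os.urandom(32)  # 256-bit key; in practice, fetch it from a KMS

def encrypt_value(value):
    if value is None:
        return None
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, value.encode("utf-8"), None)
    return (nonce + ciphertext).hex()

encrypt_udf = udf(encrypt_value, StringType())

df = spark.read.parquet("/data/input")              # hypothetical input path
df = df.withColumn("ssn", encrypt_udf(col("ssn")))  # encrypt only the sensitive columns
df.write.parquet("/data/encrypted")
```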
I like that it can handle multiple tasks in parallel. I also like the automation feature. JavaScript also helps with the library's parallel streaming.
The logging for the observability platform could be better.
I have known about this technology for a long time, maybe about three years.
Because my area is data analytics and analytics solutions, I use BigQuery, SQL, and ETL. I also use Dataproc and Dataflow.
We sometimes use an integrator, but we recently put together a team to support the infrastructure requirements, because the proof of concept is self-administered.
I would recommend Apache Spark to new users, but it depends on the use case. Sometimes, it's not the best solution.
On a scale from one to ten, I would give Apache Spark a ten.
We use this solution for information gathering and processing.
I use it myself when I am developing on my laptop.
I am currently using an on-premises deployment model. However, in a few weeks, I will be using the EMR version on the cloud.
The most valuable feature of this solution is its capacity for processing large amounts of data.
This solution makes it easy to do a lot of things. It's easy to read data, process it, save it, etc.
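For what it's worth, the typical read/process/save flow really is only a few lines; a minimal sketch, with illustrative paths and columns:

```python
# Sketch: read data, process it, save it.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("read-process-save").getOrCreate()

df = spark.read.csv("/data/input.csv", header=True, inferSchema=True)  # read
result = df.groupBy("category").count().filter(col("count") > 10)      # process
result.write.mode("overwrite").parquet("/data/output")                 # save
```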
When you first start using this solution, it is common to run into memory errors when you are dealing with large amounts of data. Once you are experienced, it is easier and more stable.
When you are trying to do something outside of the normal requirements in a typical project, it is difficult to find somebody with experience.
This solution is difficult for beginners, who often experience out-of-memory errors when dealing with large amounts of data.
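The out-of-memory errors mentioned above can often be mitigated with a few standard settings; a minimal sketch, with illustrative values:

```python
# Sketch: memory-related settings that commonly help with large datasets.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("memory-tuning")
    .config("spark.executor.memory", "8g")           # heap per executor
    .config("spark.executor.memoryOverhead", "2g")   # off-heap overhead per executor
    .config("spark.sql.shuffle.partitions", "400")   # more, smaller shuffle partitions
    .getOrCreate()
)
```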
I have not been in contact with technical support. I find all of the answers that I need in the forums.
The work that we are doing with this solution is quite common and is very easy to do.
My advice for anybody who is implementing this solution is to look at their needs and then look at the community. Normally, there are a lot of people who have already done what you need. So, even without experience, it is quite simple to do a lot of things.
I would rate this solution a nine out of ten.
ETL and streaming capabilities.
It has made Big Data processing more convenient, and a uniform framework adds efficiency, since the same framework can be used for both batch and stream processing.
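To illustrate the uniform framework, here is a minimal sketch in which the same transformation runs in both batch and streaming mode; the paths and schema are hypothetical:

```python
# Sketch: one transformation function shared by batch and streaming jobs.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("unified-batch-stream").getOrCreate()

def clean(df: DataFrame) -> DataFrame:
    # Identical business logic for both modes.
    return df.filter(col("amount") > 0).select("user_id", "amount")

# Batch: apply the logic to a static source.
clean(spark.read.json("/data/events")).write.parquet("/data/clean")

# Streaming: apply the same function to a streaming source.
schema = "user_id STRING, amount DOUBLE"
(clean(spark.readStream.schema(schema).json("/data/incoming"))
    .writeStream
    .format("parquet")
    .option("path", "/data/clean-stream")
    .option("checkpointLocation", "/data/checkpoints")
    .start())
```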
Stability in terms of the API (things were difficult when transitioning from RDDs to DataFrames, and then to Datasets).
I have used Spark since March 2015, starting with Spark 1.1.
Currently, I use 2.2 extensively.
Yes, occasionally with different APIs.
No.
Since we were using the open-source version of Apache Spark, without Databricks support, we never used technical support from Databricks.
Yes, we used Hive, Pig, and Storm. Having everything in the same framework has helped us out a lot.
Yes, we considered other products in the Big Data ecosystem.
Go for it.
Spark Streaming, which allows you to construct event-driven information systems and respond to events in near-real time.
Apache Spark's ability to perform batch processing at intervals of one second or less is the most transformative capability for any data processing application. The ingested data can also be validated and verified for quality early in the data pipeline.
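A minimal Structured Streaming sketch of a one-second micro-batch trigger with early validation; the socket source and the filter condition are illustrative:

```python
# Sketch: micro-batches at one-second intervals with an early quality filter.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("micro-batch-validation").getOrCreate()

events = (
    spark.readStream
    .format("socket")                # illustrative source
    .option("host", "localhost")
    .option("port", 9999)
    .load()
)

# Validate early in the pipeline: drop empty or malformed records.
valid = events.filter(col("value").isNotNull() & (col("value") != ""))

query = (
    valid.writeStream
    .outputMode("append")
    .format("console")
    .trigger(processingTime="1 second")  # one-second micro-batches
    .start()
)
query.awaitTermination()
```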
Apache Spark as a data processing engine has come a long way since its inception. Although you can perform complex transformations using Spark libraries, support for performing transformations in SQL is still limited. You can alleviate some of these limitations by running Spark within the Hadoop ecosystem and leveraging the fairly evolved HiveQL.
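For example, enabling Hive support is a one-line change that makes HiveQL and the Hive metastore available alongside Spark SQL; the table and columns below are hypothetical:

```python
# Sketch: enable Hive support so HiveQL features are available.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-integration")
    .enableHiveSupport()
    .getOrCreate()
)

# Queries now run against tables registered in the Hive metastore.
spark.sql("SELECT customer_id, SUM(amount) FROM sales GROUP BY customer_id").show()
```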
I've used it for 16 months.
Enterprise-scale deployment of Apache Spark is somewhat involved if you want to realize its full potential for stability, scalability, and security. However, some Hadoop vendors, like Cloudera, have integrated the Spark data processing engine into their Hadoop platforms and made it easier to deploy, scale, and secure.
This is open-source technology and is dependent on community support. The Apache Spark community is vibrant, and it is easy to find answers to questions. Enterprises can also get commercial support from Hadoop vendors such as Cloudera. I recommend that enterprises inspect a Hadoop vendor's commitment to open source, as well as its ability to curate Apache Spark technology into the rest of the ecosystem, before signing up for commercial support or a subscription.
The initial setup is straightforward as long as you have picked the right Hadoop distribution.
I recommend engaging an experienced Hadoop vendor during the planning and initial implementation phases of the project. You will be able to avoid any potential pitfalls or reduce overall project time by having a Hadoop expert guiding you during the initial stages of the project.
I evaluated some other technologies such as Samza but community backing for Apache Spark stood out.
I also suggest having a Chief Technologist who has extensive experience architecting Big Data solutions. They should be able to communicate in business as well as technology language, and their expertise should range from infrastructure to application development, with a command of Hadoop technologies.
Streaming telematics data.
It's a better MapReduce: it supports streaming and micro-batch, as well as Spark ML and Spark SQL.
It supports streaming and micro-batch.
Better data lineage support.
Faster time to parse and compute data. It makes web-based queries for plotting data easier.
It needs to be simpler to use the machine learning algorithms supported by Octave (for example, polynomial regression and polynomial interpolation).
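As a point of comparison, polynomial regression in Spark ML today has to be composed by hand from feature transformers; a minimal sketch with toy data:

```python
# Sketch: polynomial regression via PolynomialExpansion + LinearRegression.
from pyspark.ml import Pipeline
from pyspark.ml.feature import PolynomialExpansion, VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("poly-regression").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 2.1), (2.0, 4.3), (3.0, 9.2), (4.0, 16.4)], ["x", "label"]
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["x"], outputCol="raw"),
    PolynomialExpansion(degree=2, inputCol="raw", outputCol="features"),
    LinearRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(df)
model.transform(df).select("x", "label", "prediction").show()
```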
I've been using it for one year.
There have been no issues with the deployment.
There have been no issues with the stability.
There have been no issues with the scalability.
We still rely on user forums for our answers. We do not use a commercial product yet.
The initial set-up was easy. I have not explored using this on AWS clusters.
We did an in-house implementation and development for our regression tool.
The ROI will be an in-house product that does machine learning analytics on data obtained from customers.
We did not evaluate any other products.
It's easy to use, though it has a learning curve.
Our use case for Apache Spark was a retail price prediction project. We were using retail pricing data to build predictive models. To start, the prices were analyzed and we created the dataset to be visualized in Tableau. We then used the visualization tool to create dashboards and graphical reports to showcase the predictive modeling data.
Apache Spark was used to host this entire project.
The processing time is very much improved over the data warehouse solution that we were using.
The most valuable features are the storage engine, the memory engine, and the processing engine.
I would like to see integration with data science platforms to optimize the processing capability for these tasks.
I have been using Apache Spark for the past year.
We have not been in contact with technical support.
The initial setup is straightforward. It took us around one week to set it up, and then the requirements and creation of the project flow and design needed to be done. The design stage took three to four weeks, so in total, it required between four and five weeks to set up.
I would rate this solution an eight out of ten.