In terms of improvement, the only thing that could be enhanced is the stability of Spark SQL. The current solution for working with databases seems effective, but I haven't worked extensively with all of its components, so there may be untapped features that could add further value.
I'm using DBeaver to connect Spark with external tools. I've experienced some incompatibilities when using the Delta Lake format. It is compatible when you're using Databricks in the cloud, but when I'm using Spark on-premises, there are some incompatibility issues. We expected interactive queries through Dremio to provide better results, but when we issue a query, we see that it runs as a batch process in the background. The documentation is also limited, especially for setting up the Thrift server.
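For readers hitting the same Delta Lake issue on-premises: open-source Spark can usually read and write Delta tables when the delta-spark library and its session extensions are configured. The snippet below is a minimal sketch, assuming Spark 3.x with a matching delta-spark package installed; the table paths are hypothetical.

    from pyspark.sql import SparkSession

    # On-premises SparkSession configured for Delta Lake (assumes the
    # delta-spark package matching your Spark version is on the classpath).
    spark = (
        SparkSession.builder
        .appName("delta-on-prem")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    # Read and write a Delta table on a local or HDFS path (hypothetical paths).
    df = spark.read.format("delta").load("/data/delta/events")
    df.write.format("delta").mode("append").save("/data/delta/events_copy")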
It would be useful if Spark SQL integrated with some data visualization tools. For example, we could integrate Spark SQL with Tableau for data visualization.
Spark SQL could improve its documentation, which can be unclear at times and hard to follow. They could also improve the Spark UI to provide more advanced views of performance and query execution.
It takes a bit of time to get used to this solution compared with pandas, as it has a steep learning curve. You need quite a high level of SQL skill in general to use this solution; if SQL is not someone's primary language, they might find it difficult to get used to. The solution could be improved if there were a bridge between pandas and Spark SQL, such as translating pandas operations to SQL and then working with the queries that are generated. In a future release, it would be useful to have a real-time dashboard versus batch updates to Power BI.
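A partial bridge of this kind already exists in newer Spark releases: the pandas API on Spark (pyspark.pandas, formerly Koalas) lets you write pandas-style operations that are executed as Spark plans. A minimal sketch, assuming Spark 3.2+ and a hypothetical CSV path:

    import pyspark.pandas as ps
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # pandas-style syntax, executed by Spark under the hood
    psdf = ps.read_csv("/data/sales.csv")            # hypothetical input file
    summary = psdf.groupby("region")["amount"].sum()

    # Drop down to a regular Spark DataFrame / SQL when needed
    sdf = psdf.to_spark()
    sdf.createOrReplaceTempView("sales")
    spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region").show()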
It would be beneficial for aggregate functions to include a code block or toolbox that explains the calculations and the conditional statements they support. Multiple functions come within an aggregate, so it is important to understand them. When you are trying to do something new, it would be easier, and quite unique, to get that information within the solution rather than having to search the web. For example, once you select an aggregate, it would tell you what types of functions the solution can perform and include a code block explaining its calculations, or a given conditional statement would offer a second option or explain other types of statements the solution supports as part of a rule-level function.
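Until something like that exists in the product itself, the comments in this illustrative sketch show the kind of inline explanation the reviewer is asking for; the data and column names are made up.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(
        [("east", 10), ("east", 30), ("west", 20)], ["region", "amount"])

    df.groupBy("region").agg(
        F.count("amount").alias("n"),      # number of non-null rows in the group
        F.sum("amount").alias("total"),    # arithmetic sum of the column
        F.avg("amount").alias("mean"),     # total / n
        # conditional aggregate: count only the rows where amount > 15
        F.sum(F.when(F.col("amount") > 15, 1).otherwise(0)).alias("big_orders"),
    ).show()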
There are many inconsistencies in the syntax for different querying tasks, such as selecting columns and joining two tables, so I'd like to see more consistent syntax. The notation should be unified across all tasks within Spark SQL.
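As an illustration of the point, PySpark currently accepts several equivalent notations for the same task; a short sketch with made-up DataFrames:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    orders = spark.createDataFrame([(1, 100)], ["cust_id", "amount"])
    customers = spark.createDataFrame([(1, "Acme")], ["cust_id", "name"])

    # Four ways to select the same column:
    orders.select("amount")
    orders.select(orders.amount)
    orders.select(orders["amount"])
    orders.select(F.col("amount"))

    # Two ways to express the same join:
    orders.join(customers, "cust_id")                                      # by shared column name
    orders.join(customers, orders.cust_id == customers.cust_id, "inner")  # by explicit condition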
Corporate Sales at a financial services firm with 10,001+ employees
Real User
Sep 27, 2020
Being a new user, I have not been able to figure out how to partition data correctly. I probably need more information or knowledge. In other database solutions, you can easily optimize all partitions; I haven't found a quicker way to do that in Spark SQL. It would be good if you didn't need to partition manually and the system automatically partitioned the data in the best way. They could also provide more educational resources for new users.
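For other readers with the same question, this is a minimal sketch of the usual manual approaches plus the automatic option available in Spark 3.x (Adaptive Query Execution); the paths and column names are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.parquet("/data/events")            # hypothetical input

    # Manual: shuffle into 200 partitions keyed by a column
    df = df.repartition(200, "event_date")

    # Manual: partition the output files by a column on disk
    df.write.partitionBy("event_date").parquet("/warehouse/events")

    # Automatic: let Spark 3.x coalesce shuffle partitions at runtime
    spark.conf.set("spark.sql.adaptive.enabled", "true")
    spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")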
I would like to have the ability to process data without the overhead, and to use the same API to process terabytes of data as well as one GB of data.
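The DataFrame API itself is already size-agnostic; most of the overhead on small data comes from parallelism defaults tuned for large clusters. A hedged sketch of how one might reduce that overhead for a roughly 1 GB dataset, with hypothetical paths:

    from pyspark.sql import SparkSession

    # Same DataFrame API for 1 GB or terabytes; only the tuning differs.
    spark = (
        SparkSession.builder
        .config("spark.sql.shuffle.partitions", "8")   # default is 200, overkill for ~1 GB
        .getOrCreate()
    )

    df = spark.read.parquet("/data/small_dataset")     # hypothetical input
    df.groupBy("category").count().coalesce(1).write.parquet("/data/out")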
The service is complex because it combines a lot of technologies. The solution needs to include graphing capabilities; including financial charts would help improve everything overall.
Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided by Spark SQL give Spark more information about the structure of both the data and the computation being performed. There are several ways to interact with Spark SQL, including SQL and the Dataset API. When computing a result, the same execution engine is used, independent of which API/language you are using to express the computation. This unification means that developers...
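The snippet below is a minimal sketch of that unification, using a hypothetical JSON input: the same computation is expressed once in SQL and once through the DataFrame API, and both compile to equivalent optimized plans.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    people = spark.read.json("/data/people.json")      # hypothetical input
    people.createOrReplaceTempView("people")

    via_sql = spark.sql("SELECT name, age + 1 AS next_age FROM people WHERE age > 21")
    via_api = (people.where(F.col("age") > 21)
                     .select("name", (F.col("age") + 1).alias("next_age")))

    via_sql.explain()   # inspect: both queries produce equivalent physical plans
    via_api.explain()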
This solution could be improved by adding monitoring and integration for the EMR.
There should be better integration with other solutions.
Anything to improve the GUI would be helpful. We have experienced a lot of issues, but nothing in the production environment.
In the next release, maybe the visualization of some command-line features could be added.