We work with the latest update.
We use the solution as a database, primarily for the SAP application. Some of our use cases involve CDS Views, which provide quicker processing for reports and the application.
The in-memory database is a good and valuable feature.
Since we use BW, we are required to use the SLT tool to carry the data over for generating the reports. But when it comes to an in-memory database and real-time reporting, I do not see why these reports cannot be made available from the system itself. This would allow for some partitioning of the database, so that there would not be a need for the EMP for the real-time data.
The initial setup was complex. I am talking about how the data is replicated to the site. We had an Oracle database and did replication to the VR site. With HANA, however, we are forced to work out for ourselves the method for ensuring that this replication works as it should. Only at that point does the solution become stable.
Technical support could be better. When we have requested this, the tendency has been to instruct us to implement a note and keep them apprised. In reality, there is no one who helps us with actual troubleshooting of the problem.
The pricing is a bit on the high side.
While I would definitely recommend the solution, I would caution that one should employ the proper resources that are geared towards the system. Unfortunately, SAP does not provide a structured training program, which means a person must rely on multiple system integrators and some service providers.
We have been using SAP HANA for more than three years.
The stability is very good and we have encountered no issues regarding it.
The solution is easy to scale.
We had already been using Oracle when we switched from it to SAP HANA, because all of the innovations were taking place on HANA.
The deployment lasted six months.
We deployed with the help of a vendor.
No real maintenance is required. Data volume management is needed, and since all the reports for it are available, the maintenance is easy.
The use of hardware does not incur additional costs.
There are 40,000-plus users making use of the solution in our organization.
I rate SAP HANA as an eight out of ten.
Provides us with predictive capabilities for asset maintenance and real-time forecasts.
Real-time database, near zero downtime for production business.
Graphical programming without coding.
System recovery in version 1.0 failed due to corrupt log files. Version 2.0 is stable now.
It should offer scalability from terabytes to petabytes/zettabytes/yottabytes for both scale-up and scale-out, with a multi-tenancy approach.
Excellent.
Deploy gradually, from straightforward to complex: on-premises first, and then to a cloud platform.
Set up a consortium of consulting partners and hardware vendors to define your tech landscape TCO (total cost of ownership), and then approach the OEM for pricing (on-premises, on cloud, or a hybrid model).
Check if you can bring your own licenses for some of the existing application licenses on the new platform, to reduce TCO.
The product was the first of its kind for us. However, we later evaluated other products: Oracle Exadata, Exalytics, Teradata, Hadoop, and MongoDB.
The dashboard is great.
It's user-friendly so long as you use it frequently.
The product is stable.
You can scale the solution.
Technical support has been good.
Some production features are lacking. For example, we cannot see any dashboards in the production department. The generation of reports needs to be better. We have sales reports and yet no production reports.
It's a complex initial setup.
It can be an expensive solution.
I've used the solution for six years.
The stability is very, very good. There are no bugs or glitches. It doesn't crash or freeze.
The scalability is very good. It's very capable.
I've used technical support in the past. They are good.
I did not use a different solution. I use what my company uses in order to get reports.
The initial setup is difficult and complex. It's very complex compared to SQL, for example.
The initial deployment took 15 days; however, nowadays it takes one or two days. Mostly, I can complete it in one day.
I'm a business consultant. I do implementations for various parts of my customers' organizations.
The licensing is typically charged yearly. It is rather expensive.
We deploy the solution mostly on-premises and sometimes on the cloud. When we use the cloud, we usually use the AWS cloud.
I'd warn new users that it might be difficult the first time they use it if they are changing their ERP; however, it'll be very helpful in the future if they use it frequently.
I'd rate the entire product an eight out of ten. There are other well-known ERPs, such as Salesforce, and companies expect to have the same features. However, not everything may be on SAP, and that's something they need to work on.
Since its introduction in 2011, SAP has been pushing HANA very heavily, and there is a lot of marketing buzz around this new product. For a freelance consultant focused on SAP Sybase database products, like me, it is next to impossible to ignore HANA in 2013. So, I decided not to rely on marketing slogans and to check what HANA is, what it can do, and, importantly, what HANA is NOT. I have put my first impressions into this blog post; hopefully other HANA-related posts will follow. Note that I'm not a HANA expert (yet!) and I'm writing these lines as a person with a lot of experience with IQ and some other RDBMSs who is trying to learn HANA.
So, why compare HANA and IQ? Both are designed for data warehouse environments, both are column-based (with some support for row-based data), both provide data compression out of the box, and both are highly parallel. Years ago, much like SAP with HANA today, Sybase claimed that IQ processed data so fast that aggregation tables were not really needed, because aggregations could simply be performed on the fly. Well, experience with a number of big projects showed me how problematic that statement was, and it is only a single example.
According to SAP, the strong point of HANA is its ability to utilize the CPU cache, which is much faster than accessing main memory (0.5-15 ns vs. 100 ns). Currently, IQ and other Sybase RDBMSs lack this capability. Therefore, I decided to build a test environment that allows performing queries that answer a number of conditions:
Some notes about the test environment:
For IQ, I used a 16-core RHEL server with hyper-threading turned on (32 cores visible to the OS) and 140GB of RAM available. I used IQ 16.0 SP01 for my tests.
For HANA, I had to use HANA SPS6 Developer Edition on a Cloudshare VM, which provides HANA on a Linux server with 24GB of RAM. However, only 19.5GB is actually available from the Linux point of view (free -m output), and most of this memory is allocated by various HANA processes. In fact, less than 3GB of RAM is available for user data in HANA. I only wish that SAP would allow us to download HANA and install it on any server that meets HANA's requirements for CPUs, but it seems that SAP's policy is to distribute HANA as part of appliances only, so I don't expect a free HANA download any time soon.
This brings us to an additional requirement for the test: the test dataset should be relatively small, because of the severe RAM restrictions imposed by the HANA Developer Edition on Cloudshare.
Finally, I decided to base my tests on a relatively narrow table that represents information about phone calls (for those involved in the Telecom industry, it is like short and very much simplified CDRs). Here is the structure of the table:
create table CDRs (
    CDR_ID    unsigned bigint, -- Phone conversation ID
    CC_ORIG   varchar(3),      -- Country code of the call originator
    AC_ORIG   varchar(2),      -- Area code of the call originator
    NUM_ORIG  varchar(15),     -- Phone number of the call originator
    CC_DEST   varchar(3),      -- Country code of the call destination
    AC_DEST   varchar(2),      -- Area code of the call destination
    NUM_DEST  varchar(15),     -- Phone number of the call destination
    STARTTIME datetime,        -- Start time of the conversation
    ENDTIME   datetime,        -- End time of the conversation
    DURATION  unsigned int     -- Duration of the conversation in seconds
);
I developed a stored procedure that fills this table in SAP Sybase ASE row by row according to some meaningful logic, and prepared delimited files for IQ and HANA. The input files are available upon request. At first, I planned to run tests on a dataset with 900 million rows, but I finally discovered that I had to go down to 15 million rows because of the VM memory limitations mentioned above.
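The real procedure is not worth publishing here, but purely as an illustration, a much-simplified sketch of such a row-by-row generator in ASE T-SQL could look like the following (plain random values stand in for the "meaningful logic", and the procedure name fill_cdrs is just for this example):

-- Illustrative only: fills CDRs with @rows rows of random data, one insert at a time
create procedure fill_cdrs @rows int
as
begin
    declare @i int, @start datetime, @dur int
    select @i = 0
    while @i < @rows
    begin
        -- random duration up to one hour, start time within the last 30 days
        select @dur = convert(int, rand() * 3600),
               @start = dateadd(ss, -convert(int, rand() * 2592000), getdate())
        insert into CDRs values (
            @i,                                                             -- CDR_ID
            convert(varchar(3), 1 + convert(int, rand() * 98)),            -- CC_ORIG
            convert(varchar(2), 10 + convert(int, rand() * 89)),           -- AC_ORIG
            convert(varchar(15), 100000 + convert(int, rand() * 899999)),  -- NUM_ORIG
            convert(varchar(3), 1 + convert(int, rand() * 98)),            -- CC_DEST
            convert(varchar(2), 10 + convert(int, rand() * 89)),           -- AC_DEST
            convert(varchar(15), 100000 + convert(int, rand() * 899999)),  -- NUM_DEST
            @start,                                                         -- STARTTIME
            dateadd(ss, @dur, @start),                                      -- ENDTIME
            @dur                                                            -- DURATION
        )
        select @i = @i + 1
    end
end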
An important note about terminology. In IQ, inserting data from a delimited file into a database table is called LOAD, and retrieving data from a table into a delimited file is called EXTRACT. In HANA, the inserting is called IMPORT and the retrieving is called EXPORT. The term LOAD in HANA has a totally different meaning: it means loading a whole table, or some of its columns, into memory from disk, when the data is already in the database.
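To make the terminology concrete, here are abbreviated examples of the statements in question. The option lists are trimmed and the file path is made up, so check the IQ and HANA references before copying anything:

-- IQ: LOAD brings a delimited file into a table (EXTRACT does the opposite)
load table CDRs ( CDR_ID, CC_ORIG, AC_ORIG, NUM_ORIG, CC_DEST,
                  AC_DEST, NUM_DEST, STARTTIME, ENDTIME, DURATION )
from '/data/cdrs.dat'
delimited by '|'
escapes off quotes off;

-- HANA: IMPORT plays the role of IQ's LOAD (and EXPORT the role of EXTRACT)
import from csv file '/data/cdrs.dat' into CDRs
with field delimited by '|';

-- HANA: LOAD means something entirely different: bring a table that is
-- already in the database (or some of its columns) from disk into memory
load CDRs all;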
The IMPORT functionality in HANA is not similar to IQ's at all. It actually consists of two phases: IMPORT and MERGE. During the first phase, the data is imported into a "delta store" in an uncompressed form. Then, the data from the "delta store" is merged into the "main store", where the table data actually resides. The merge is performed automatically when a configurable threshold is crossed (for example, the size of the "delta store" becomes too big). To ensure that the imported data is fully inside the "main store", a manual MERGE may be required. The memory requirements during the MERGE process are quite interesting; maybe I will write about that in a different post. It is quite possible that you will be able to IMPORT the data but will not have enough memory to MERGE it; this happened to me a number of times during my tests. I recommend reading more about the HANA architecture here: http://www.saphana.com/docs/DOC-1073, Chapter 9.
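For reference, the manual merge is a single statement, and the delta/main split can be watched through the monitoring views (the upper-cased 'CDRS' literal assumes the table was created with an unquoted name):

-- force the delta store of CDRs to be merged into the main store
merge delta of CDRs;

-- see how much of the table still sits in the delta store
select table_name, memory_size_in_main, memory_size_in_delta
from m_cs_tables
where table_name = 'CDRS';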
Given the significant difference between the test systems (a powerful dedicated server for IQ vs. a small VM for HANA), I didn't plan to compare data load performance between IQ and HANA. However, so far I have seen HANA performing the IMPORT using no more than 1.5 of the 4 available cores, thus underutilizing the available hardware. The MERGE phase, though, is executed in a much more parallel way. The bottom line is that IQ seems to outperform HANA in data loading, possibly by quite a lot. I will probably return to this topic in one of the following posts; additional tests with a larger dataset are required.
Now we come to data compression. Since IQ and HANA approach indexing quite differently, I chose to compare compression without non-default indexes in either IQ or HANA. It appears that IQ provides better data compression: it needs 591MB to store 15,000,000 rows, while HANA needs 748MB to store the same data. HANA provides a number of compression algorithms for columns, which are chosen automatically according to the data type and data distribution. However, it seems that none of the compression algorithms offered by HANA includes the LZW-like compression used by IQ. I'd prefer to test the compression on a more representative dataset (15,000,000 rows is way too small) and play with different HANA compression algorithms. I hope one of the future posts will be dedicated to this topic.
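HANA does at least make it easy to see which algorithm it picked for each column; something along these lines works against the monitoring views, and IQ offers a rough counterpart in the sp_iqtablesize procedure:

-- HANA: compression algorithm chosen for each column of CDRs
select column_name, compression_type, memory_size_in_total
from m_cs_columns
where table_name = 'CDRS';

-- IQ: size of the table, including its indexes
sp_iqtablesize 'CDRs';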
Finally, the data is inside the database and we are ready to query it. To answer the test conditions mentioned above, I chose the following query:
select
    a.CDR_ID CDR_ID_1, b.CDR_ID CDR_ID_2,
    a.NUM_ORIG NUM_A, a.NUM_DEST NUM_B, a.STARTTIME STARTTIME_1, a.ENDTIME ENDTIME_1,
    a.DURATION DURATION_1,
    b.NUM_DEST NUM_C, b.STARTTIME STARTTIME_2, b.ENDTIME ENDTIME_2,
    b.DURATION DURATION_2
from CDRs a, CDRs b
where a.NUM_DEST = b.NUM_ORIG
  and datediff(ss, a.ENDTIME, b.STARTTIME) between 5 and 60
order by a.STARTTIME;
This query finds cases where person A called person B and then person B called person C almost immediately (within 5 to 60 seconds). By its very definition, this query has to perform a lot of logical I/O. With my test dataset, this query returns 31 rows.
In IQ, this query takes 6.6 seconds when executed fully in memory with all relevant indexes in place. The query uses a sort-merge join and runs with a relatively high degree of parallelism, allocating about 60% of the 32 available CPU cores.
In HANA, the same query takes only 1 second with no indexes in place! Remember that in my tests HANA is running on a small VM with just 4 virtual CPU cores! The query finishes so fast that I cannot measure the degree of parallelism. Creating indexes on NUM_ORIG and NUM_DEST reduces the response time to 900 ms.
A note about indexes in HANA: HANA offers only two index types and, by default, it chooses the index type automatically. In my tests, I found that indexes improve query performance in HANA, sometimes significantly. Unfortunately, I have not found any indication of index usage in HANA query plans, even when some indexes were definitely used by the query. The role of optimizer statistics in query plan generation is also not very clear to me. I hope to prepare a separate post about query processing in HANA; stay tuned!
Another amazing and totally unexpected finding in HANA: index creation on NUM_DEST (varchar(15)) takes 194 ms, and an index on DURATION (int) is created in 12 ms!
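For completeness, these are plain CREATE INDEX statements; since I did not specify an index type, HANA chose it automatically:

create index CDRs_NUM_DEST on CDRs (NUM_DEST); -- 194 ms on a varchar(15)
create index CDRs_DURATION on CDRs (DURATION); -- 12 ms on an int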
My conclusions so far:
Update: see IQ query plan for my test case here: Download ABC_15mln_fully_in_memory
We use the solution to store and migrate the data.
SAP HANA is very flexible and easy to integrate with SaaS components.
The solution could be more flexible. It is challenging to integrate it with third-party tools apart from SaaS components. Also, they should include a feature like a local field in the next release.
The solution's workflow system application and functionalities are compatible with SaaS components. I advise others to first understand the kind of data validation, performance tooling, and use cases their business requires. They should look at other solutions if multiple data sources are involved.
I would rate it nine out of ten.
The most valuable features are the flexibility and the integration with other solutions in data quality.
HANA could be improved by adding analytics and development models in the institution.
I've been using this solution for two or three months.
The support is good and is available 24/7.
I'd rate SAP HANA as nine out of ten.
We are end users of SAP HANA and I'm assistant vice president of our company.
The solution's in-memory computing and the efficient response time are very good features. It's a good solution.
Because SAP HANA is completely in-memory, it requires the use of bigger systems which have a higher amount of RAM. If the system does go down then coming back up is difficult and can take 30 to 40 minutes. It's a big drawback of the solution and one they ought to solve because the multiple downtimes are a problem.
I've been using this solution for six years.
The solution is scalable so if you want to increase your node capabilities, it's very easy to scale out. You just have to add the node and it gets started.
The technical support people are very competent.
The initial setup is straightforward and there were no problems with deployment.
I recommend this solution but running SAP HANA requires deep pockets - it's costly.
I rate this solution eight out of 10.
We always make use of the latest version.
We like that SAP HANA is a new technology. We also like that the product is both vertically and horizontally scalable, allowing us to achieve around 86 percent compression of our data, from 50 terabytes down to seven.
In light of the hosting cost, we find this to be very interesting. We also like the warm and cold data handling in the solution's technology. There is a real team involved. The customer can initially utilize SAP ECC on the HANA interface and then move on to S/4HANA. From that point on, upgrades will be very easy and smooth, and the risk management will be extremely light.
Since only some accommodation exists, there is a need to ask SAP which environment would be good. While I know that the HANA database on the Azure environment is a good solution, for example, it does not easily accommodate SAP. HANA is a relatively new technology, dating back only to around 2010, and we must give it adequate time before assessing its room for improvement. This is in contrast to DB2, which has been around for 40 or more years. Perhaps the flow will be improved. As of now, the technology is too new to properly comment on.
The documentation is not an issue and, if anything, a surfeit of it is made available. This is actually one of SAP's stronger points.
Capabilities are also not an issue at present. When it comes to how EAP connects with SAP, we are looking at a revolutionary paradigm. For now, we could not ask for more features. This area is wonderful. We find the solution to be very helpful, safe, good, and secure. Only with the passage of time will we gain a proper understanding of where the technology can be improved.
That said, it would be nice to know when SAP plans to stop maintaining the previous version of SAP ECC ERP because, at that point, anyone using SAP will have no choice but to move to the S/4HANA database. This will be contingent on when SAP stops maintenance for ECC version 6.0.
We saw this with the ECC 4.6 version: once SAP stopped its maintenance, those on SAP were forced to move to the new version.
We have been using SAP HANA for a couple of years.
The solution is secure and stable.
As the solution allows for document compression, I would consider it scalable.
We have had occasion to use technical support, although its level varies according to the locale within SAP's global presence.
While I was not personally involved in the installation, my engineers were.
My deployment and maintenance team comprises around seven people, although I could not tell you what percentage is made up of managers, administrators, and engineers.
A monthly or yearly license must be purchased, although its utility will be based on the cost-benefit analysis that is reached by the individual customer.
The solution is cloud-based.
I would recommend S/4HANA to other users.
I rate SAP HANA as a nine out of ten.