Ours is a DW environment: ETL extracts data from SAP and loads it into the reporting database. The DBA objective was optimal performance for both the nightly batch and reporting. We used the features below, which significantly helped improve the performance.
We used HCC with QUERY HIGH to compress all the fact tables, and interval-partitioned the fact tables with daily partitions.
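As a sketch of that layout (the table and column names here are hypothetical, not our actual schema), a fact table combining daily interval partitioning with HCC looks like this:

    -- Daily interval partitions plus HCC warehouse compression.
    CREATE TABLE sales_fact (
      sale_date   DATE         NOT NULL,
      customer_id NUMBER       NOT NULL,
      region_id   NUMBER,
      product_id  NUMBER,
      amount      NUMBER(12,2)
    )
    COMPRESS FOR QUERY HIGH
    PARTITION BY RANGE (sale_date)
    INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
    (
      -- Seed partition; Oracle creates each new daily partition
      -- automatically as rows with later dates arrive.
      PARTITION p_start VALUES LESS THAN (DATE '2016-01-01')
    );

Keep in mind that HCC needs Exadata (or ZFS/FS-series storage), and that rows only get the QUERY HIGH compression when they arrive via direct-path loads.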
Most of the financial reports go back at most 2 months, so we scheduled a stored procedure to load the last 60 days' partitions into In-Memory. We also loaded the highly used dimension tables into In-Memory.
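A minimal sketch of such a procedure, reusing the hypothetical sales_fact table above (our real job also evicts partitions that age out of the window):

    CREATE OR REPLACE PROCEDURE load_recent_partitions IS
    BEGIN
      -- Walk the last 60 days; PARTITION FOR (<date>) resolves each day
      -- to its interval partition, so the system-generated partition
      -- names never need to be looked up.
      FOR i IN 0 .. 59 LOOP
        BEGIN
          EXECUTE IMMEDIATE
            'ALTER TABLE sales_fact MODIFY PARTITION FOR (DATE '''
            || TO_CHAR(TRUNC(SYSDATE) - i, 'YYYY-MM-DD')
            || ''') INMEMORY MEMCOMPRESS FOR QUERY HIGH';
        EXCEPTION
          WHEN OTHERS THEN NULL;  -- no partition created for that day: skip
        END;
      END LOOP;
    END;
    /

A DBMS_SCHEDULER job runs it right after the nightly batch completes.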
Extended stats were the key to performance; we achieved good report performance by gathering extended statistics. Histograms were helpful too, but extended stats and histograms don't always go well together. In our testing phase we tested both and chose the better one, and as the data changed over time we switched between the two.
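For illustration, on the hypothetical columns above: a column-group extension tells the optimizer the combined selectivity of correlated columns, which single-column statistics miss.

    -- Create the column-group extension, then regather stats so a
    -- statistic is collected for the group.
    SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
             ownname   => USER,
             tabname   => 'SALES_FACT',
             extension => '(REGION_ID, PRODUCT_ID)')
      FROM dual;

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => USER,
        tabname    => 'SALES_FACT',
        method_opt => 'FOR ALL COLUMNS SIZE AUTO');
    END;
    /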
DBRM (Database Resource Manager) was another key to making sure all the consumer groups get the resources they need.
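A minimal DBRM sketch (the plan and group names are hypothetical): one plan that guarantees the reporting group the lion's share of CPU at the first management level.

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('REPORTING_GRP', 'BI reports');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN('DW_PLAN', 'Daytime reporting plan');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'DW_PLAN',
        group_or_subplan => 'REPORTING_GRP',
        comment          => 'Reports get 70% of CPU at level 1',
        mgmt_p1          => 70);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'DW_PLAN',
        group_or_subplan => 'OTHER_GROUPS',  -- required catch-all group
        comment          => 'Everything else',
        mgmt_p1          => 30);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

    -- Activate the plan.
    ALTER SYSTEM SET resource_manager_plan = 'DW_PLAN';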
Forcing the optimizer to use Bloom filters boosted performance to a significant extent. We achieved this by getting rid of quite a few indexes, using parallel processing, and keeping accurate optimizer statistics.
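As an illustration on the hypothetical schema above (dim_region is likewise made up), the PX_JOIN_FILTER hint asks for a join (Bloom) filter so the parallel fact-table scan can discard non-matching rows before the join; a :BF0000 entry in the execution plan confirms it kicked in.

    SELECT /*+ PARALLEL(f 8) PX_JOIN_FILTER(f) */
           d.region_name, SUM(f.amount)
      FROM dim_region d
      JOIN sales_fact f ON f.region_id = d.region_id
     WHERE d.country = 'US'
     GROUP BY d.region_name;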
Tablespaces with the NOLOGGING option. The nightly batch process also performs very well with NOLOGGING tablespaces, parallel DML, and INSERT APPEND (direct-path load).
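A sketch of that load pattern (names hypothetical; the DATAFILE clause without a path assumes Oracle Managed Files):

    CREATE TABLESPACE dw_data DATAFILE SIZE 10G NOLOGGING;

    ALTER SESSION ENABLE PARALLEL DML;

    -- APPEND = direct path: writes above the high-water mark and, in a
    -- NOLOGGING tablespace, generates minimal redo.
    INSERT /*+ APPEND PARALLEL(f 8) */ INTO sales_fact f
    SELECT sale_date, customer_id, region_id, product_id, amount
      FROM staging_sales;

    COMMIT;

The trade-off is recoverability: datafiles touched by NOLOGGING loads need a fresh backup afterwards.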
Dynamic sampling set to 4. This was the value that gave us consistent performance across most of the reporting.
Disabling optimizer_adaptive_features. With this feature turned on, report performance fluctuated; turning it off gave us consistent performance.
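Both are plain initialization parameters, set like this (optimizer_adaptive_features is a 12.1 parameter; in 12.2 it was split into separate adaptive-plans and adaptive-statistics parameters):

    -- Level 4 gave the most consistent plans across our reports.
    ALTER SYSTEM SET optimizer_dynamic_sampling = 4 SCOPE=BOTH;

    -- Adaptive features off to stop plans from flip-flopping.
    ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE=BOTH;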
Cognos Dynamic Cubes. This is a feature used at the Cognos layer which helped increase the performance of the reports.
This data is for our BI reporting, which is open to all the warehouse managers so they can see their financial status. The period-end schedules are very tight, and every report has to complete within seconds; the SLAs are very tight. In-Memory in combination with partitioning, HCC, and the offloading feature helped us achieve these SLAs.
We use lots of aggregations, and a lot of transformations happen beforehand. We use Informatica, and then the data comes into Oracle. Cognos actually runs those reports. That was a very big challenge for us; we didn't use In-Memory before.
For most of the tables we use partitions and HCC, but we could not get through the day with that level of performance alone. What we did is make sure that the latest partitions, on which most of the reports run, are put into In-Memory and very highly compressed. We moved some of the key tables, master tables especially, and some of the fact tables into In-Memory, and we use very high compression ratios. After that, we saw a really dramatic improvement in performance. We are doing much better than the SLAs require: most of our reports complete in 1, 2 or 3 seconds, and most of them are below 5, except if we have a stats issue or anything like that; then it takes time for them to complete. After we started using the In-Memory product, we saw really dramatic figures.
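For the master tables that had to stay fully memory-resident at a high compression ratio, a table-level In-Memory attribute is enough; a sketch (the table name is hypothetical):

    -- CAPACITY HIGH = highest In-Memory compression level; PRIORITY HIGH
    -- populates the table automatically after the database opens rather
    -- than waiting for a first scan.
    ALTER TABLE customer_master INMEMORY
      MEMCOMPRESS FOR CAPACITY HIGH PRIORITY HIGH;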