What is most valuable?
Today, IT is facing a lot of challenges because data has grown so fast. We have to find a way to manage the cost, the performance, and the capacity. That's why we have a strategy called information lifecycle management: a strategy to manage this data at a reasonable cost.
Oracle 12c introduced two new features, ADO (Automatic Data Optimization) and Heat Map, which combine to implement information lifecycle management inside the database. We found them very useful for implementing a tiered storage strategy. We all know that SSDs (solid state drives) can really improve database performance by removing a lot of storage I/O bottlenecks. But it is not cost-effective to put a large amount of non-active data on SSD: that data does not seriously impact performance, it tends to be large in volume, and keeping it on SSD would be very costly. That's why we introduced tiered storage.
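As a rough illustration of the moving parts (a minimal sketch, not our exact setup; the SALES_APP schema name is made up), Heat Map tracking is switched on at the instance level, and the database then records segment-level access activity that ADO policies can evaluate:

-- Turn on Heat Map tracking so segment access activity is recorded
ALTER SYSTEM SET heat_map = ON SCOPE = BOTH;

-- The collected statistics can then be reviewed in the data dictionary
SELECT * FROM dba_heat_map_segment WHERE owner = 'SALES_APP';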
The idea is that we put active data on the tier one storage and non-active data on the tier two or tier three storage, and we use ADO and Heat Map together to implement this tiered storage strategy. We found it very useful, because these features allow the database administrator to write a policy and then move the data around automatically; you don't have to physically copy it, the feature does that for you. Your only manual work is to write the policies. We have already implemented this in one of our tiered storage solutions, with PCIe flash as the tier one storage and traditional spinning disk as the tier two storage. We used the ADO and Heat Map features to manage the data across the tiered storage, and it turned out to work very well.
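To give a concrete idea of what "writing a policy" looks like, here is a minimal sketch of a tiering policy; the table and tablespace names (SALES_2015, TIER2_TBS) are hypothetical:

-- Move this table's segment to the lower-cost tablespace once the hot tier fills up
ALTER TABLE sales_2015 ILM ADD POLICY TIER TO tier2_tbs;

-- Check which ILM policies exist and whether they are enabled
SELECT * FROM user_ilmpolicies;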
What needs improvement?
This is a very good tool, but I would like to see some more features added. One thing is that the database should let me write new kinds of rules. Right now, data movement is mainly triggered by how full the tier one storage is; for example, once it is 80 percent full, the data starts moving. What I really want is movement based on how the data has actually been used. It is possible to do that today, but the database administrator has to manually write a custom solution to check it. I would like to be able to state the condition in plain English, such as "no modification in 30 days", instead of writing a complex PL/SQL procedure. That kind of condition is already implemented for data compression, which is the other thing ADO can do: compress data instead of moving it. I would like the same approach applied to data movement.

Another thing is that, when the policies check data, they only look at the last time the data was used, not at how frequently it is used. I want some way to distinguish data that has been used only once from data that has been used a thousand times. Even if data was used yesterday, I may still want to move it; but with today's ADO implementation, a single recent access, say yesterday, counts the same as a thousand accesses, so the data will not be moved. I would like some way to tell the difference.
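For reference, this is roughly what the time-based condition looks like on the compression side, and how the tablespace-fullness thresholds that currently drive tiering are set; the table name and the threshold values below are only illustrative:

-- Compression policies already accept a plain, time-based condition
ALTER TABLE sales_2015 ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;

-- Tiering, by contrast, is driven by how full the source tablespace is;
-- the thresholds are set through DBMS_ILM_ADMIN (example values only)
BEGIN
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 85);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 25);
END;
/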
Another thing is that ADO currently does not apply to multitenant databases, which are a very important part of the database. I would like to see that limitation addressed, so the feature supports multitenant databases as well.
What do I think about the scalability of the solution?
It's a very, very stable product. It's part of the 12c new features, and I didn't see or feel a lot of issues. But I do recommend testing it before you move it to production, because the data movement can seriously impact your database performance. So this comes down to not exactly how stable the product is, but how stable your rules are.
If you write the wrong rule and move active data to tier two storage, your performance will suffer. Also, be careful when moving data, because as all DBAs know, if you move data across storage, your indexes can potentially become invalid. Then all your database queries go to full table scans, and you actually end up with a worse problem than before.
ADO tries its best to re-enable the indexes, but be careful, because in our experience it is not 100 percent covered. My advice is to check: after the data is moved, use a simple query to check the index status, and if you find an index that is not valid, rebuild it with another simple command. That will ensure you only get the good part of it, not the issue.
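A minimal version of that post-move check might look like this (the schema and index names are placeholders):

-- Find indexes left unusable after the data movement
SELECT index_name, status
FROM   dba_indexes
WHERE  owner = 'SALES_APP'
AND    status = 'UNUSABLE';

-- Rebuild any index reported as unusable
ALTER INDEX sales_app.sales_2015_ix REBUILD ONLINE;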
Scalability has a lot to do with how much data you move around, so you need to scale accordingly. You need to have some idea of how much data will move, and you want to schedule a good time window, off your peak time, in which to do the data movement. The DBA is the one who knows this best; you need to plan ahead and test ahead.
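If the automatic maintenance window does not line up with your quiet hours, the ADO policies on a specific object can also be evaluated and run manually at a time of the DBA's choosing. The sketch below assumes a SALES_APP.SALES_2015 table and uses the DBMS_ILM.EXECUTE_ILM procedure as I understand its interface, so verify the parameters against your release's documentation:

DECLARE
  v_task_id NUMBER;
BEGIN
  -- Evaluate and execute the ADO policies on one table during a quiet period
  DBMS_ILM.EXECUTE_ILM('SALES_APP', 'SALES_2015', v_task_id);
  DBMS_OUTPUT.PUT_LINE('ILM task id: ' || v_task_id);
END;
/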
What other advice do I have?
I would rate it an eight to nine out of ten. One of the areas for improvement is that I still have to write a PL/SQL procedure for conditions that could be built into the product. They already have this for compression, so why not implement it for data movement? That procedure was not easy to write, and I would like to have it built in.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: We're partners.