We work as consultants, so we provide the solution to customers. It's deployed on-premises.
We write the queries using SQL Developer, and we do the synchronization.
We're using Oracle 10g and 11g.
The query optimization and backup features should be added.
I've been using this solution for more than eight years.
The on-premise version is stable. We have different teams and resources for the server side, for admin, and for development. We can easily take care of all the services and applications.
We do maintenance frequently. Every weekend, the admin team applies patches and performs upgrade activities on the servers, so maintenance is needed to stay up to date.
Since we aren't using this as SaaS, moving to the cloud would make scaling easy, although the on-premise solution is scalable as well. The cloud is also better for cost reduction: if the client is looking for cost reduction, it is not possible with the on-premise version, so it's better to go for SaaS or the cloud.
There are no costs in addition to the standard licensing fees.
I would rate this solution 8 out of 10.
Someone with a small architecture and a small volume of data can go for the cloud services. If they have a huge volume of data or security requirements, such as BFSI data or a banking system, they should definitely go for the on-premise solution.
In BFSI, banking, or card solutions, most companies work with the on-premise solution only. They don't consider the cloud solution secure. In terms of security, they go for the on-premise version because everything stays in their own hands.
We use the solution for analytical purposes because it makes it faster to analyze processes. In-Memory also helps with reporting. My first project using the solution was for the government, and we deployed the solution on the cloud.
I like Oracle because it is a backward-compatible solution. When I changed from an In-Memory database to a non-In-Memory database, all I had to do was reconfigure and restart; I did not have to change the code at all. By contrast, it takes more time to move from non-In-Memory to In-Memory on a MySQL server.
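To illustrate why no code changes are needed: enabling or disabling the In-Memory column store in Oracle is purely a configuration exercise. A minimal sketch, assuming a hypothetical SALES table and made-up sizes:

```sql
-- Reserve space for the In-Memory column store
-- (takes effect after an instance restart when first set).
ALTER SYSTEM SET inmemory_size = 4G SCOPE=SPFILE;

-- Mark a table for population in the column store;
-- application SQL does not change at all.
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- Reverting is just as non-invasive.
ALTER TABLE sales NO INMEMORY;
```

The optimizer decides on its own whether to read the row store or the column store, which is why existing queries keep working unchanged.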
It would be good if Oracle could reduce downtime when transferring from non-In-Memory to In-Memory.
I've used Oracle Database In-Memory since 2019.
I rate Database In-Memory's stability an eight out of ten.
In-Memory's scalability is okay because it's focused on big databases. Scalability is a focus when used on a NoSQL database. When we use In-Memory in the Oracle database, which is an RDBMS, scalability is not a concern. The solution is only for enterprise-level customers.
I don't like tech support from Oracle. I've been with Oracle since 1992, and I've seen the quality of support decrease.
The initial setup is very simple, and that's why I like it. It's fairly transparent. There is nothing to change about the coding. The customer may not even realize an engine exists in the database except during downtime.
I rate the pricing a zero out of ten because Database In-Memory is too costly.
The best competitor for Oracle is Microsoft. But Microsoft is much harder than Oracle to scale.
After enabling the In-Memory database, we usually have some downtime. That's acceptable to me, but it would be good if they could eliminate it. The downtime is currently needed to change the allocated memory in Oracle. If you use a data block, the downtime lasts minutes.
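For context on where that downtime comes from: the size of the In-Memory area is set by the INMEMORY_SIZE parameter. As far as I know, recent releases can grow it online, but shrinking it (or setting it for the first time) still needs a restart. A rough sketch with made-up values:

```sql
-- Growing the In-Memory area can usually be done online (12.2 and later),
-- provided the SGA has headroom.
ALTER SYSTEM SET inmemory_size = 8G SCOPE=BOTH;

-- Shrinking it, or setting it for the first time, requires a restart.
ALTER SYSTEM SET inmemory_size = 2G SCOPE=SPFILE;
-- ... restart the instance for the new, smaller value to take effect ...
```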
Oracle provides Exadata for customers, and that's very good because usually no other cloud is provided to them.
I rate the solution a ten out of ten.
When I compare solutions like In-Memory to FOSS 3.0, FOSS 3.0 has more features. But if you put all of those features into one product, they are incompatible. Compatibility is an important thing, but many people just skip it.
We stream and exchange data with multiple partners and vendors using the product.
Oracle Database In-Memory has a valuable transaction feature. It efficiently handles low-code data and supports read-and-write operations for clustering.
The product could be more economical; its price is much higher than that of other databases. Additionally, they should include OLTP functionality.
We have been using Oracle Database In-Memory for ten years.
It is a stable platform. I rate its stability a ten out of ten.
There were 5000 Oracle Database In-Memory users in my previous company. I rate the product's scalability a ten out of ten.
The technical support services are good.
Positive
The initial setup is easier for the cloud version. I rate the process a ten out of ten.
The product is expensive. However, it provides quality features. There is room for improvement, considering other options. I rate the pricing a ten out of ten.
We evaluated different providers. We went to Oracle Database In-Memory for better transaction size and volume.
I rate Oracle Database In-Memory a ten out of ten.
We use Oracle Database In-Memory for reporting and analytics.
The platform's most valuable features are query response time, columnar storage, and data cube setup.
The platform’s pricing needs improvement.
The platform is stable. I rate the stability a nine out of ten.
We have more than 50 Oracle Database In-Memory users in our organization. While occasional usage may increase, the technical infrastructure effectively manages these fluctuations. It demonstrates negligible impact on performance, even with incremental user growth. I rate the scalability a nine out of ten.
We have received technical support for reported bugs, and the team promptly releases patches.
Earlier, we utilized the Oracle Database Appliance for data warehousing purposes. Additionally, we relied on SQL databases, particularly older versions managed by Oracle as well as the old SAP DB. However, we transitioned to Oracle Database In-Memory to address the specific needs of our environment, particularly in scenarios where we needed to report data across distributed sites efficiently.
The initial setup process is simple.
The platform provides the best performance in terms of database analytics. It efficiently serves as a data lake. We can integrate it with any data sources as well.
I recommend it to others and rate it a ten out of ten.
We use the tool for real-time data transfer for risk management purposes. In a trading system, conversions happen fast. We use the product to handle fast transactions with low latency.
I would like Oracle Database In-Memory to include a data replication feature.
I have been working with the solution since 2005.
Oracle Database In-Memory is very stable.
I would rate the product's scalability a nine out of ten.
Oracle Database In-Memory's setup takes six months to complete. You need 10-15 resources to handle the deployment of the whole system.
Oracle Database In-Memory is expensive.
I would rate the solution a ten out of ten. If you do not have high-frequency transactions, then Oracle Database In-Memory is not for you. You would require Oracle Database In-Memory for mission-critical, high-frequency transactions.
Our solution has a big database with terabytes of data, and we use Database In-Memory for a lot of it. Normally, we partition the data and create big tables, and we use In-Memory for the data we access every day or every hour. We put some partitions from some tables into In-Memory, and performance is normally good.
Normally, every database server uses hard disks. Apart from the column store itself, In-Memory has another feature that is very good: when we start our server, the data has to be loaded into memory anyway, and we can take advantage of that. I think they added this feature in 2019. We can mount memory as a partition, create partitions in there, and create a tablespace on that mount point. It's a really good feature, and we use it a lot.
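My understanding of that setup, as we use it, is that a RAM-backed mount point hosts a tablespace, so the segments placed in it live entirely in memory. A rough, hypothetical sketch (the path, names, and sizes are made up, and the tmpfs mount itself is an OS-level step done outside the database):

```sql
-- Assuming the OS administrator has already mounted a RAM-backed filesystem
-- (e.g. tmpfs) at /mnt/ramdisk, create a tablespace whose datafile lives there.
CREATE TABLESPACE ram_ts
  DATAFILE '/mnt/ramdisk/ram_ts01.dbf' SIZE 32G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- Hot partitions can then be moved into that tablespace.
ALTER TABLE sales MOVE PARTITION sales_2019_12 TABLESPACE ram_ts;
```

The obvious caveat with this approach is that anything on a tmpfs disappears on reboot, which is why the data has to be reloaded whenever the server starts.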
We put some partitions into In-Memory. We have very large tables, and it is very expensive to load all of them into In-Memory; it takes up memory slots in the server as well as a lot of RAM. So we only use the latest partitions of a table. We always need to create a script and schedule it to load the latest partition into In-Memory, because Oracle doesn't have a feature to do this automatically. I would like them to let us load the latest partitions, as well as other table partitions, into In-Memory automatically: a feature that could look at a table, load the newest partition, and monitor memory population would be quite good.
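The kind of script we schedule looks roughly like the following. It is only a sketch: the SALES table, the partitioning scheme, and the priority are hypothetical, and it assumes the newest partition is the one with the highest partition position in the data dictionary.

```sql
-- Hypothetical scheduled job: enable the In-Memory column store only for the
-- newest partition of SALES.
DECLARE
  v_last_part VARCHAR2(128);
BEGIN
  -- Find the newest (highest-positioned) partition of the table.
  SELECT partition_name
    INTO v_last_part
    FROM user_tab_partitions
   WHERE table_name = 'SALES'
     AND partition_position = (SELECT MAX(partition_position)
                                 FROM user_tab_partitions
                                WHERE table_name = 'SALES');

  -- Mark that partition for population in the column store.
  EXECUTE IMMEDIATE
    'ALTER TABLE sales MODIFY PARTITION ' || v_last_part ||
    ' INMEMORY PRIORITY HIGH';
END;
/
```

A DBMS_SCHEDULER job, or an external cron job, can then run it whenever a new partition is added.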
I have been using this version for about four months.
We have used this feature for four months, and it's stable. I load about one terabyte of data into memory on my servers each time, and it has worked okay. I haven't had any problems so far.
I live in Iran. Iran is sanctioned by the US government so Oracle can't provide any services to us. I study on my own. I use Oracle documentation and watch YouTube videos. I don't have any company to support us.
It is very easy to deploy. I always work in Linux, specifically Oracle Linux. If you want to work from the GNOME GUI, it's really simple: you set a parameter in Linux, then use their installer and click next, next, next, then finish. In Oracle 18c and 19c, we also have an RPM-based installation from the command line. It's very simple: you must have the prerequisites, and after that you install the RPM with just one command, check your configuration, and set up the database.
Deployment with the GUI, if you want to use that, takes I think 14 to 20 minutes. On the command line, it takes 10 to 15 minutes. I can't remember exactly, but something close to that.
Oracle is the best database, but I love open-source software. Oracle always gets original features first, three or four years ahead, and we use them because they are stable and we can buy at a large scale for our office. It has no problems. I think Oracle is ten out of ten.
About Oracle Database In-Memory, in particular, I would rate it as eight out of ten. It's a new feature. I think it's improved from the last version three years ago.
Oracle's new features are very useful for us for storing data, loading it, and so on. Oracle's features for processing are good. In Oracle, we only have four functions based on data types, but in PostgreSQL, there are more than ten functions, which is very useful. They should add more functions and features, such as indexing and categorization based on data type, output, and large data. That would be very useful.
We are using Oracle Database In-Memory as an indirect approach to improving response times. In mixed-workload environments, we use the In-Memory column store to support OLAP-type queries without harming the latency-critical OLTP operations the systems "earn money with". This was successful for many customers throughout 12.2 and 18c.
It helps to build successful mixed-workload environments. Thus, for smaller setups, it's enough to have one database, not two, and it saves one interface in between.
In recent versions, Oracle implemented storing the In-Memory column store contents in the database, to resurrect the IMCS quicker and in a repeatable way.
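I believe this refers to In-Memory FastStart, which checkpoints the columnar data into a designated tablespace so the column store can be repopulated from those images after a restart instead of being rebuilt from the row store. A minimal sketch, with a hypothetical tablespace name:

```sql
-- Designate a tablespace to hold the persisted columnar data (In-Memory FastStart).
EXEC DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE('FS_TBS');
```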
One very nice side effect is the in-memory index. If this were developed a bit further and made configurable, users could use it as a kind of in-memory partitioning. That opens a big field of possible use cases.
Very stable.
In my experience, it scales quite well. Unfortunately, decent scale-out with RAC only works in Exadata, since Oracle relies on RDMA which is only available for InfiniBand.
"It depends". If you get a good support engineer, it is a dream.
But, most times, it is not, unfortunately.
No, since there was no other solution offering in-memory without changing the SQL syntax.
We grew into it during beta and initial releases, so I can't answer this.
We do implementations ourselves, so I can't answer this.
If you can save setting up an additional interface and a second DB server, investment should return immediately.
The setup cost is not a big factor, but the engineer should have decent experience with Oracle's In-Memory system.
License cost is a factor; the benefit has to be carefully evaluated.
We tried several ways to offload OLAP queries from the database, especially using a second DB system.
We evaluated this product throughout the beta1 and beta2 phase.
It is always worth testing or running a proof of concept to check its value.
Our primary use case for the solution is within our on-premises environment. We do not utilize cloud services and rely on local hardware for our operations.
The solution's most valuable feature is its performance optimization within our hardware environment. This capability enhances processing speed, which is crucial for our operations, although the improvements are more hardware-dependent.
The product could benefit from enhancements in its graphical user interface.
I have been using Oracle Database In-Memory for approximately 25 to 30 years.
I rate the product's stability an eight out of ten.
I rate the product's scalability an eight out of ten.
The initial setup was relatively straightforward.
Oracle's local support team efficiently managed the deployment process, ensuring a smooth implementation.
The platform's licensing cost needs improvement.
We selected the product due to its stability, security features, and ability to handle replication with minimal downtime.
The platform is reliable and effective within our setup. However, potential users should be aware of the high costs and evaluate whether the benefits justify the investment.