Posts on Orafaq reveal that a significant subset of database administrators believe
that an export is a backup. This paper summarizes what an export is, what a backup is,
and why the two are different.
A new 12c feature that may have significant performance and scalability implications is the multithreaded database. All releases of Oracle below 12.x (and 12.x by default) run on Unix in a multiprocess model. The various background processes (typically at least fifty for 12c, thirty or forty for 11g) run as separate operating system processes, and the dedicated server processes that support sessions also run as separate operating system processes. An Oracle instance will usually be running as hundreds (perhaps thousands) of processes. The multithreaded database changes this.
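As a sketch of how this is switched on (hedged: parameter behaviour as documented for 12c, not taken from this article), the multithreaded model is controlled by the THREADED_EXECUTION initialization parameter, and V$PROCESS can show which operating system processes are hosting multiple threads:

```sql
-- THREADED_EXECUTION defaults to FALSE; setting it requires a restart.
ALTER SYSTEM SET threaded_execution = TRUE SCOPE = SPFILE;

-- After restart, STID (thread ID) and EXECUTION_TYPE distinguish
-- threads from full processes:
SELECT spid, stid, program, execution_type FROM v$process;
```

One practical consequence worth noting is that operating system authentication (connecting `/ as sysdba` without a password) does not work in threaded mode.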
12 March 2013 update: Content removed - it exceeded what is permitted for the pre-release status of the product.
Sorry about that, will repost when I can.
Oracle Database 12c: New Features – Pluggable Databases by Michael Rajendran
Oracle has leapt forward with its middleware technologies, especially the database technology, into the cloud. Until now, Oracle has been the traditional RDBMS suited to private enterprise data centres within corporate walls.
While doing some research on Oracle Database creation through DBCA last week, I came across a very interesting scenario that I couldn't help sharing with you :)
I have a cluster of machines (don't confuse it with a RAC cluster; it is just a group of machines required for my application deployment) with 184.108.40.206 installed and a database instance running.
I got a requirement to remove everything accumulated on the machine as a result of the Oracle installation and the database creation: the installation directories, the datafiles, the oratab entries, etc…
After removing all th
Matthew Morris (who makes many constructive and knowledgeable contributions to the forum) has written a series of study guides for the OCA/OCP exams. I asked him if I could have a copy of one to review. The result: it's very good. This is a copy of the review I put up on Amazon.
The other day, I was asked how to move a table from one schema to another. The answer, as we all know, is "you can't do that: you have to create a new table as a copy of the old one, or use export/import. It will take a long long time." Not true.
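The article does not spell the trick out here, so the following is only one plausible reading, with hypothetical table names: EXCHANGE PARTITION swaps segments between tables purely by updating the data dictionary, so the data can change owner without a single row being copied.

```sql
-- Sketch only, assuming a source table OLD_OWNER.T. Create a partitioned
-- table of identical structure in the target schema:
CREATE TABLE new_owner.t (
  id  NUMBER,
  txt VARCHAR2(30)
)
PARTITION BY RANGE (id) (PARTITION p1 VALUES LESS THAN (MAXVALUE));

-- Swap the original table's segment into the new partition.
-- This is a dictionary update, not a data copy, so it is near-instant:
ALTER TABLE new_owner.t
  EXCHANGE PARTITION p1 WITH TABLE old_owner.t;
```

The two tables must match column-for-column, and indexes and grants need separate handling, but the segment itself never moves.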
Why do you sometimes not get partitionwise joins? Because the optimizer isn't clever enough. Reference partitioning has many benefits, one of which is that the optimizer understands it. You will always get a partitionwise join if your tables are reference partitioned.
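A minimal illustration of reference partitioning, with hypothetical tables (not taken from the article): the child inherits the parent's partitioning through its foreign key, so the optimizer knows that matching rows can only ever live in matching partitions.

```sql
-- Parent, range partitioned by date:
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE
)
PARTITION BY RANGE (order_date) (
  PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01')
);

-- Child, partitioned by reference: the FK must be NOT NULL and enabled.
-- Each ORDER_ITEMS partition corresponds exactly to an ORDERS partition,
-- so a join on ORDER_ID can always be done partitionwise.
CREATE TABLE order_items (
  item_id  NUMBER PRIMARY KEY,
  order_id NUMBER NOT NULL,
  CONSTRAINT fk_oi FOREIGN KEY (order_id) REFERENCES orders
)
PARTITION BY REFERENCE (fk_oi);
```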
When we launch a long operation, such as an RMAN backup or the rebuild of a large index, it can be frustrating to have no estimate of how long it will take. We may even start to think it is doing nothing.
To follow the progress of a long operation we can query the view V$SESSION_LONGOPS, first obtaining the session ID from V$SESSION. As DBAs, we know exactly which user is rebuilding the index, so we can simplify this into a single query.
The following example shows the progress of the rebuild of one partition of an index.
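The single query described above can be sketched as follows (a sketch, using the documented columns of V$SESSION_LONGOPS rather than the article's exact text):

```sql
-- Join V$SESSION_LONGOPS to V$SESSION on SID and SERIAL# and show
-- percentage done plus Oracle's own estimate of time remaining.
SELECT s.username,
       l.opname,
       l.target,
       l.sofar,
       l.totalwork,
       ROUND(l.sofar / l.totalwork * 100, 1) AS pct_done,
       l.time_remaining                      AS seconds_left
FROM   v$session_longops l
       JOIN v$session s
         ON s.sid = l.sid
        AND s.serial# = l.serial#
WHERE  l.totalwork > 0          -- avoid divide-by-zero
AND    l.sofar < l.totalwork;   -- only operations still in flight
```

Filtering on the rebuilding user's name (`s.username = '...'`) narrows it to the index rebuild in question.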
Several times I have had to deal with people who do not want to define constraints. I have never understood why they don't, because my experience is that the more constraints you can define, the better Oracle will perform. If anyone knows where the idea that not defining constraints is a Good Thing comes from, I would be interested to know.
Following are two very simple examples of constraints allowing the optimizer to develop better plans.
First, foreign key constraints. These give the optimizer a lot of information about the data that may mean it can cut out whole tables from a query.
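A hypothetical example of this join elimination (my own illustration, with invented EMP/DEPT tables): when a validated, enabled foreign key guarantees that every child row has exactly one parent, a join that selects no parent columns can be removed from the plan entirely.

```sql
CREATE TABLE dept (
  deptno NUMBER PRIMARY KEY,
  dname  VARCHAR2(30)
);

CREATE TABLE emp (
  empno  NUMBER PRIMARY KEY,
  ename  VARCHAR2(30),
  deptno NUMBER NOT NULL REFERENCES dept  -- NOT NULL matters here
);

-- The FK proves the join cannot change the result: every EMP row
-- matches exactly one DEPT row. Since no DEPT column is selected,
-- the optimizer can eliminate DEPT from the plan altogether.
SELECT e.ename
FROM   emp e
       JOIN dept d ON d.deptno = e.deptno;
```

Without the constraint, Oracle would have to visit DEPT to discard employees in nonexistent departments; with it, the whole table drops out of the query.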