When I was researching Oracle Database creation through DBCA last week, I came across a very interesting scenario that I couldn't help sharing with you :)
I have a cluster of machines (don't confuse it with a RAC cluster; it is just a group of machines required for my application deployment) with the Oracle Database software installed and a database instance running.
I got a requirement to remove everything that had accumulated on the machine as a result of the Oracle installation and the database creation: the installation directories, the datafiles, the oratab entries, etc.
After removing all th…
Matthew Morris (who makes many constructive and knowledgeable contributions to the forum) has written a series of study guides for the OCA/OCP exams. I asked him if I could have a copy of one to review. The result: it's very good. This is a copy of the review I put up on Amazon.
The other day, I was asked how to move a table from one schema to another. The answer, as we all know, is "you can't do that: you have to create a new table as a copy of the old one, or use export/import, and it will take a long, long time." Not true.
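The excerpt stops before showing the fast method, but one well-known metadata-only technique (which may or may not be the one this post uses) is a partition exchange: the segment changes ownership in the data dictionary and no rows are copied. A minimal sketch, assuming hypothetical HR and SALES schemas and an HR.EMP_DATA table to "move":

```sql
-- Create an empty partitioned shell in the target schema with the
-- same structure as the source table (hypothetical names throughout).
CREATE TABLE sales.emp_data_part
  PARTITION BY RANGE (hire_date)
  (PARTITION p_all VALUES LESS THAN (MAXVALUE))
  AS SELECT * FROM hr.emp_data WHERE 1 = 0;

-- Swap the source table's segment into the partition: a dictionary
-- update only, so it completes in seconds regardless of table size.
ALTER TABLE sales.emp_data_part
  EXCHANGE PARTITION p_all WITH TABLE hr.emp_data;
```

After the exchange, the data lives in SALES (as a single-partition table) and HR.EMP_DATA is empty; indexes and grants need separate attention.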
Why do you sometimes not get partitionwise joins? Because the optimizer isn't clever enough. Reference partitioning has many benefits, one of which is that the optimizer understands it. You will always get a partitionwise join if your tables are reference partitioned.
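As a sketch of the claim, with hypothetical ORDERS and ORDER_ITEMS tables, the child inherits the parent's partitioning through its foreign key, so the optimizer knows the partitions line up:

```sql
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE NOT NULL
)
PARTITION BY RANGE (order_date) (
  PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
  PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01')
);

-- The FK column must be NOT NULL for reference partitioning.
CREATE TABLE order_items (
  order_id NUMBER NOT NULL,
  item_no  NUMBER,
  CONSTRAINT fk_oi_order FOREIGN KEY (order_id)
    REFERENCES orders (order_id)
)
PARTITION BY REFERENCE (fk_oi_order);

-- A join on the FK can now be run as a full partitionwise join:
-- each ORDERS partition is joined only to its matching
-- ORDER_ITEMS partition.
SELECT o.order_id, i.item_no
FROM   orders o
       JOIN order_items i ON i.order_id = o.order_id;
```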
When we launch a long operation, such as an RMAN backup or the rebuild of a large index, we can despair at having no estimate of how long it will take. We may even come to think it is doing nothing.
To follow the progress of a long operation we can query the view V$SESSION_LONGOPS, first obtaining the session ID (SID) from V$SESSION. As the DBA, we know exactly which user is rebuilding the index, so we can simplify this into a single query.
The following example shows the progress of the rebuild of an index partition.
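The post's own query is not included in this excerpt; a generic version, using the real V$SESSION_LONGOPS columns, looks like this (filter further on SID or USERNAME to isolate the rebuild session):

```sql
-- Show every long operation still in flight, with percent done and
-- Oracle's own estimate of the seconds remaining.
SELECT sid,
       opname,
       target,
       sofar,
       totalwork,
       ROUND(sofar / totalwork * 100, 2) AS pct_done,
       time_remaining                    AS secs_left
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork;
```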
Several times I have had to deal with people who do not want to define constraints. I have never understood why they don't, because my experience is that the more constraints you can define, the better Oracle will perform. If anyone knows where the idea that not defining constraints is a Good Thing comes from, I would be interested to know.
Following are two very simple examples of constraints allowing the optimizer to develop better plans.
First, foreign key constraints. These give the optimizer a lot of information about the data that may mean it can cut out whole tables from a query.
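One concrete case is join elimination. A sketch with hypothetical tables (the FK must be enabled and validated, and the joining column NOT NULL, for the optimizer to rely on it):

```sql
CREATE TABLE departments (
  dept_id NUMBER PRIMARY KEY,
  dname   VARCHAR2(30)
);

CREATE TABLE employees (
  emp_id  NUMBER PRIMARY KEY,
  dept_id NUMBER NOT NULL REFERENCES departments,
  ename   VARCHAR2(30)
);

-- No column of DEPARTMENTS is selected, and the foreign key
-- guarantees every employee row has a matching department, so the
-- optimizer can drop the join and scan EMPLOYEES alone.
SELECT e.emp_id, e.ename
FROM   employees e
       JOIN departments d ON d.dept_id = e.dept_id;
```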
For any company, the most important asset is its data, and the most challenging job is to recover the database with minimal downtime and no data loss in the event of a failure. In many situations users end up with an incomplete recovery because they do not know which datafiles were backed up and which still need to be. You should ensure that your database is backed up efficiently and will restore successfully when needed. RMAN reporting provides an effective and easy way to verify that your backups will support a successful recovery.
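The specific reports are not shown in this excerpt, but the standard RMAN commands for this kind of check include:

```
RMAN> REPORT NEED BACKUP;          -- datafiles needing backup under the retention policy
RMAN> REPORT UNRECOVERABLE;        -- datafiles with unrecoverable (NOLOGGING) changes
RMAN> LIST BACKUP SUMMARY;         -- overview of the backups RMAN knows about
RMAN> RESTORE DATABASE VALIDATE;   -- confirm the backups are usable, without restoring
```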
DBAs often look to tuning the UNDO* parameters as a solution to the infamous "ORA-01555 snapshot too old" error. In most cases, before touching the UNDO* parameters, the best solution is to tune the query that is running into ORA-01555, so that it never raises the error in the first place!
Hope this helps a fellow DBA or two trying to resolve those ORA-1555s!
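One way to see whether the queries or the undo settings are the outlier (a diagnostic not shown in the excerpt) is V$UNDOSTAT, which records per interval the longest-running query and the count of ORA-01555 errors:

```sql
-- MAXQUERYLEN (seconds) vs. your UNDO_RETENTION setting shows whether
-- a long query or short retention is the likelier culprit;
-- SSOLDERRCNT counts the "snapshot too old" errors per interval.
SELECT begin_time,
       maxquerylen AS longest_query_secs,
       ssolderrcnt AS ora_1555_count
FROM   v$undostat
ORDER  BY begin_time;
```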
Summary: An index created on a column with many duplicate values can be compressed to save space as well as I/O.
Details: By default, when we create an index in Oracle, compression is disabled. What if we have an index defined on a column containing the last names of all our customers, and some last names are very common? We can take advantage of this duplicated data by compressing the index.
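A minimal sketch, assuming a hypothetical CUSTOMERS table; COMPRESS n de-duplicates the first n key columns in each leaf block:

```sql
-- Store each repeated last_name value once per leaf block instead of
-- once per row.
CREATE INDEX cust_lname_ix
  ON customers (last_name, first_name)
  COMPRESS 1;

-- An existing index can be compressed in place with a rebuild:
ALTER INDEX cust_lname_ix REBUILD COMPRESS 1;
```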