RE: RE: controlfile schema global enqueue lock (Oracle-L)
Oh, some more info. We have 2 control files, and each disk holds just 1 datafile. The company invested a lot of money in this. All the redo logs are on separate disks too, all raw disks and mirrored. What can I do to reduce control file lock contention?
Joan
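As a first diagnostic step, a query along these lines (a sketch against the standard v$lock and v$session views; exact columns vary slightly by Oracle release) shows which sessions hold or wait on the controlfile (CF) enqueue:

```sql
-- Sessions holding or waiting on the controlfile (CF) enqueue.
-- lmode > 0 means the session holds the lock; request > 0 means it is waiting.
SELECT l.sid, s.program, l.lmode, l.request
FROM   v$lock l, v$session s
WHERE  l.type = 'CF'
AND    s.sid  = l.sid;
```

A long queue of requesters behind LGWR or CKPT here would point at checkpoint and log-switch activity rather than at the disk layout.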
-----Original Message-----
Sent: Thursday, February 01, 2001 10:59 AM
To: Joan Hsieh; Multiple recipients of list ORACLE-L
Joan,
HUMMMMMMM! Lots to think about here. For one, if you're not doing any backups, what is the plan for when a disk drive fails? Predicting that is like predicting the next asteroid impact on the earth: it's not a matter of if, but when. After that drive gets replaced, something has to be put back down there.
OH, you're on a RAID or mirrored system!! That's nice; same scenario, just more drives to worry about.
But we digress. First, is this system in archivelog mode? If you're not doing backups, why archive the redo information? Is each of these redo files on a separate drive, or is that the hottest drive in the system? Might be a good idea to "spread the wealth". Is there only one control file or more than one? Are they on the same drive? Are they mixed in with DB files? Are one or more on very busy drives? What are the settings of DBWR_IO_SLAVES and DB_WRITER_PROCESSES?
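Those questions can be answered from SQL*Plus with something like this sketch (standard data-dictionary views; parameter names as spelled in the 8i documentation):

```sql
-- How many control files, and on which drives (paths)?
SELECT name FROM v$controlfile;

-- Current writer-related parameter settings.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('dbwr_io_slaves', 'db_writer_processes');
```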
I'd also have a discussion with the applications people. While I understand that stuff on the Internet is constantly changing, not all of it changes every hour. News content normally has a 24-hour life expectancy. What would enhance the overall performance of your site is a process that accumulates historical life expectancies for the various types of content, thereby allowing the processes that refresh your URL data to have some intelligence built into them.
Dick Goulet
____________________Reply Separator____________________
Author: "Joan Hsieh" <Joan.Hsieh_at_mirror-image.com>
Date: 2/1/2001 10:25 AM
Dick,
The database size is 200GB (datafiles), not including the redo logs. This is an Internet content delivery company. The database structure is very simple. Some processes, called getters, continually insert HTTP site content into the database. Other processes keep checking whether each URL has expired and, based on that, delete the URL. So we do have a lot of inserts, updates (updating the expire date), and deletes. The interesting thing is we don't do backups, not even cold backups. No snapshots either. Since Internet content keeps changing, there is no point in backing up. The major thing is performance. We use raw disks on Sun (Solaris 5.6) or HP. The db_block_size is 8k, db_block_buffers is 50000. The hit ratio is around 85%.
Joan
-----Original Message-----
Sent: Thursday, February 01, 2001 9:33 AM
To: Joan Hsieh; Multiple recipients of list ORACLE-L
Joan,
Something is indeed very fishy in London. The first question I would ask is: what is creating so much redo generation? 9 x 250MB is roughly 2.2GB of data changes inside of that hour. One thought is that the database is still in hot backup mode from a previously interrupted backup. The other is that several users are doing a lot of temporary table creation, but not in the temp space.
What is the size of this database? Where does incoming data come from? Are there a lot of very frequently refreshed snapshots? Do they do a full refresh vs. a fast refresh? What is the db_block_buffers hit ratio like? Having a low hit ratio can indicate an excessively busy DBWR writing dirty blocks to disk, which can cause lots of checkpoints.
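Both hypotheses are quick to check; a sketch using the standard v$backup and v$sysstat views (statistic names as in Oracle 8i):

```sql
-- Datafiles stuck in hot backup mode show STATUS = 'ACTIVE'.
SELECT file#, status, time
FROM   v$backup
WHERE  status = 'ACTIVE';

-- Classic buffer cache hit ratio: 1 - physical reads / logical reads.
SELECT 1 - phy.value / (cur.value + con.value) AS hit_ratio
FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE  cur.name = 'db block gets'
AND    con.name = 'consistent gets'
AND    phy.name = 'physical reads';
```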
Dick Goulet
____________________Reply Separator____________________
Author: "Joan Hsieh" <Joan.Hsieh_at_mirror-image.com>
Date: 2/1/2001 6:05 AM
Dear Listers,
Our database in London has a tremendous amount of CF enqueue locking. Since I am new here, I checked the parameters and found log_checkpoint_interval set to 3200 and log_checkpoint_timeout left at the default (1800 sec). So I suggested setting log_checkpoint_interval to 100000000 and log_checkpoint_timeout to 0. The second thing I found is that we average 6 to 8 log switches per hour. We have 40 redo logs, each of them 250m. Our log buffer is set to 1m. I believed that after we changed the parameters, the control file schema global enqueue lock would be released. But it got worse; we have 98% control file enqueue lock now. I think we have too many log switches (9 per hour now) and suggested increasing the logs to 500m, but our principal DBA is not convinced; he thinks the log buffer size should play a more important role.
Any ideas,
Joan
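The log switch rate described above is easy to confirm from the standard v$log_history view; a sketch (grouping by hour, which assumes the history has not yet been aged out of the control file):

```sql
-- Log switches per hour over the retained history.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
```

Nine switches an hour against 250m logs quantifies the redo rate that the rest of the thread tries to explain.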
-- Please see the official ORACLE-L FAQ: http://www.orafaq.com
-- Author: Joan Hsieh
INET: Joan.Hsieh_at_mirror-image.com
Fat City Network Services -- (858) 538-5051 FAX: (858) 538-5051
San Diego, California -- Public Internet access / Mailing Lists
--------------------------------------------------------------------
To REMOVE yourself from this mailing list, send an E-Mail message to: ListGuru_at_fatcity.com (note EXACT spelling of 'ListGuru') and in the message BODY, include a line containing: UNSUB ORACLE-L (or the name of the mailing list you want to be removed from). You may also send the HELP command for other information (like subscribing).
Received on Thu Feb 01 2001 - 11:20:07 CST