Re: Oracle Tech Support
You think that's good? Try this one...
Me
---
RedHat Linux 7.2, Oracle 9.2.0.2
dbca *always* hangs at 41% - "Creating and starting Oracle instance".
No background process ever gets started - no pmon, no smon, nothing.
This is regardless of kernel parameters, memory or any other such stuff.
Running a script to create an *identical* database works every time.
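(For the record, the "script" is nothing exotic - just SQL*Plus run from the shell. A rough sketch follows; the SID, paths and sizes are invented, and the init.ora is assumed to already set db_name, control_files, undo_management=auto and the like. The ps check up front is how I confirm "no pmon, no smon, nothing".)

#!/bin/sh
export ORACLE_SID=test92

# sanity check: no ora_pmon_test92, ora_smon_test92, etc. running
ps -ef | grep "[o]ra_.*$ORACLE_SID"

# minimal manual creation - the sort of thing that works every time
# where dbca does not
sqlplus /nolog <<'EOF'
connect / as sysdba
startup nomount pfile=/u01/app/oracle/admin/test92/pfile/init.ora
create database test92
  datafile '/u02/oradata/test92/system01.dbf' size 300m autoextend on
  default temporary tablespace temp
    tempfile '/u02/oradata/test92/temp01.dbf' size 100m
  undo tablespace undotbs1
    datafile '/u02/oradata/test92/undotbs01.dbf' size 200m
  logfile group 1 ('/u02/oradata/test92/redo01.log') size 50m,
          group 2 ('/u02/oradata/test92/redo02.log') size 50m;
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
EOF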
"Support"
Me
---
New info : There are no messages in the alert log - except for the "shutdown
abort" after I cancel dbca. Prior to that, there is nothing. It is empty.
Support
Have you connected and checked from v$session_wait to see whether sessions are moving or not? See <Note:68738.1> "Hang or Spin?".
column sid format 990
column seq# format 99990
column wait_time heading 'WTime' format 99990
column event format a30
column p1 format 9999999990
column p2 format 9999999990
column p3 format 9990
select sid,event,seq#,p1,p2,p3,wait_time
from V$session_wait
/
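(For anyone playing along at home: the way to actually use that query is to snapshot it a couple of times a few seconds apart - if seq# and the events keep changing, the sessions are working; if nothing moves, they are well and truly hung. Something along these lines, assuming "connect / as sysdba" works, with an arbitrary 10-second pause:)

sqlplus -s /nolog <<'EOF'
connect / as sysdba
set pagesize 100 linesize 132
column event format a30

-- first snapshot
select sid, event, seq#, p1, p2, p3, wait_time from v$session_wait;

-- pause, then look again; compare seq# and wait_time between the two passes
execute dbms_lock.sleep(10)
select sid, event, seq#, p1, p2, p3, wait_time from v$session_wait;
EOF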
Me
---
New info : Perhaps I need to be more explicit...
Here is the *entire* alert log immediately before canceling dbca (after it
sat there motionless for over an hour):
--- Start of alert log ----
--- End of alert log ----
"Support"
I have done some search and I found followings :
<Bug:2461946>
Abstract: CREATE DATABASE HUNGS UP WHEN WE SET DISK_ASYNCH_IO TRUE
O/S: 46 Intel Based Server LINUX
Status: 95,Closed, Vendor OS Problem
The problem is identified as being OS configuration.
Workaround :
echo 1048576 >/proc/sys/fs/aio-max-size and reboot.
Solution:
Use JRE not JDK.
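(In fairness, the suggested workaround amounts to something like the sketch below - run as root, with an existence check added, since /proc/sys/fs/aio-max-size only exists on kernels carrying the fs async-I/O patch. The 1048576 value is straight from the bug note; whether any of this applies is another matter.)

#!/bin/sh
# workaround from the bug note - only meaningful if the kernel
# exposes the fs aio tunable at all
if [ -f /proc/sys/fs/aio-max-size ]; then
    echo "current aio-max-size: `cat /proc/sys/fs/aio-max-size`"
    echo 1048576 > /proc/sys/fs/aio-max-size
    # to survive reboots, the equivalent sysctl entry would be
    # fs.aio-max-size = 1048576 in /etc/sysctl.conf
else
    echo "no /proc/sys/fs/aio-max-size on this kernel"
fi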
Me
---
Guessed? "No pmon, no smon, nothing" seems to indicate "no instance".
I'm using Linux 7.2, not 7.1. I have Oracle 9.2.0.2, not 9.0.1.
/proc/sys/fs/aio-max-size is a problem as there is no fs aio in RH 7.2.
However, on our RH AS 2.1 machine, also with Oracle 9.2.0.2,
dbca hangs as well - and aio-max-size is already 1M.
CPU was sleepy, not spinning (evidently in wash cycle).
J-R-E J-D-K ... M-O-U-S-E
Solution: Resurrect orainst /c
Please close this TAR! I can't take any more!
Thanks for your help!
Every time I file a TAR these days, I get this "are you sure it's plugged in?" sort of treatment. I've been tempted several times to ask them: "If there is a CUSTOMER_IS_A_BOZO flag somewhere in your database, could you please set it to FALSE - the default must be TRUE."
I finally just stopped using Metalink months ago because it is so entirely useless and a huge waste of time. I thought this one might be easy though. Evidently, I was wrong.
[It couldn't really be "a problem with the DBCA Java program" could it? Nah! Not from a company famous for its "unbreakable" software! And all their other Java-based GUI tools have always been flawless. ;-]
I filed a TAR about 5 months ago about a 9i bug - extremely slow queries against v$datafile ("select name from v$datafile"). After jumping through entirely irrelevant hoops at support's request for a month, I just let the TAR die. Nobody was bothering to take anything I said seriously. I had uploaded a '10046' level 8 trace on my own initiative, ran tkprof against the query on both 8.1.7.4 (0.01 sec) and 9.2.0.2 (6.86 sec), uploaded both the .trc and the tkprof output for both, wrote up a detailed description of the differences (few except for time), and still got the idiot treatment. I said it was a bug and nobody would even consider the possibility.
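(The trace-and-tkprof exercise I mention is nothing more exotic than the sketch below - file names invented, and 10046 level 8 being the standard "SQL trace plus wait events" incantation.)

#!/bin/sh
# trace the problem query at 10046 level 8, then summarize with tkprof
sqlplus -s /nolog <<'EOF'
connect / as sysdba
alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 8';
select name from v$datafile;
alter session set events '10046 trace name context off';
EOF

# the trace file lands in user_dump_dest - adjust the name to the real one
tkprof /u01/app/oracle/admin/PROD/udump/prod_ora_12345.trc \
       v_datafile_9202.prf sort=prsela,exeela,fchela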
For about twelve rounds, they kept coming back with things like: that I couldn't expect the query to run the same unless the hardware was identical (even after I explained that 8.1.7.4 was running on a single-CPU 1.2 GHz PIII Dell desktop with RedHat Linux 7.2, 256M RAM and a single IDE drive, while 9.2.0.2 was running on a dual 1.4 GHz Dell server with RedHat Linux 7.2, 4G RAM, and the 9i database alone on a Dell/EMC FC4700 array with 2 GB of cache and a bunch of 15k SCSI drives set up as RAID 0+1, except for redo on dedicated mirrored drives).
Or that I should consider moving redo logs off of disks that also hold datafiles...
Or that the datafiles might need to be redistributed to balance I/O. (How much I/O does "select name from v$datafile" require? Is I/O redistribution likely to make it 400-600 times faster? Besides, the database and system were idle except for me and a few background processes.)
Arrrrgh!
Guess what? It was a bug. When I later found and applied patch #2773907, the problem disappeared entirely - the query went from >6.5 sec to 0.01-0.02 sec. All the nonsense that support suggested or asked for was entirely useless in solving the problem - and I politely told them so at the time (but humored them anyway).
Don Granaman
OraSaurus on the brink...
> Today I opened yet another iTar on the VERY buggy 9iAS R2 Reports Server.
> Below is a CLASSIC response to my report of errors being generated.
>
> "There is a Unix generic solution that you can try. Use the command like
> this:
>
> rwclient.sh userid=mwh/*******@prod authid=orcladmin/******
>     desformat=postscript server=repcosmora001 report=edi810ii
>     destype=PRINTER desname=cohpfin013 print_apunprinted=Y
>     mode=default > /dev/null
>
> This would not display the error message on the terminal.
> Can you try this workaround and let me know whether this is okay?"
>
> So, the next time anybody gets an error message from Oracle,
> simply wrap a blindfold over your eyes, send the message into
> the bit bucket, & go merrily on your way.
>
> Error? What error?
>
> UNBELIEVABLE!