Performance of database running on RAC 10g with two Solaris nodes [message #406006]
Mon, 01 June 2009 09:36
alexberi
Messages: 16 Registered: May 2009
Junior Member
Hello,
I have performance problems with a database running on RAC installed on two Solaris nodes (node1 and node2).
ASM is used with two disk groups (DG1, DG2):
- database files and redo logs are on DG1
- flash recovery area (FRA) files and archive log files are on DG2
SQL> alter system set db_recovery_file_dest='+DG2' scope=both;
SQL> alter system set log_archive_dest_10='location=use_db_recovery_file_dest';
SQL> alter database archivelog;
SQL> alter database enable block change tracking using file '+DG2';
SQL> alter database flashback on;
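For reference, this is how I verify that the settings above took effect (V$DATABASE is the standard view):
SQL> select log_mode, flashback_on from v$database;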
I need to increase performance. What approach do you recommend?
I am thinking of moving some files (the FRA or the redo logs) to an NFS file system mounted over the network on the two RAC nodes, but I don't know if this is possible.
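If it is possible, I imagine the change would look roughly like this (/nfs/fra and /nfs/redo are only hypothetical mount points; they would have to be mounted identically on both nodes):
SQL> alter system set db_recovery_file_dest='/nfs/fra' scope=both;
SQL> alter database add logfile member '/nfs/redo/redo01b.log' to group 1;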
Thanks a lot.
Re: Performance of database running on RAC 10g with two Solaris nodes [message #406577 is a reply to message #406159]
Thu, 04 June 2009 13:04
alexberi
Messages: 16 Registered: May 2009
Junior Member
Hello Alessandro,
Thank you for your sharp answer; I suppose I was not very clear in my request.
Hardware available:
- 2 Solaris servers with Oracle RAC 10g installed
- 1 NAS configured with RAID 5 (DG1 and DG2 disk groups of the ASM are stored on the NAS).
- archive log mode and flashback are enabled; they use DG2
- DG1 is used to store database files and redo logs (a query for inspecting the disk groups is sketched just below)
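To be more precise about what is inside the disk groups, they can be listed with the standard V$ASM views (generic query, run against the ASM instance):
select g.name as diskgroup, g.type as redundancy, g.total_mb, g.free_mb, d.path as disk_path
from v$asm_diskgroup g
join v$asm_disk d on d.group_number = g.group_number
order by g.name, d.path;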
Alessandro Rossi wrote on Tue, 02 June 2009 18:33:
alexberi wrote on Mon, 01 June 2009 16:36:
I have performance problems with a database running on RAC installed on two Solaris nodes (node1 and node2).
ASM is used with two disk groups (DG1, DG2):
- database files and redo logs are on DG1
- flash recovery area (FRA) files and archive log files are on DG2
So what can you tell us about DG1 and DG2?
What kind of storage system are you using and how is it performing on your benchmarks?
What is included in those disk groups (physical disks, or raw devices)?
What RAID configuration are you using?
During the tests I saw high wait times on I/O operations on both nodes of the RAC.
This is why I am thinking of moving the redo logs and/or the flashback area and/or the archived redo logs to other hardware; the only option I have available is an NFS volume, which is not really slow.
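For reference, this is roughly how I have been looking at the waits per instance (GV$SYSTEM_EVENT is the standard RAC-wide view; the event list is only an example):
select inst_id, event, total_waits,
       time_waited/100 as time_waited_sec,   -- time_waited is in centiseconds
       average_wait*10 as avg_wait_ms        -- average_wait is in centiseconds
from gv$system_event
where event in ('db file sequential read', 'db file scattered read',
                'log file sync', 'log file parallel write')
order by inst_id, time_waited desc;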
I hope this is clearer now.
Thank you, Alex
Re: Performance of database running on RAC 10g with two Solaris nodes [message #406587 is a reply to message #406006]
Thu, 04 June 2009 14:42
BlackSwan
Messages: 26766 Registered: January 2009 Location: SoCal
Senior Member
Tablespace       Reads   Av Reads/s  Av Rd(ms)  Av Blks/Rd  Writes  Av Writes/s  Buffer Waits  Av Buf Wt(ms)
BLUE_M064_CLOB  171,791          24       1.14        1.01  30,928            4         1,541          69.15
UNDOTBS1         92,877          13       0.95        1.00   8,949            1         1,318           0.09
BLUE_AUTO_INDX   18,284           3       6.47        2.02  49,643            7        18,360          24.54
In round numbers I see about 3 times more WRITEs than READs on the BLUE_AUTO_INDX tablespace.
If this TS really holds mostly indexes, I would suspect that index "rebuilds" were being done.
Typically in OLTP, READs outnumber WRITEs by about 10 to 1.
I am not claiming this is bad; just curious.
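For what it is worth, the same READ/WRITE ratio can be checked outside the AWR report with the standard V$SEGMENT_STATISTICS view (generic query, not your output):
select tablespace_name,
       sum(case when statistic_name = 'physical reads'  then value else 0 end) as phys_reads,
       sum(case when statistic_name = 'physical writes' then value else 0 end) as phys_writes
from   v$segment_statistics
where  statistic_name in ('physical reads', 'physical writes')
group  by tablespace_name
order  by phys_writes desc;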
Upon further review I see the following:
Segments by Row Lock Waits
* % of Capture shows % of row lock waits for each top segment compared
  with total row lock waits for all segments captured by the Snapshot

Owner  Tablespace Name  Object Name                  Subobject Name  Obj. Type  Row Lock Waits  % of Capture
HEAD   BLUE_AUTO_INDX   JMS_MESSAGES_DESTINATION_IB                  INDEX               5,018         78.71
My initial reaction is that this index is involved with message logging and that the indexed column is populated from a SEQUENCE.
If so, a slight performance gain might result from including the keyword REVERSE in the DDL for this index.
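A minimal sketch of what that DDL would look like, assuming a plain single-column B-tree index (the owner comes from the report above; the column name is only inferred from the index name and may need adjusting):
drop index HEAD.JMS_MESSAGES_DESTINATION_IB;

create index HEAD.JMS_MESSAGES_DESTINATION_IB
    on HEAD.JMS_MESSAGES (DESTINATION)
    reverse
    tablespace BLUE_AUTO_INDX;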
[Updated on: Thu, 04 June 2009 14:59]
Re: Performance of database running on RAC 10g with two Solaris nodes [message #406766 is a reply to message #406587]
Fri, 05 June 2009 12:59
alexberi
Messages: 16 Registered: May 2009
Junior Member
1) You were right (again): all table indexes are stored in the BLUE_AUTO_INDX tablespace:
CREATE TABLESPACE BLUE_AUTO_INDX
  DATAFILE ''&5.BLUE_AUTO_INDX01.DBF'' SIZE 200M REUSE
  AUTOEXTEND ON NEXT 2M MAXSIZE UNLIMITED
  LOGGING ONLINE PERMANENT
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO
I don't know if this is the best way to do it.
2) In particular, the index JMS_MESSAGES_DESTINATION_IB is a BITMAP index:
create bitmap index JMS_MESSAGES_DESTINATION_IB on JMS_MESSAGES (DESTINATION)
tablespace BLUE_AUTO_INDX
pctfree 10
initrans 2
maxtrans 255
storage
(
initial 64K
minextents 1
maxextents unlimited
);
I will add REVERSE to the index and see if there is any improvement.
3) In the AWR report there are 3 columns of type CLOB that hurt performance the most, for both read and write operations. I don't know how to improve this. The columns are: MESSAGECONTENTS.PLAINBODY, MESSAGECONTENTS.PLAINBODY and JMS_MESSAGES.MESSAGEBLOB:
CREATE TABLE MESSAGECONTENTS
(
ID VARCHAR2(16 BYTE) NOT NULL,
SYSTEMID VARCHAR2(8 BYTE) NOT NULL,
CREATIONTIMESTAMP DATE NOT NULL,
BODY CLOB,
PLAINBODY CLOB,
FORMATTYPE VARCHAR2(8 BYTE) NOT NULL,
TEMPLATEID VARCHAR2(18),
CUSTOMDATA CLOB
)
TABLESPACE BLUE_M008_NTAB
PCTUSED 60
PCTFREE 10
INITRANS 4
MAXTRANS 255
STORAGE (
INITIAL 8M
NEXT 8M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
LOB (BODY) STORE AS MESSAGECONTENTS_BODY
( TABLESPACE BLUE_M064_CLOB
ENABLE STORAGE IN ROW
CHUNK 4096
PCTVERSION 10
NOCACHE
STORAGE (
INITIAL 64M
NEXT 64M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
)
LOB (PLAINBODY) STORE AS MESSAGECONTENTS_PLAINBODY
( TABLESPACE BLUE_M064_CLOB
ENABLE STORAGE IN ROW
CHUNK 4096
PCTVERSION 10
NOCACHE
STORAGE (
INITIAL 64M
NEXT 64M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
)
LOB (CUSTOMDATA) STORE AS MESSAGECONTENTS_CUSTOMDATA
( TABLESPACE BLUE_M064_CLOB
ENABLE STORAGE IN ROW
CHUNK 4096
PCTVERSION 10
NOCACHE
STORAGE (
INITIAL 64M
NEXT 64M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
)
NOCACHE
NOPARALLEL;
Tablespace creation:
CREATE TABLESPACE BLUE_M064_CLOB
  DATAFILE ''&5.BLUE_M064_CLOB01.DBF'' SIZE 2048M REUSE
  AUTOEXTEND ON NEXT 1024M MAXSIZE UNLIMITED
  LOGGING ONLINE PERMANENT
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64M
  SEGMENT SPACE MANAGEMENT AUTO
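One thing that stands out in the DDL above is that all three LOB segments are NOCACHE; if it turns out that caching LOB reads is appropriate here, the change would be something like this (untested sketch using the standard ALTER TABLE ... MODIFY LOB syntax):
alter table MESSAGECONTENTS modify lob (BODY)       (cache reads);
alter table MESSAGECONTENTS modify lob (PLAINBODY)  (cache reads);
alter table MESSAGECONTENTS modify lob (CUSTOMDATA) (cache reads);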