reading from AUD$ Tables causes the performance issue ? [message #580316] Fri, 22 March 2013 12:32
needee
Messages: 19
Registered: November 2009
Junior Member
I am not a DBA.

One of my customers is running Oracle 11g R2.

I am responsible for collecting logs from the Oracle database via a Log Manager. The Log Manager connects to the database via JDBC and reads the audit information (logs/events) from the audit trail (AUD$) table.

As per the Log Manager documentation:
<snip>
The Database Connector reads data from the database and keeps track of its offset, or position, in the data.
The next time the Connector starts, it uses the saved offset to start where it stopped.
This prevents the Connector from skipping data or sending duplicate data.
</snip>
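In other words, the connector's incremental read presumably boils down to a query shaped like the following sketch (the table, column, and bind names here are illustrative, not the tool's actual SQL):

```sql
-- Illustrative sketch only: fetch rows newer than the saved offset,
-- oldest first, so the connector can persist the last value it processed.
SELECT *
  FROM sys.dba_audit_trail
 WHERE timestamp > :last_saved_offset  -- bind: offset saved by the previous run
 ORDER BY timestamp;
```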

Problem:
The DBA asked me to configure the Log Manager to connect at an interval of 80 seconds, because every time the Log Manager connects, it reads/queries/scans the whole AUD$ table, which causes significant performance issues.

My question here is: does it really affect performance if a Log Manager reads the AUD$ table at an interval of 10 seconds?

Or would it help if we kept the AUD$ table small, e.g. if the DBA set the AUD$ table to not grow larger than 200 MB?
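As a side note on the size question: in 11g the supported way to keep SYS.AUD$ small is the DBMS_AUDIT_MGMT package rather than a hard size cap. A sketch, assuming the DBA has initialized the cleanup infrastructure with DBMS_AUDIT_MGMT.INIT_CLEANUP and archived the old records first:

```sql
-- Sketch: purge standard audit-trail rows up to the last archive timestamp.
BEGIN
   DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
      audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
      use_last_arch_timestamp => TRUE);
END;
/
```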

[Updated on: Fri, 22 March 2013 12:34]

Re: reading from AUD$ Tables causes the performance issue ? [message #580319 is a reply to message #580316] Fri, 22 March 2013 12:46
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Quote:
My question here is: does it really affect performance if a Log Manager reads the AUD$ table at an interval of 10 seconds?


Yes. The worst thing is not querying the SYS.AUD$ table (even every 10 seconds) but connecting every 10 seconds.
Are you sure your log manager CONNECTs every 10 seconds?

Regards
Michel
Re: reading from AUD$ Tables causes the performance issue ? [message #580320 is a reply to message #580319] Fri, 22 March 2013 12:57
needee
Messages: 19
Registered: November 2009
Junior Member
Michel Cadot wrote on Fri, 22 March 2013 12:46

Are you sure your log manager CONNECTs every 10 seconds?


I didn't find anything in the docs about how often the database connector connects to the database to read from the SYS.AUD$ table, but this is what the DBA told me.

Also, if I understood you correctly: reading/scanning the whole SYS.AUD$ table does not cause the performance issue? It's the connection frequency that can cause a performance issue?

[Updated on: Fri, 22 March 2013 13:00]

Re: reading from AUD$ Tables causes the performance issue ? [message #580323 is a reply to message #580320] Fri, 22 March 2013 13:10
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Quote:
Reading/scanning the whole SYS.AUD$ table does not cause the performance issue?


It depends on the query and size of the table.

Quote:
It's the connection frequency that can cause a performance issue?


Most likely, unless your DBA configured a connection pool on the database server.

Regards
Michel
Re: reading from AUD$ Tables causes the performance issue ? [message #580324 is a reply to message #580323] Fri, 22 March 2013 13:21
needee
Messages: 19
Registered: November 2009
Junior Member
I don't know about the size of the database, but this tool (the Log Manager with its JDBC connector) uses the following stored procedure to collect information from Oracle:

create or replace PROCEDURE SAMPLESTOREDPROC_GET (
   rc        OUT SYS_REFCURSOR,
   offsetStr IN  VARCHAR2
)
is
begin
   -- Open a cursor over the rows past the saved offset
   open rc for
      SELECT * FROM SAMPLETABLE WHERE s_db_offset > offsetStr;
end SAMPLESTOREDPROC_GET;
/
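For reference, a quick way to exercise such a ref-cursor procedure from SQL*Plus might be (the offset value here is made up):

```sql
-- Hypothetical smoke test; SAMPLESTOREDPROC_GET/SAMPLETABLE come from the
-- vendor docs above, '1000' is an arbitrary offset.
VARIABLE rc REFCURSOR
EXEC SAMPLESTOREDPROC_GET(:rc, '1000');
PRINT rc
```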

Also from the docs:
Quote:

The Oracle stored procedure should have two parameters: an OUT parameter of type
SYS_REFCURSOR that returns a cursor that can be used to retrieve data, and an IN parameter,
which is an offset that can be used to decide which data is selected.


Re: reading from AUD$ Tables causes the performance issue ? [message #580325 is a reply to message #580324] Fri, 22 March 2013 13:29
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Good! It is the efficient way to use Oracle from JDBC.
Hopefully the program uses array fetch; this is tool code, so I assume you don't have the source, but maybe the documentation mentions an array size configuration parameter.

Also be sure of the type of the parameter: it must be of DATE datatype.
Can you post the actual code of the procedure that queries the SYS.AUD$ table?

Regards
Michel
Re: reading from AUD$ Tables causes the performance issue ? [message #580326 is a reply to message #580325] Fri, 22 March 2013 13:58
needee
Messages: 19
Registered: November 2009
Junior Member
From the docs:
<snip>
Log Manager provides two different methods by which audit data can be captured from Oracle Database:
1. Normal JDBC read of the audit tables: with this method, the Log Manager will initiate a JDBC
connection to the database, issue a standard SQL query, and process the results.
2. Invocation of a stored procedure to read the audit tables: with this method, the Log Manager
will initiate a JDBC connection to the database, but will invoke a stored procedure which must be pre-installed in
the database; this stored procedure will read the audit tables and return records to Sentinel for processing.
</snip>

I just recalled that we are not using the stored procedure method but the SQL query method.

Also, the product documentation does not provide any information about an array size parameter.
Re: reading from AUD$ Tables causes the performance issue ? [message #580327 is a reply to message #580326] Fri, 22 March 2013 14:54
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Both ways are efficient; it is just a matter of security policy which one to choose.
The important thing is that the parameter must be of the same type as the column it is checked against (I think "timestamp# > [last date]" in this case).

Regards
Michel
Re: reading from AUD$ Tables causes the performance issue ? [message #580330 is a reply to message #580327] Fri, 22 March 2013 15:51
needee
Messages: 19
Registered: November 2009
Junior Member
Thanks for your quick and continuous responses; much appreciated.
Quote:
The important thing is that the parameter must be of the same type as the column it is checked against (I think "timestamp# > [last date]" in this case).


I don't know if the following shows the parameter info, but that is how we enabled the auditing, as per the docs:

Grant the privileges CREATE SESSION and SELECT on v_$session, v_$version and SYS.DBA_AUDIT_TRAIL to
a user account; this account will be used by the Log Manager to query the audit trail. Execute the following queries to grant
privileges:
grant CREATE SESSION to <user>;
grant select on v_$session to <user>;
grant select on v_$version to <user>;
grant select on SYS.DBA_AUDIT_TRAIL to <user>;

Re: reading from AUD$ Tables causes the performance issue ? [message #580331 is a reply to message #580330] Fri, 22 March 2013 16:06
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
So you use DBA_AUDIT_TRAIL and not SYS.AUD$ (and this is better); the condition should then be "TIMESTAMP > [last date]".
What would be interesting is the actual query or procedure code used by the tool.
If you don't have it, you can ask your DBA to trace the tool and get it that way.
If that is the case, your DBA should create an index on the sys.aud$.timestamp# column; but first, to keep your contract with Oracle, you have to ask Oracle Support to confirm this will not invalidate your support (it should not, but you must ask so they keep supporting you). After that, your DBA must keep statistics up to date on sys.aud$; the frequency depends on the insert rate into this table (how many rows are inserted per hour/day).
Your DBA should also take care to drop the index before applying any patch/patchset and recreate it afterwards.

Regards
Michel
Re: reading from AUD$ Tables causes the performance issue ? [message #580339 is a reply to message #580331] Sat, 23 March 2013 01:28
needee
Messages: 19
Registered: November 2009
Junior Member
Michel Cadot wrote on Fri, 22 March 2013 16:06

What would be interesting is the actual query or procedure code used by the tool.
If you don't have it, you can ask your DBA to trace the tool and get it that way.

I will try to provide the actual SQL Query.

Michel Cadot wrote on Fri, 22 March 2013 16:06

If that is the case, your DBA should create an index on the sys.aud$.timestamp# column; but first, to keep your contract with Oracle, you have to ask Oracle Support to confirm this will not invalidate your support (it should not, but you must ask so they keep supporting you). After that, your DBA must keep statistics up to date on sys.aud$; the frequency depends on the insert rate into this table (how many rows are inserted per hour/day).
Your DBA should also take care to drop the index before applying any patch/patchset and recreate it afterwards.

Thanks for your recommendation, it will really help us.

Michel Cadot wrote on Fri, 22 March 2013 12:46

Are you sure your log manager CONNECTs every 10 seconds?

You were right: the Log Manager (via the JDBC connector) does not connect every 10 seconds; it remains connected to the database.

I have no idea about the SQL query or the size of the DBA_AUDIT_TRAIL table; I will provide you this information as soon as I get it from the DBA.

But as the DBA is complaining that reading audit events via the Log Manager causes a performance issue: rather than reading DBA_AUDIT_TRAIL continuously, would you recommend, as a quick fix, starting the Log Manager JDBC connector every hour for five minutes only and then stopping it?

Do you think reading/scanning the DBA_AUDIT_TRAIL table at an interval of 1 hour will minimize the performance issue?

Re: reading from AUD$ Tables causes the performance issue ? [message #580343 is a reply to message #580339] Sat, 23 March 2013 01:49
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Quote:
But as the DBA is complaining that reading audit events via the Log Manager causes a performance issue..


This should no longer be the case with the index.

Quote:
rather than reading DBA_AUDIT_TRAIL continuously, would you recommend, as a quick fix, starting the Log Manager JDBC connector every hour for five minutes only and then stopping it?


If you can change the frequency, that would be the best solution rather than stopping and restarting the tool. If you can't, then yes, but you have to pay attention to your security/audit policy: how much audit time can you afford to lose? With a query every 10 seconds, a hacker has no more than 10 seconds and has to erase his traces in under 10 seconds; with a query every hour, he has one hour to work without hurrying. If your purpose is not to trap a hacker but to follow application users' activity, then a query every hour is sufficient (I think, but it depends on the number of users and the number of audited actions they perform per minute/hour).

Regards
Michel

[Updated on: Sat, 23 March 2013 01:49]

Re: reading from AUD$ Tables causes the performance issue ? [message #580397 is a reply to message #580343] Sun, 24 March 2013 00:31
needee
Messages: 19
Registered: November 2009
Junior Member
Finally, I got the SQL query:
SELECT OS_USERNAME, USERNAME, USERHOST, TERMINAL,
   TO_CHAR(EXTENDED_TIMESTAMP ,'YYYY/MM/DD HH24:MI:SS') AS EXTENDEDTIME,
   TO_CHAR(LOGOFF_TIME ,'YYYY/MM/DD HH24:MI:SS') AS LFTIME,	
   TO_CHAR(TIMESTAMP ,'YYYY/MM/DD HH24:MI:SS') TIMESTAMP,  
   ACTION, ACTION_NAME, SESSIONID, ENTRYID, STATEMENTID, RETURNCODE, 
   OWNER, OBJ_NAME , GLOBAL_UID, SCN, GRANTEE, SQL_BIND, SQL_TEXT,
   (select MACHINE from v$session where schemaname='SYS' and rownum=1) as DHN,
   (select * from v$version where BANNER LIKE 'Oracle%') AS DB_VERSION
FROM SYS.DBA_AUDIT_TRAIL 
WHERE TIMESTAMP >= TO_TIMESTAMP ('%s', 'YYYY/MM/DD HH24:MI:SS.FF6') and rownum <= %d
ORDER BY TIMESTAMP


Regards,
Re: reading from AUD$ Tables causes the performance issue ? [message #580398 is a reply to message #580397] Sun, 24 March 2013 02:24
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
1/ The index I mentioned will speed up the query and avoid the full table scan.
2/ The query should be rewritten as follows:
with db_version as (select /*+ materialize */ banner from v$version where BANNER LIKE 'Oracle%' and rownum=1)
SELECT OS_USERNAME, USERNAME, USERHOST, TERMINAL,
   TO_CHAR(EXTENDED_TIMESTAMP ,'YYYY/MM/DD HH24:MI:SS') AS EXTENDEDTIME,
   TO_CHAR(LOGOFF_TIME ,'YYYY/MM/DD HH24:MI:SS') AS LFTIME,	
   TO_CHAR(TIMESTAMP ,'YYYY/MM/DD HH24:MI:SS') TIMESTAMP,  
   ACTION, ACTION_NAME, SESSIONID, ENTRYID, STATEMENTID, RETURNCODE, 
   OWNER, OBJ_NAME , GLOBAL_UID, SCN, GRANTEE, SQL_BIND, SQL_TEXT,
   sys_context('USERENV','SERVER_HOST') as DHN,
   banner as DB_VERSION
FROM SYS.DBA_AUDIT_TRAIL, db_version 
WHERE TIMESTAMP >= TO_TIMESTAMP ('%s', 'YYYY/MM/DD HH24:MI:SS.FF6') and rownum <= %d
ORDER BY TIMESTAMP

I don't understand why EXTENDED_TIMESTAMP is used when the query ignores the specific part of this column, the time zone; this will lead to interpretation errors if the client and server are not in the same time zone (the subsecond part is also ignored). So TIMESTAMP is sufficient, and you know it is in the server time zone. If you really want the extended timestamp, then the format should be something like "YYYY/MM/DD HH24:MI:SS.FF6 TZR".

In the end, TIMESTAMP is of DATE datatype, so the test should be:
WHERE TIMESTAMP >= TO_DATE('%s', 'YYYY/MM/DD HH24:MI:SS') and rownum <= %d


Regards
Michel

[Edit: Add last comment]

[Updated on: Sun, 24 March 2013 02:27]

Re: reading from AUD$ Tables causes the performance issue ? [message #580436 is a reply to message #580398] Sun, 24 March 2013 07:27
needee
Messages: 19
Registered: November 2009
Junior Member
Hi Michel,

I really appreciate your continuous support.

I asked this same question on forums.oracle.com, and the experts there do not recommend creating/adding an index. I would highly appreciate it if you could have a look at
https://forums.oracle.com/forums/thread.jspa?threadID=2516048&tstart=0 and help me understand.

One of the experts there even said that creating/adding an index is a waste of time, and another said it is not supported.

Also, as per your recommendation, I would use the following SQL query to capture audit information:
SELECT OS_USERNAME, USERNAME, USERHOST, TERMINAL,
   TO_CHAR(EXTENDED_TIMESTAMP ,'YYYY/MM/DD HH24:MI:SS') AS EXTENDEDTIME,
   TO_CHAR(LOGOFF_TIME ,'YYYY/MM/DD HH24:MI:SS') AS LFTIME,	
   TO_CHAR(TIMESTAMP ,'YYYY/MM/DD HH24:MI:SS') TIMESTAMP,  
   ACTION, ACTION_NAME, SESSIONID, ENTRYID, STATEMENTID, RETURNCODE, 
   OWNER, OBJ_NAME , GLOBAL_UID, SCN, GRANTEE, SQL_BIND, SQL_TEXT,
   sys_context('USERENV','SERVER_HOST') as DHN,
   banner as DB_VERSION
FROM SYS.DBA_AUDIT_TRAIL, db_version 
WHERE TIMESTAMP >= TO_DATE('%s', 'YYYY/MM/DD HH24:MI:SS') and rownum <= %d
ORDER BY TIMESTAMP


Also, we have all the database servers and the Log Manager in the same time zone, and every server is synchronized by time servers, so chances are almost nil that the time will differ on any server.

Thanks once again

Regards

[Updated on: Sun, 24 March 2013 07:28]

Re: reading from AUD$ Tables causes the performance issue ? [message #580441 is a reply to message #580436] Sun, 24 March 2013 09:32
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Jonathan is correct; I didn't check the view definition (I should have). The best option is to directly query SYS.AUD$ rather than DBA_AUDIT_TRAIL, and to add an index on SYS.AUD$.NTIMESTAMP#, as TIMESTAMP# is no longer used.

Don't forget the WITH part of my query:
with db_version as (select /*+ materialize */ banner from v$version where BANNER LIKE 'Oracle%' and rownum=1)
SELECT spare1 as OS_USERNAME, 
       userid as USERNAME, 
       USERHOST, 
       TERMINAL,
       TO_CHAR(from_tz(ntimestamp#,'00:00') at local,'YYYY/MM/DD HH24:MI:SS.FF6 TZR') 
         AS EXTENDEDTIME,
       TO_CHAR(logoff$time,'YYYY/MM/DD HH24:MI:SS') AS LFTIME,	
       TO_CHAR(cast((from_tz(ntimestamp#,'00:00') at local) as date),'YYYY/MM/DD HH24:MI:SS') 
         as TIMESTAMP,  
       aud.action# as ACTION, 
       act.name as ACTION_NAME, 
       SESSIONID, 
       ENTRYID, 
       statement as STATEMENTID, 
       RETURNCODE, 
       obj$creator as OWNER, 
       obj$name as OBJ_NAME, 
       user$guid as GLOBAL_UID, 
       SCN, 
       auth$grantee as GRANTEE, 
       to_nchar(substr(sqlbind,1,2000)) as SQL_BIND, 
       to_nchar(substr(sqltext,1,2000)) as SQL_TEXT,
       sys_context('USERENV','SERVER_HOST') as DHN,
       banner as DB_VERSION
FROM SYS.aud$ aud, audit_actions act, db_version 
WHERE ntimestamp# >= TO_TIMESTAMP ('%s', 'YYYY/MM/DD HH24:MI:SS.FF6') 
  and rownum <= %d
  and act.action (+) = aud.action# 
ORDER BY ntimestamp#


Creating an index on a SYS table is not supported unless you have Oracle Support's backing; this is why I asked you to check with them.
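For what it's worth, a sketch of the DDL this advice implies (the index name is illustrative, and again: only with Oracle Support's confirmation, and remember to drop/recreate it around patches):

```sql
-- Sketch: index the column the incremental query filters on (NTIMESTAMP#
-- in 11g), then refresh statistics so the optimizer can use it.
CREATE INDEX sys.aud_ntimestamp_ix ON sys.aud$ (ntimestamp#);

EXEC DBMS_STATS.GATHER_TABLE_STATS('SYS', 'AUD$');
```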

Regards
Michel
Re: reading from AUD$ Tables causes the performance issue ? [message #580442 is a reply to message #580441] Sun, 24 March 2013 09:45
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
There is an error in the line checking the timestamp; it should be:
WHERE ntimestamp# >= FROM_TZ(TO_TIMESTAMP ('%s', 'YYYY/MM/DD HH24:MI:SS.FF6'),dbtimezone) at time zone 'UTC'

Regards
Michel
Re: reading from AUD$ Tables causes the performance issue ? [message #580446 is a reply to message #580442] Sun, 24 March 2013 14:03
needee
Messages: 19
Registered: November 2009
Junior Member
I just passed your comments and SQL query on to the Log Manager support team:

Quote:

Dear Support,

I would appreciate it if you could pass the following recommendation, which I received from an Oracle expert, to the development team.

The best option is to directly query SYS.AUD$ rather than DBA_AUDIT_TRAIL, and to add an index on SYS.AUD$.NTIMESTAMP#, as TIMESTAMP# is no longer used.
The query should be rewritten as follows:
with db_version as (select /*+ materialize */ banner from v$version where BANNER LIKE 'Oracle%' and rownum=1)
SELECT spare1 as OS_USERNAME, 
       userid as USERNAME, 
       USERHOST, 
       TERMINAL,
       TO_CHAR(from_tz(ntimestamp#,'00:00') at local,'YYYY/MM/DD HH24:MI:SS.FF6 TZR') 
         AS EXTENDEDTIME,
       TO_CHAR(logoff$time,'YYYY/MM/DD HH24:MI:SS') AS LFTIME,	
       TO_CHAR(cast((from_tz(ntimestamp#,'00:00') at local) as date),'YYYY/MM/DD HH24:MI:SS') 
         as TIMESTAMP,  
       aud.action# as ACTION, 
       act.name as ACTION_NAME, 
       SESSIONID, 
       ENTRYID, 
       statement as STATEMENTID, 
       RETURNCODE, 
       obj$creator as OWNER, 
       obj$name as OBJ_NAME, 
       user$guid as GLOBAL_UID, 
       SCN, 
       auth$grantee as GRANTEE, 
       to_nchar(substr(sqlbind,1,2000)) as SQL_BIND, 
       to_nchar(substr(sqltext,1,2000)) as SQL_TEXT,
       sys_context('USERENV','SERVER_HOST') as DHN,
       banner as DB_VERSION
FROM SYS.aud$ aud, audit_actions act, db_version 
WHERE ntimestamp# >= FROM_TZ(TO_TIMESTAMP ('%s', 'YYYY/MM/DD HH24:MI:SS.FF6'),dbtimezone) at time zone 'UTC'
  and rownum <= %d
  and act.action (+) = aud.action# 
ORDER BY ntimestamp#

I don't understand why EXTENDED_TIMESTAMP is used when the query ignores the specific part of this column, the time zone; this will lead to interpretation errors if the client and server are not in the same time zone (the subsecond part is also ignored). So TIMESTAMP is sufficient, and you know it is in the server time zone.
If you really want the extended timestamp, then the format should be something like "YYYY/MM/DD HH24:MI:SS.FF6 TZR".



Regards,
Re: reading from AUD$ Tables causes the performance issue ? [message #580450 is a reply to message #580446] Sun, 24 March 2013 14:37
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
The last part is no longer true, as in my query EXTENDEDTIME takes the time zone into account (see TZR in the format).

Regards
Michel