Re: Controlling load

From: Jack van Zanen <jack_at_vanzanen.com>
Date: Wed, 22 Feb 2023 23:09:59 +1100
Message-ID: <CAFeFPA9YXpihE4B928LdTCnVpUVzfnsG4XQoE+L7bvCnZSQ2nQ_at_mail.gmail.com>



Hi

If both use the same service, which has node 1 as its preferred node, doesn't that mean that none of them are connecting to node 2? The service will only be active on node 2 if node 1 is down.
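
You can confirm where the service is actually running and how it is configured with something like the following (db and service names here are just placeholders, substitute your own):

  srvctl config service -db orcl -service app_svc
  srvctl status service -db orcl -service app_svc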

Set up another service with node 2 as its preferred node and separate the two processes... at least that way a batch job running wild on one node will not take away all the CPU and memory from the other processes.
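
A rough sketch, untested, with orcl as the db_unique_name, orcl1/orcl2 as the instance names and batch_svc as the new service name (all placeholders for your environment):

  # new service for the batch program, preferred on node 2, falls back to node 1
  srvctl add service -db orcl -service batch_svc -preferred orcl2 -available orcl1
  srvctl start service -db orcl -service batch_svc

  # leave prog_online on the existing node-1 service and repoint the
  # prog_batch connect string / tnsnames entry at batch_svc

That way each program gets its own node during normal operation and they only share a node when one of the nodes is down.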

On Wed, 22 Feb 2023, 07:20 yudhi s, <learnerdatabase99_at_gmail.com> wrote:

> Hello Listers,
> On a normal day, we see sessions/AAS per ~1 minute interval at ~50 in
> this database, and they are mainly from two specific 'programs' (say
> e.g. prog_batch, prog_online). But during certain periods of the day we
> see a large number of sessions from 'prog_batch' (i.e. with AAS > ~500)
> flooding the database, and we see large concurrency waits and BCT
> buffer space waits all over the database. And it's impacting critical
> sessions of the other program, i.e. prog_online. 'prog_batch' is meant
> for batch-type processing, so it's okay for those sessions to queue up
> and run a few minutes longer, but at the same time we can't afford to
> impact the sessions from prog_online, as those are latency-sensitive
> online users.
> But we found both of these programs are pointing to the same database
> service and running on the same node-1. So, apart from controlling the
> number of sessions/connections from the application end, can we make
> some easy fixes at the service config level that will better control
> the incoming load? Or, say, put some cap on the incoming sessions from
> prog_batch?
> This is a 19c database with a 2-node RAC, and it's an Exadata machine.
> Each node has 48 cores across 2 sockets. Both programs are using the
> same service, which has node-1 as its preferred node.
>
> Regards
> Yudhi
>

--
http://www.freelists.org/webpage/oracle-l