
Hemant K Chitale

I am an Oracle Database Specialist in Singapore.
Get an RSS feed of this blog at http://hemantoracledba.blogspot.com/feeds/posts/default?alt=rss
Follow me on Twitter : @HemantKChitale

A blog on Oracle Standard Edition

Sun, 2015-01-18 08:40
Here's a blog on Oracle Standard Edition by Ann Sjokvist.

.
.
.
Categories: DBA Blogs

Inserting into a table with potentially long rows

Fri, 2015-01-09 09:25
Note :  This post builds on blog posts by Nikolay Savvinov and Jonathan Lewis.


Nikolay Savvinov and Jonathan Lewis have identified that when you have a table with a potentially large row size -- where the theoretical maximum row length exceeds the block size -- redo overheads are significantly greater for multi-row inserts.

First, I demonstrate with a table with a small potential row size : one NUMBER column and three VARCHAR2 columns with a total maximum length of 30 characters.  The actual data inserted is also very small.  The first row has the values (1,'1','1','1'), the second row has the values (2,'2','2','2'), and so on up to (1000,'1000','1000','1000'), for a total of 1,000 rows :

SQL> drop table hkc_test_small_row_size purge;
drop table hkc_test_small_row_size purge
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL>
SQL> create table hkc_test_small_row_size
2 (id_column number, data_col_1 varchar2(10), data_col_2 varchar2(10), data_col_3 varchar2(10));

Table created.

SQL> create unique index hkc_tlrs_undx on hkc_test_small_row_size(id_column);

Index created.

SQL>
SQL>
SQL>
SQL> select n.name, s.value At_Beginning
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME AT_BEGINNING
---------------------------------------------------------------- ------------
redo entries 102
redo size 23896

SQL>
SQL> insert into hkc_test_small_row_size
2 select rownum+4000, to_char(rownum), to_char(rownum), to_char(rownum)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select n.name, s.value Normal_Insert
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME NORMAL_INSERT
---------------------------------------------------------------- -------------
redo entries 154
redo size 92488

SQL>
SQL>
SQL>
SQL> insert /*+ APPEND */ into hkc_test_small_row_size
2 select rownum, to_char(rownum), to_char(rownum), to_char(rownum)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> select n.name, s.value APPEND_Insert
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME APPEND_INSERT
---------------------------------------------------------------- -------------
redo entries 252
redo size 193396

SQL>
SQL>
SQL> drop table hkc_test_small_row_size purge;

Table dropped.

SQL>

Thus, we can see that, for the "SMALL_ROW_SIZE" table, the redo entries for 1000 of these small rows  :
Simple Insert of 1,000 rows :   52 redo entries and 68,592 bytes.
Direct Path (APPEND) of 1,000 rows : 98 redo entries and 100,908 bytes.
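(As an aside, here is a minimal sketch of how such before/after deltas can be computed without manual subtraction.  It assumes SQL*Plus and access to V$MYSTAT / V$STATNAME; the substitution-variable names are mine and are not part of the original test.)

rem capture the session's redo figures before the test into substitution variables
column redo_entries_before new_value redo_entries_before noprint
column redo_size_before    new_value redo_size_before    noprint

select max(decode(n.name, 'redo entries', s.value)) as redo_entries_before,
       max(decode(n.name, 'redo size',    s.value)) as redo_size_before
  from v$mystat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name in ('redo entries', 'redo size');

rem ... run the INSERT (and COMMIT) being measured here ...

rem report the redo generated by the test
select max(decode(n.name, 'redo entries', s.value)) - &redo_entries_before as redo_entries_delta,
       max(decode(n.name, 'redo size',    s.value)) - &redo_size_before    as redo_size_delta
  from v$mystat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name in ('redo entries', 'redo size');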


Next, I demonstrate with a table with a large potential row size -- exceeding the database block size.  Exactly the same data set is inserted -- the first row has the values (1,'1','1','1'), the second row has the values (2,'2','2','2'), and so on up to (1000,'1000','1000','1000'), for a total of 1,000 rows.

SQL> drop table hkc_test_large_row_size purge;
drop table hkc_test_large_row_size purge
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL>
SQL> create table hkc_test_large_row_size
2 (id_column number, data_col_1 varchar2(4000), data_col_2 varchar2(4000), data_col_3 varchar2(4000));

Table created.

SQL> create unique index hkc_tlrs_undx on hkc_test_large_row_size(id_column);

Index created.

SQL>
SQL>
SQL>
SQL> select n.name, s.value At_Beginning
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME AT_BEGINNING
---------------------------------------------------------------- ------------
redo entries 102
redo size 23900

SQL>
SQL>
SQL> insert into hkc_test_large_row_size
2 select rownum+4000, to_char(rownum), to_char(rownum), to_char(rownum)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select n.name, s.value Normal_Insert
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME NORMAL_INSERT
---------------------------------------------------------------- -------------
redo entries 2145
redo size 526320

SQL>
SQL>
SQL> insert /*+ APPEND */ into hkc_test_large_row_size
2 select rownum, to_char(rownum), to_char(rownum), to_char(rownum)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> select n.name, s.value APPEND_Insert
2 from v$statname n, v$sesstat s
3 where n.statistic#=s.statistic#
4 and n.name in ('redo entries', 'redo size')
5 and s.sid = (select distinct sid from v$mystat)
6 order by 1
7 /

NAME APPEND_INSERT
---------------------------------------------------------------- -------------
redo entries 2243
redo size 627228

SQL>
SQL>
SQL> drop table hkc_test_large_row_size purge;

Table dropped.

SQL>

Thus, we can see that, for the "LARGE_ROW_SIZE" table, the redo entries for 1000 of these actually small rows :
Simple Insert of 1,000 rows :   2,043 redo entries and 502,420 bytes.
Direct Path (APPEND) of 1,000 rows : 98 redo entries and 100,908 bytes.

Therefore, a simple (regular) Insert into such a table, where the *potential* row size (not necessarily the actual row size) exceeds the block size, is very expensive.  Apparently, the optimization that inserts multiple rows into a single block with minimal redo is not invoked when Oracle thinks that a row may exceed the block size; it switches to row-by-row processing.  This is evident in the LARGE_ROW_SIZE case, where 2,043 redo entries is more than 1,000 rows + 1,000 index entries.  The SMALL_ROW_SIZE case generated only 52 redo entries.
Remember : This happens when
(a) The potential row size exceeds the block size (irrespective of the actual row size which may be very few bytes)
AND
(b) a regular (non-Direct) Insert is used.
In such cases, a Direct Path Insert works out better.  There are, however, obvious downsides to Direct Path Inserts -- for example, the High Water Mark keeps extending, irrespective of any Deletes that may have been issued.
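(As a rough way of spotting tables that may be affected, here is a minimal sketch -- my own, not part of the original test -- that compares the sum of the declared column lengths against the block size.  It is only an approximation : it ignores row overheads, LOBs and tablespaces with non-default block sizes.)

select owner, table_name, sum(data_length) as max_declared_row_bytes
  from dba_tab_columns
 where owner not in ('SYS', 'SYSTEM')
 group by owner, table_name
having sum(data_length) > (select to_number(value)
                              from v$parameter
                             where name = 'db_block_size')
 order by 3 desc;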
.
.
.


Categories: DBA Blogs

Statistics on this blog

Sun, 2014-12-07 08:40
I began this blog on 28-Dec-2006.  For the 8 years 2007 to 2014, I have averaged 56 posts per year.  Unfortunately, this year, 2014, has produced the fewest posts -- 40 including this one.  This includes the "series" on Grid / ASM / RAC and the series on StatsPack / AWR.

2011 was my most prodigious year -- 99 posts.

There were 8,176 page views in July 2007.  To date, there have been more than 930 thousand page views on this blog (946 thousand at the end of 2014).  The peak month has been March 2012 -- 24,346 page views.

My largest viewer counts are from USA, India, UK, Germany and France.  www.google.com has been the largest source of traffic to this blog.

.
.
.



Categories: DBA Blogs

StatsPack and AWR Reports -- Bits and Pieces -- 4

Tue, 2014-12-02 08:05
This is the fourth post in a series.

Post 1 is here.
Post 2 is here.
Post 3 is here.

Buffer Cache Hit Ratios

Many novice DBAs may use Hit Ratios as indicators of performance.  However, these can be misleading or incomplete.

Here are two examples :

Extract A: 9i StatsPack

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer  Hit   %:   99.06

It would seem that, with only 0.94% of reads being physical reads, the database is performing optimally, so the DBA doesn't need to look any further.
Or so it seems.
If the DBA spends some time reading the report, he then comes across this :
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                                       % Total
Event                                               Waits    Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
db file sequential read                           837,955       4,107    67.36
CPU time                                                        1,018    16.70
db file scattered read                             43,281         549     9.00


                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
db file sequential read           837,955          0      4,107      5    403.3
db file scattered read             43,281          0        549     13     20.8
Physical I/O is a significant proportion (76%) of total database time, and 88% of the physical I/O time is on single-block reads ("db file sequential read").  This is where the DBA must identify that tuning *is* required.
Given the single-block access pattern, it is likely that a significant proportion of those reads are index blocks.  Increasing the buffer cache might help cache the index blocks.
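(For example, here is a minimal sketch -- assuming V$SEGMENT_STATISTICS is available -- to see which segments have accumulated the most physical reads since instance startup :)

select owner, object_name, object_type, physical_reads
  from (select owner, object_name, object_type, value as physical_reads
          from v$segment_statistics
         where statistic_name = 'physical reads'
         order by value desc)
 where rownum <= 10;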


Extract B : 10.2 AWR
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %:              99.98    Redo NoWait %:       100.00
Buffer Hit %:                 96.43    In-memory Sort %:     99.99
Library Hit %:                97.16    Soft Parse %:         98.16
Execute to Parse %:           25.09    Latch Hit %:          99.85
Parse CPU to Parse Elapsd %:  89.96    % Non-Parse CPU:      96.00
The Buffer Hit Ratio is very good.  Does that mean that I/O is not an issue ?
Look again at the same report :

Top 5 Timed Events

Event                         Waits        Time(s)   Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                   147,593                 42.3
db file sequential read       31,776,678    87,659    3            25.1               User I/O
db file scattered read        19,568,220    79,142    4            22.7               User I/O
RMAN backup & recovery I/O     1,579,314    37,650   24            10.8               System I/O
read by other session          3,076,110    14,216    5             4.1               User I/O
User I/O is actually significant.  The SQLs with the highest logical I/O need to be reviewed for tuning.
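(A starting point -- a minimal sketch assuming V$SQL is accessible -- is to list the statements with the highest logical I/O :)

select sql_id, executions, buffer_gets,
       round(buffer_gets / nullif(executions, 0)) as gets_per_exec,
       substr(sql_text, 1, 60) as sql_text_start
  from (select sql_id, executions, buffer_gets, sql_text
          from v$sql
         order by buffer_gets desc)
 where rownum <= 10;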

.
.
.

Categories: DBA Blogs

StatsPack and AWR Reports -- Bits and Pieces -- 3

Sat, 2014-11-08 09:48
This is the third post in this series.
Post 1 is here.
Post 2 is here.

Note : Some figures / details may be slightly changed / masked to hide the real source.

Identifying Unusual Events / Behaviours / Applications

Here I find a few "unusual" events and mark them out.

Extract A : 11.2 AWR

Snap Id        Snap Time        Sessions    Cursors/Session
Begin Snap:    ....  03:00:57   107         .7
End Snap:      ....  07:00:07   114         .9

Elapsed:    239.17 (mins)
DB Time:    22.61 (mins)
Top 5 Timed Foreground Events

Event                            Waits      Time(s)   Avg wait (ms)   % DB time   Wait Class
DB CPU                                      1,332                     98.16
SQL*Net more data to client      49,701        20      0               1.50       Network
SQL*Net more data from client    213,915        5      0               0.34       Network
db file scattered read           1,159          1      1               0.08       User I/O
db file sequential read          7,547          1      0               0.07       User I/O
    The two "SQL*Net more data" sets of waits are the unusual events.The Time on SQL*Net more data to/from client is negligible isn't it ?  So, should I be concerned ?  Over a 4 hour period, only 20seconds were on "SQL*Net more data to client".  Time on "SQL*Net more data from client" is much lower at a total time of 5seconds only.  So "Time based" tuning would ignore these two waits.Foreground Wait EventsEventWaits%Time -outsTotal Wait Time (s)Avg wait (ms)Waits /txn% DB timeSQL*Net more data to client49,701020037.781.50SQL*Net more data from client213,915050162.620.34db file scattered read1,1580110.880.08db file sequential read7,5500105.740.07






    SQL*Net message to client652,102000495.890.04













    SQL*Net message from client652,1020183,327281495.89
Note that Oracle treats "SQL*Net message from client" as an idle wait, so the 183,327 seconds of wait time do NOT appear in the Top 5 Timed Foreground Events list.

I would draw attention to the high number of "more data from client" waits and their correlation with the "message from client" waits.  Either extremely large SQL statements or PL/SQL blocks are being submitted very frequently, or row inserts with very large array sizes (number of rows per insert) are being received.  In this case, further investigation reveals an ETL loader that does bulk inserts of a number of rows per array.  If we need tuning, tuning the SDU may help (a configuration sketch follows the statistics below).
Similarly, the "more data to client" waits indicate that large data sets are being returned : the numbers of columns and rows per "send" are high.
Instance Activity Stats (from the same AWR)

Statistic                                  Total              per Second       per Trans
Requests to/from client                    654,738            45.63            497.90
SQL*Net roundtrips to/from client          654,740            45.63            497.91
bytes received via SQL*Net from client     1,793,072,463      124,950.54       1,363,553.20
bytes sent via SQL*Net to client           552,048,247        38,469.57        419,808.55
logical read bytes from cache              762,514,227,200    53,135,924.61    579,858,727.91
physical read total bytes                  8,772,479,488      611,311.62       6,671,087.06
physical write total bytes                 25,334,243,328     1,765,420.76     19,265,584.28
redo size                                  6,373,204,848      444,117.79       4,846,543.61
1.793 billion bytes received over 654K SQL*Net round trips works out to roughly 2,740 bytes per trip, received at 45 messages per second.  Given that it is still only about 2,740 bytes per trip, possibly the array size could also be tuned along with the SDU and TDU.
So, this is an AWR report that doesn't call for tuning, but it reveals information about how the database is being used : a large number of rows (with a large number of columns) being inserted and retrieved in each call.  The Performance Analyst needs to be aware of the nature of the "application" --- here it is not OLTP users but an ETL job that is the "application".  Although the database had more than a hundred sessions, a very small number of sessions (possibly 4 ?) were active doing ETL and checking the status of ETL tables during this window.
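(Since the SDU was mentioned as a tuning candidate, here is a minimal sketch of where it is set.  The sizes, host and service names are illustrative assumptions only, and the SDU is negotiated, so both client and server sides need to allow the larger value.)

# sqlnet.ora (client and/or server side)
DEFAULT_SDU_SIZE = 32767

# tnsnames.ora -- SDU can also be set per connect descriptor
ETLDB =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = etl-db-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = etldb))
  )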

    Would you care to analyze the other statistics I've listed -- bytes read and bytes written ?



    Extract B : 10.2 AWR
Snap Id        Snap Time       Sessions    Cursors/Session
Begin Snap:    00:00:07        228         4.5
End Snap:      23:00:06        232         4.3

Elapsed:    1,379.97 (mins)
DB Time:    11,543.24 (mins)
I know : it is really bad, and most likely meaningless, to get an AWR report for a 24-hour range.  (I'll not go into the details of why the AWR covers 24 hours -- save that I did NOT ask for a 24-hour report.)
    Top 5 Timed Events
Event                            Waits        Time(s)   Avg Wait(ms)   % Total Call Time   Wait Class
CPU time                                      258,101                  37.3
db file sequential read          62,150,655   208,148    3             30.1                User I/O
db file scattered read           28,242,529   141,638    5             20.4                User I/O
RMAN backup & recovery I/O        1,597,421    37,137   23              5.4                System I/O
enq: TX - row lock contention        22,276    34,942   1,569           5.0                Application

The RMAN Backup load is expected.  (Why ?  Given a 24-hour report, I expect RMAN to have run at least once during the day.)
For performance tuning, I would look at the "db file ... read" events and identify and analyze the SQL statements and the schema.
    What is the "unusual" event here ?  It is the "enq: TX - row lock contention".  Over a period of 1,380minutes, there were 22,276 Row-Lock Contention waits.  Actually, this application does not have the same load throughout the 23hours.  Most likely, it had load for 15hours only.  So, we had 22,276 Row-Lock Contention waits over 15hours.  That translates to 1,485 waits per hour or one Row-Lock Contention wait every 2.5seconds.  Now, that is a very high frequency.  Either users are locking each other out for a very short while (1.569seconds per wait on average) or there is/are one or more jobs that run at a very high frequency and update a common "reference" table.  I won't reveal my findings here but analysis of the SQL statements indicates what the "problem" is.
    Now, should "Time based performance tuning" be worried about the 5% of time lost on these waits ?  Probably not.  But they do indicate something peculiar in the design of this system.  There are less than 250 user sessions in this OLTP database but there is/are one or more jobs that is/are locking itself every 2.5seconds -- so there is some point of serialisation occurring.  Is that job also accounting for CPU time or 'db file read' time ?  That needs further analysis.
    Both these cases show how a Performance Analyst needs to know how the database is being used.  What sort of jobs are submitted, besides OLTP users ?
    ...


    Categories: DBA Blogs

    StatsPack and AWR Reports -- Bits and Pieces -- 2

    Sat, 2014-11-01 08:52
    This is the second post in a series on reading StatsPack and AWR reports.
    (The first is available here)


    Comparing Reports :

    Here are two 9.2 StatsPack extracts from one database:

    Extract A  : 9.2 StatsPack
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    --------------- ---------------
    Redo size: 56,031.63 3,084.68
    Logical reads: 68,286.24 3,759.32
    Block changes: 314.88 17.33
    Physical reads: 842.92 46.40
    Physical writes: 134.76 7.42
    User calls: 271.32 14.94
    Parses: 146.46 8.06
    Hard parses: 7.37 0.41
    Sorts: 93.83 5.17
    Logons: 0.33 0.02
    Executes: 296.70 16.33
    Transactions: 18.16

    Extract B : 9.2 StatsPack
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    --------------- ---------------
    Redo size: 89,615.76 2,960.48
    Logical reads: 210,302.81 6,947.42
    Block changes: 541.83 17.90
    Physical reads: 1,465.04 48.40
    Physical writes: 161.68 5.34
    User calls: 213.82 7.06
    Parses: 125.28 4.14
    Hard parses: 6.13 0.20
    Sorts: 104.31 3.45
    Logons: 0.35 0.01
    Executes: 664.81 21.96
    Transactions: 30.27

    Extract B shows a higher volume of activity -- Transactions per second, Redo Size per second, Logical Reads per second.
Actually, these are both from the same database, but for two different time-windows on different dates.  The fact that the time-windows aren't the same makes the comparison difficult.  Extract A is for an 11-hour window on day 1, and Extract B is for a 6-hour window on day 6.  The two windows aren't strictly comparable.  You have to be very careful when comparing StatsPack or AWR reports, even from the same database, if they don't cover comparable time-windows with similar loads.  The first report includes non-peak hours; the second is for peak hours only.  The level of activity averaged out over 11 hours that include non-peak hours isn't strictly comparable with a 6-hour peak window.

    Extract C : 10.2 AWR
Load Profile
                        Per Second      Per Transaction
Redo size:              520,776.15      50,948.36
Logical reads:          353,525.71      34,585.98
Block changes:          1,854.93        181.47
Physical reads:         14,285.23       1,397.55
Physical writes:        295.84          28.94
User calls:             1,265.14        123.77
Parses:                 822.64          80.48
Hard parses:            15.40           1.51
Sorts:                  168.09          16.44
Logons:                 0.16            0.02
Executes:               1,040.31        101.78
Transactions:           10.22
Extract D : 10.2 AWR
Load Profile
                        Per Second      Per Transaction
Redo size:              517,862.01      54,681.39
Logical reads:          288,341.09      30,446.12
Block changes:          1,879.27        198.43
Physical reads:         12,820.96       1,353.77
Physical writes:        323.90          34.20
User calls:             1,115.78        117.82
Parses:                 719.39          75.96
Hard parses:            17.28           1.82
Sorts:                  95.74           10.11
Logons:                 0.15            0.02
Executes:               935.33          98.76
Transactions:           9.47
Extracts C and D are from the same database for the same time-window on two consecutive workdays, so they are comparable.  A difference of 10% for some statistic may not be significant.  Understand that every business, every activity, every system will have some "normal" day-to-day fluctuation.
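(For such like-for-like comparisons, here is a minimal sketch -- assuming the Diagnostics Pack licence that AWR requires -- of pulling the per-snapshot growth of one statistic from the AWR history, so that the same clock window on different days can be compared :)

select snap_id,
       begin_time,
       value - lag(value) over (order by snap_id) as redo_bytes_in_interval
  from (select st.snap_id,
               sn.begin_interval_time as begin_time,
               st.value
          from dba_hist_sysstat st, dba_hist_snapshot sn
         where sn.snap_id         = st.snap_id
           and sn.dbid            = st.dbid
           and sn.instance_number = st.instance_number
           and st.stat_name       = 'redo size')
 order by snap_id;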

    .
    .
    .



    Categories: DBA Blogs

    StatsPack and AWR Reports -- Bits and Pieces -- 1

    Wed, 2014-10-22 04:58
    I am planning to put up a few posts on snippets from StatsPack and AWR reports.  This is my first post.
    Note : Some figures / details may be slightly changed / masked to hide the real source.

    Logical I/O and Change rates :
    1.  From a 9.2 StatsPack Report:
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: Std Block Size: 4K
    Shared Pool Size: Log Buffer:

    Load Profile
    ~~~~~~~~~~~~ Per Second
    ---------------
    Redo size: 56,031.63
    Logical reads: 68,286.24
    Block changes: 314.88
    Physical reads: 842.92
    Physical writes: 134.76

With a 4KB Block Size, 68,286.24 Logical Reads per second translates to slightly over 266MB/second.  Logical I/O is CPU-bound.  Database activity is Read-Intensive, with a high rate of Reads relative to Writes.
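(The conversion is simple arithmetic.  A minimal sketch, with the Logical Reads figure hard-coded from the Load Profile above :)

select p.value as block_size_bytes,
       round(68286.24 * to_number(p.value) / 1024 / 1024, 1) as logical_read_mb_per_sec
  from v$parameter p
 where p.name = 'db_block_size';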

    2.  From an 11.2 AWR Report :
Cache Sizes
                      Begin    End
Buffer Cache:
Std Block Size:       16K
Shared Pool Size:
Log Buffer:

Load Profile
                    Per Second      Per Transaction   Per Exec   Per Call
DB Time(s):
DB CPU(s):
Redo size:          1,593,612.1
Logical reads:      51,872.5
Block changes:      4,212.4
Physical reads:     63.8
Physical writes:    133.1

With a 16KB Block Size, 51,872.5 Logical Reads per second translates to slightly over 810MB/second.  This consumes CPU cycles.  However, the number of Block Changes is noticeably high in this environment.  This is also reflected in the high Redo rate -- slightly over 5,471MB/hour (note : "Redo size" is reported in bytes).


    CPU Consumption :
    1.  From a 9.2 StatsPack Report :
    Statistic                                      Total     per Second    per Trans
    --------------------------------- ------------------ -------------- ------------
    CPU used by this session 37.5 2.1
    CPU used when call started 37.6 2.1

This indicates 0.375 seconds of CPU usage per second -- i.e. approximately 37.5% of one CPU (let's take this as an older non-multicore architecture).  (The statistic is reported in centiseconds, which is why 37.5 per second equates to 0.375 seconds.)  If the server has 4 CPUs, CPU consumption is 9.4%.
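(To relate that session-level figure to the whole host, a minimal sketch -- assuming V$OSSTAT is populated on this platform -- is to look at the cumulative busy and idle time across all CPUs; these values are in centiseconds accumulated since startup :)

select stat_name, value
  from v$osstat
 where stat_name in ('NUM_CPUS', 'BUSY_TIME', 'IDLE_TIME');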

    2.  From an 11.2 AWR Report :
    Instance Activity Stats
    • Ordered by statistic name
Statistic                        Total    per Second    per Trans
... deleted rows ....
CPU used by this session                  46.85
CPU used when call started                46.27
... deleted rows ....
This indicates 0.468 seconds of CPU usage per second -- i.e. approximately 46.8% of one core.  This is also reflected in the Load Profile section :
    Load Profile
                Per Second    Per Transaction    Per Exec    Per Call
DB Time(s):
DB CPU(s):      0.5           0.1                0.00        0.00
    How many CPUs does this machine have ?  AWR reports this :
Host Name    Platform            CPUs    Cores    Sockets    Memory (GB)
             Linux x86 64-bit    16      16       2

That means we are using less than half of one of the 16 cores !  This translates to a CPU consumption of 3.125%.  The server has too many CPU cores !
    Categories: DBA Blogs

    Bandwidth and Latency

    Sat, 2014-10-18 08:40
    Here is, verbatim, an article I posted on Linked-In yesterday  [For other posts on Linked-In, view my Linked-In profile] :

Imagine an 8-lane highway.  Now imagine a 4-lane highway.  Which has the greater bandwidth ?

Imagine your organisation sends its employees on a weekend "retreat" by bus.  You have the choice of two locations, one that is 200 kilometres away and the other 80 kilometres away.  Assume that buses travel at a constant speed of 80kmph.  Which resort will your employees get to faster ?

The first question is about bandwidth.  The second is about latency.  (Why should I assume a fixed speed for the buses ?  Because I can assume a fixed speed at which electrons transfer over a wire or photons over a light channel.)

Expand the question further.  What if the organisation needs to send 32 employees in a 40-seater bus ?  Does it matter that the bus can travel on an 8-lane highway versus a 4-lane highway (assuming minimal other traffic on the highways at that time) ?

Too often, naive "architects" do not differentiate between the two.  If my organisation needs to configure a standby (DR) location for the key databases and has a choice of two locations with varying types of network services, it should consider *both* bandwidth and latency.  If the volume of redo is 1000MBytes per minute and this, factoring in overheads for packetising the "data", translates to 167Mbits per second, should I just go ahead and buy bandwidth of 200Mbits per second ?  If the two sites have two different network service providers offering different bandwidths, should I simply locate at the site with the greater bandwidth ?  What if the time it takes to synchronously write my data to site "A" is 4ms and the time to site "B" is 8ms ?  Should I not factor in the latency ?  (I am assuming that the "write to disk" speed of the hardware at either site is the same -- i.e. the hardware is the same.)  I can then add the complications of network routers and switches that add to the latency.  Software configurations, flow-control mechanisms, QoS definitions and hardware configuration can also impact bandwidth and latency in different ways.

Now, extend this to data transfers ("output" or "results") from a database server to an application server or end-user.  If the existing link is 100Mbps and is upgraded to 1Gbps, the time to "report" 100 rows is unlikely to change, as this time is a function of the latency.  However, if the number of concurrent users grows from 10 to 500, the bandwidth requirement may increase and yet each user may still have the same "wait" time to see his results (assuming that there are no server hardware constraints in returning results for 500 users).

On the flip side, consider ETL servers loading data into a database.  Latency is as important as bandwidth.  An ETL scheme that does "row-by-row" loads depends on latency, not bandwidth.  Increasing bandwidth doesn't help such a scheme.

Think about the two.
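(A rough worked example of the redo figure quoted above; the 25% packetisation overhead is an illustrative assumption, not a measured value :)

1000 MBytes/minute / 60                  ~= 16.7 MBytes/second
16.7 MBytes/second x 8 bits/byte         ~= 133 Mbits/second
133 Mbits/second x 1.25 (overhead)       ~= 167 Mbits/second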
    Categories: DBA Blogs