Re: intermittent long "log file sync" waits

From: Chris Stephens <cstephens16_at_gmail.com>
Date: Tue, 28 Jan 2020 13:39:38 -0600
Message-ID: <CAEFL0swTwOv=Av+kU7paprNoeMDrZA_Q5jPF=YLFFNf6yyW7hg_at_mail.gmail.com>



The workload is now running in a loop. I found LGWR waiting on "log file parallel write", which I think is as close to the I/O syscall as Oracle instrumentation gets. I'm also collecting snapper ash,stats data on the log writer (started a few minutes prior to the FIRST_SEEN ASH value):

SQL> @ashtop session_id,blocking_session,event2 session_id=1710 sysdate-5/60/24 sysdate

   TOTALSECONDS    AAS      %This    SESSION_ID    BLOCKING_SESSION    EVENT2                     FIRST_SEEN             LAST_SEEN              DIST_SQLEXEC_SEEN
_______________ ______ __________ _____________ ___________________ __________________________ ______________________ ______________________ ____________________
              3      0  100% |          1710                        log file parallel write    2020-01-28 13:29:13    2020-01-28 13:31:42                        1


[oracle@lsst-oradb05 bin]$ egrep -E "WAIT, log file parallel write" /u01/app/oracle/diag/rdbms/lsst2db/lsst2db2/trace/lsst2db2_ora_15677.trc

1710 @2, (LGWR) , WAIT, log file parallel write ,   1196,  93.91us,  .0%, [ ],    4,   .31,    299us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,    630,  50.31us,  .0%, [ ],    2,   .16,    315us average wait
1710 @2, (LGWR) , WAIT, log file parallel write , 147789,  11.81ms, 1.2%, [W ], 667, 53.28, 221.57us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  49235,   3.91ms,  .4%, [ ],  195, 15.49, 252.49us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   3298, 262.28us,  .0%, [ ],    4,   .32,  824.5us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   2452, 209.78us,  .0%, [ ],    4,   .34,    613us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   1355, 107.26us,  .0%, [ ],    4,   .32, 338.75us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  19458,   1.55ms,  .2%, [ ],  101,  8.03, 192.65us average wait
1710 @2, (LGWR) , WAIT, log file parallel write , 139403,  11.07ms, 1.1%, [W ], 654, 51.92, 213.15us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  24798,   1.97ms,  .2%, [ ],   95,  7.56, 261.03us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   4816, 380.56us,  .0%, [ ],    9,   .71, 535.11us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,    566,  45.11us,  .0%, [ ],    2,   .16,    283us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   1153,  91.97us,  .0%, [ ],    4,   .32, 288.25us average wait
1710 @2, (LGWR) , WAIT, log file parallel write , 140698,   11.2ms, 1.1%, [W ], 677,  53.9, 207.83us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  42907,    3.4ms,  .3%, [ ],  187, 14.82, 229.45us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   5166, 412.41us,  .0%, [ ],   11,   .88, 469.64us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   1968, 156.09us,  .0%, [ ],    3,   .24,    656us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,    986,  78.64us,  .0%, [ ],    3,   .24, 328.67us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  21365,    1.7ms,  .2%, [ ],  106,  8.45, 201.56us average wait
1710 @2, (LGWR) , WAIT, log file parallel write , 139508,  11.07ms, 1.1%, [W ], 686, 54.43, 203.36us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  48671,   3.87ms,  .4%, [ ],  263, 20.89, 185.06us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   2592, 206.18us,  .0%, [ ],    5,    .4,  518.4us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   3723, 295.78us,  .0%, [ ],    4,   .32, 930.75us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   1171,  93.15us,  .0%, [ ],    4,   .32, 292.75us average wait
1710 @2, (LGWR) , WAIT, log file parallel write , 142015,  11.29ms, 1.1%, [W ], 663, 52.73,  214.2us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  47919,   3.82ms,  .4%, [ ],  187, 14.92, 256.25us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   3892, 309.88us,  .0%, [ ],    5,    .4,  778.4us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   2321, 184.91us,  .0%, [ ],    4,   .32, 580.25us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   1039,  82.57us,  .0%, [ ],    3,   .24, 346.33us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  56706,   4.52ms,  .5%, [ ],  275, 21.94,  206.2us average wait
1710 @2, (LGWR) , WAIT, log file parallel write , 123077,   9.71ms, 1.0%, [W ], 575, 45.38, 214.05us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   1911, 151.92us,  .0%, [ ],    3,   .24,    637us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   4127, 327.46us,  .0%, [ ],    5,    .4,  825.4us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   1938, 154.74us,  .0%, [ ],    6,   .48,    323us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,    380,  30.14us,  .0%, [ ],    1,   .08,    380us average wait
1710 @2, (LGWR) , WAIT, log file parallel write , 136460,  10.81ms, 1.1%, [W ], 650, 51.51, 209.94us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  51015,   4.06ms,  .4%, [ ],  201, 15.98, 253.81us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   3463, 274.19us,  .0%, [ ],    4,   .32, 865.75us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   3182, 251.84us,  .0%, [ ],    7,   .55, 454.57us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,    951,  76.06us,  .0%, [ ],    3,   .24,    317us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  65434,   5.21ms,  .5%, [W ], 320, 25.49, 204.48us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  85294,   6.78ms,  .7%, [W ], 396, 31.46, 215.39us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   4291, 340.19us,  .0%, [ ],   18,  1.43, 238.39us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   5931, 471.32us,  .0%, [ ],   24,  1.91, 247.13us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   3314, 264.35us,  .0%, [ ],   13,  1.04, 254.92us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  24436,   1.95ms,  .2%, [ ],   90,  7.19, 271.51us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,   4052, 319.83us,  .0%, [ ],    6,   .47, 675.33us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,    273,  21.59us,  .0%, [ ],    1,   .08,    273us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,    903,  71.85us,  .0%, [ ],    3,   .24,    301us average wait
1710 @2, (LGWR) , WAIT, log file parallel write , 134568,  10.64ms, 1.1%, [W ], 639, 50.54, 210.59us average wait
1710 @2, (LGWR) , WAIT, log file parallel write ,  50692,   4.32ms,  .4%, [ ],  208, 17.72, 243.71us average wait
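For what it's worth, the per-snapshot average can be recomputed from the raw columns (total wait time divided by wait count). A quick sketch, assuming the field layout inferred from the output above rather than from any snapper documentation:

```python
# Hypothetical parser for the snapper WAIT lines above. The field positions
# (event name, total wait time in microseconds, wait count) are inferred from
# the output shown, not taken from snapper documentation.
def parse_wait_line(line: str):
    fields = [f.strip() for f in line.split(",")]
    event = fields[3]            # wait event name
    total_us = float(fields[4])  # time waited during the snapshot, microseconds
    count = int(fields[8])       # number of waits during the snapshot
    return event, total_us, count, total_us / count

# One of the busier snapshots: 667 writes totalling ~147.8 ms
line = ("1710 @2, (LGWR) , WAIT, log file parallel write , 147789, "
        "11.81ms, 1.2%, [W ], 667, 53.28, 221.57us average wait")
event, total_us, count, avg_us = parse_wait_line(line)
print(f"{event}: {count} waits, {avg_us:.2f}us average")
```

The averages stay in the 200-900us range throughout, i.e. LGWR's actual writes look fast in every snapshot shown.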

On Tue, Jan 28, 2020 at 1:20 PM Chris Stephens <cstephens16_at_gmail.com> wrote:

> Doesn't look like there's any ASH data for lgwr:
>
> SQL> @ashtop session_id,blocking_session,event2 1=1 "TIMESTAMP'2020-01-27 15:50:48'" "TIMESTAMP'2020-01-27 15:52:40'"
>
>    TOTALSECONDS    AAS      %This    SESSION_ID    BLOCKING_SESSION    EVENT2                FIRST_SEEN             LAST_SEEN              DIST_SQLEXEC_SEEN
> _______________ ______ __________ _____________ ___________________ _____________________ ______________________ ______________________ ____________________
>              81    0.7   82% |          1713                1710    log file sync         2020-01-27 15:50:54    2020-01-27 15:52:14                       1
>               5      0    5% |          1221                        ON CPU                2020-01-27 15:50:59    2020-01-27 15:52:14                       1
>               3      0    3% |           246                        ON CPU                2020-01-27 15:51:00    2020-01-27 15:52:00                       1
>               3      0    3% |          1713                        ON CPU                2020-01-27 15:50:51    2020-01-27 15:52:27                       2
>               1      0    1% |           123                        ON CPU                2020-01-27 15:51:35    2020-01-27 15:51:35                       1
>               1      0    1% |           127                        ON CPU                2020-01-27 15:51:50    2020-01-27 15:51:50                       1
>               1      0    1% |           252                2321    latch free            2020-01-27 15:52:28    2020-01-27 15:52:28                       1
>               1      0    1% |           978                        ges remote message    2020-01-27 15:50:48    2020-01-27 15:50:48                       1
>               1      0    1% |           983                        latch free            2020-01-27 15:52:28    2020-01-27 15:52:28                       1
>               1      0    1% |          1713                        library cache lock    2020-01-27 15:50:48    2020-01-27 15:50:48                       1
>               1      0    1% |          1831                        ON CPU                2020-01-27 15:50:49    2020-01-27 15:50:49                       1
>
>
> 11 rows selected.
>
>
> SQL> @ashtop session_id,blocking_session,event2 session_id=1710 "TIMESTAMP'2020-01-27 15:50:48'" "TIMESTAMP'2020-01-27 15:52:40'"
>
> no rows selected
>
> On Tue, Jan 28, 2020 at 1:09 PM Noveljic Nenad <
> nenad.noveljic_at_vontobel.com> wrote:
>
>> You’re using v0.2, but v0.6 should be the latest version; v0.2 wouldn’t
>> show idle blockers.
>>
>> I looked at the ashtop output once again. It's strange that neither lgwr
>> nor any other background process shows up significantly there. The question
>> is what lgwr was doing while the foreground processes were waiting on it.
>>
>>
>>
>>
>> *From:* Chris Stephens <cstephens16_at_gmail.com>
>> *Date:* Tuesday, 28 Jan 2020, 7:59 PM
>> *To:* Noveljic Nenad <nenad.noveljic_at_vontobel.com>
>> *Cc:* oracle-l <Oracle-L_at_freelists.org>
>> *Subject:* Re: intermittent long "log file sync" waits
>>
>> probably just 19c RAC
>>
>> SQL> @ash_wait_chains event2 session_id=1713 "TIMESTAMP'2020-01-27 15:50:48'" "TIMESTAMP'2020-01-27 15:52:40'"
>>
>> -- Display ASH Wait Chain Signatures script v0.2 BETA by Tanel Poder (
>> http://blog.tanelpoder.com )
>>
>>           , TO_CHAR(CASE WHEN session_state = 'WAITING' THEN p1 ELSE null END, '0XXXXXXXXXXXXXXX') p1hex
>>                                                                        *
>> ERROR at line 14:
>> ORA-12850: Could not allocate slaves on all specified instances: 3 needed, 1 allocated
>>
>> SQL>
>> Session: c2
>>
>> On Tue, Jan 28, 2020 at 11:30 AM Noveljic Nenad <
>> nenad.noveljic_at_vontobel.com> wrote:
>>
>>> The script should work with 19, so it must be something else. Could you try
>>> with ash_wait_chains2 (the "2" is the RAC version)? For a start, I would
>>> group only on event2 (the first parameter).
>>>
>>> If it doesn’t work, please post the error message.
>>>
>>>
>>>
>>> *From:* Chris Stephens <cstephens16_at_gmail.com>
>>> *Date:* Tuesday, 28 Jan 2020, 5:43 PM
>>> *To:* Noveljic Nenad <nenad.noveljic_at_vontobel.com>
>>> *Cc:* oracle-l <Oracle-L_at_freelists.org>
>>> *Subject:* Re: intermittent long "log file sync" waits
>>>
>>> Unfortunately, ash_wait_chains.sql doesn't work on 19.3, but here is
>>> ashtop showing the foreground process blocked on lgwr (1710):
>>>
>>> SQL> @ashtop inst_id,username,blocking_session,blocking_inst_id,event2 1=1 "TIMESTAMP'2020-01-27 15:50:48'" "TIMESTAMP'2020-01-27 15:52:40'"
>>>
>>>    TOTALSECONDS    AAS      %This    INST_ID    USERNAME           BLOCKING_SESSION    BLOCKING_INST_ID    EVENT2                FIRST_SEEN             LAST_SEEN              DIST_SQLEXEC_SEEN
>>> _______________ ______ __________ __________ ________________ ___________________ ___________________ _____________________ ______________________ ______________________ ____________________
>>>              81    0.7   82% |          2    GEN3_MGOWER_3                 1710                   2    log file sync         2020-01-27 15:50:54    2020-01-27 15:52:14                       1
>>>               9    0.1    9% |          1    SYS                                                       ON CPU                2020-01-27 15:50:49    2020-01-27 15:52:14                       1
>>>               3      0    3% |          2    GEN3_MGOWER_3                                             ON CPU                2020-01-27 15:50:51    2020-01-27 15:52:27                       2
>>>               2      0    2% |          2    SYS                                                       ON CPU                2020-01-27 15:51:35    2020-01-27 15:51:40                       1
>>>               1      0    1% |          2    GEN3_MGOWER_3                                             library cache lock    2020-01-27 15:50:48    2020-01-27 15:50:48                       1
>>>               1      0    1% |          2    SYS                           2321                   2    latch free            2020-01-27 15:52:28    2020-01-27 15:52:28                       1
>>>               1      0    1% |          2    SYS                                                       ges remote message    2020-01-27 15:50:48    2020-01-27 15:50:48                       1
>>>               1      0    1% |          2    SYS                                                       latch free            2020-01-27 15:52:28    2020-01-27 15:52:28                       1
>>>
>>>
>>> 8 rows selected.
>>>
>>> SQL> @bg lgwr
>>>
>>>    NAME    DESCRIPTION       SID    OPID    SPID                  PADDR               SADDR
>>> _______ ______________ ________ _______ ________ ___________________ ___________________
>>> LGWR    Redo etc.         1710       34 26552       00000001E8718860    00000001D8BFF4A0
>>>
>>> On Tue, Jan 28, 2020 at 10:25 AM Noveljic Nenad <
>>> nenad.noveljic_at_vontobel.com> wrote:
>>>
>>>> Hi Chris,
>>>>
>>>>
>>>>
>>>> log file sync measures much more than IO.
>>>>
>>>>
>>>>
>>>> First of all, I’d run Tanel’s ash_wait_chains (
>>>> https://github.com/tanelpoder/tpt-oracle/blob/master/ash/ash_wait_chains.sql
>>>> ), because it often points straight to the root cause.
>>>>
>>>>
>>>>
>>>> Here is a usage example where intermittent log file sync waits were
>>>> caused by slow control file writes:
>>>> https://nenadnoveljic.com/blog/bad-commit-performance-control-file-writes/
>>>>
>>>>
>>>>
>>>> Best regards,
>>>>
>>>>
>>>> Nenad
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *From:* oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> *On
>>>> Behalf Of *Chris Stephens
>>>> *Sent:* Tuesday, 28 January 2020 17:09
>>>> *To:* oracle-l <Oracle-L_at_freelists.org>
>>>> *Subject:* intermittent long "log file sync" waits
>>>>
>>>>
>>>>
>>>> 3-node Oracle 19.3 RAC
>>>>
>>>> Centos 7
>>>>
>>>>
>>>>
>>>> We have a SQLAlchemy/Python-based application workload that runs the
>>>> exact same steps with widely varying response times, which appear to be
>>>> related to varying "log file sync" wait times.
>>>>
>>>>
>>>>
>>>> Here is a profile of a "fast" run:
>>>>
>>>>
>>>>
>>>> CALL-NAME                        DURATION       %   CALLS      MEAN       MIN        MAX
>>>> ------------------------------  ---------  ------  ------  --------  --------  ---------
>>>> SQL*Net message from client     53.197782   91.8%  10,092  0.005271  0.000177  28.568493
>>>> EXEC                             3.759177    6.5%   9,816  0.000383  0.000000   0.239592
>>>> row cache lock                   0.233153    0.4%     541  0.000431  0.000113   0.000941
>>>> PARSE                            0.140399    0.2%   4,867  0.000029  0.000000   0.006620
>>>> DLM cross inst call completion   0.137330    0.2%     956  0.000144  0.000004   0.000505
>>>> library cache lock               0.100171    0.2%     215  0.000466  0.000151   0.002133
>>>> library cache pin                0.079729    0.1%     216  0.000369  0.000056   0.000710
>>>> FETCH                            0.058253    0.1%   1,062  0.000055  0.000000   0.004148
>>>> log file sync                    0.048217    0.1%     149  0.000324  0.000259   0.000505
>>>> CLOSE                            0.045416    0.1%   4,929  0.000009  0.000000   0.000073
>>>> 20 others                        0.135624    0.2%  11,854  0.000011  0.000000   0.001700
>>>> ------------------------------  ---------  ------  ------  --------  --------  ---------
>>>> TOTAL (30)                      57.935251  100.0%  44,697  0.001296  0.000000  28.568493
>>>>
>>>>
>>>>
>>>> Here is a profile of a "slow" run:
>>>>
>>>>
>>>>
>>>> CALL-NAME                         DURATION       %   CALLS      MEAN       MIN         MAX
>>>> ------------------------------  ----------  ------  ------  --------  --------  ----------
>>>> SQL*Net message from client     131.186118   61.0%  10,092  0.012999  0.000212  106.789360
>>>> log file sync                    79.291166   36.8%     150  0.528608  0.000264    2.986575
>>>> EXEC                              3.728402    1.7%   9,816  0.000380  0.000000    0.221403
>>>> row cache lock                    0.248868    0.1%     542  0.000459  0.000111    0.001036
>>>> PARSE                             0.164267    0.1%   4,867  0.000034  0.000000    0.004652
>>>> DLM cross inst call completion    0.146981    0.1%     957  0.000154  0.000005    0.001188
>>>> library cache lock                0.104354    0.0%     218  0.000479  0.000160    0.000728
>>>> library cache pin                 0.082504    0.0%     202  0.000408  0.000157    0.000672
>>>> FETCH                             0.056687    0.0%   1,062  0.000053  0.000000    0.003969
>>>> CLOSE                             0.043590    0.0%   4,929  0.000009  0.000000    0.000180
>>>> 20 others                         0.142044    0.1%  11,866  0.000012  0.000000    0.001792
>>>> ------------------------------  ----------  ------  ------  --------  --------  ----------
>>>> TOTAL (30)                      215.194981  100.0%  44,701  0.004814  0.000000  106.789360
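The two runs differ almost entirely in two lines. A quick arithmetic check, with the figures copied from the fast and slow profiles above:

```python
# Figures copied from the fast/slow profiles above (seconds, call counts).
fast_total, slow_total = 57.935251, 215.194981
fast_lfs, slow_lfs = 0.048217, 79.291166      # log file sync
fast_net, slow_net = 53.197782, 131.186118    # SQL*Net message from client
lfs_calls = 150                               # commits in the slow run

# Mean "log file sync" jumped three orders of magnitude: ~0.3 ms -> ~529 ms.
slow_mean = slow_lfs / lfs_calls

# Nearly all of the ~157 s of extra elapsed time is log file sync plus the
# (largely idle) client wait, with essentially the same ~44.7k calls in both runs.
extra = slow_total - fast_total
explained = (slow_lfs - fast_lfs) + (slow_net - fast_net)
```

So the workload itself (EXEC, PARSE, FETCH, etc.) is essentially unchanged between runs; only the commit waits and client think time grew.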
>>>>
>>>>
>>>>
>>>> Looking at the event histogram for that event:
>>>>
>>>>
>>>>
>>>> SQL> @evh "log file sync"
>>>>
>>>>        EVH_EVENT    EVH_WAIT_TIME_MILLI    WAIT_COUNT    EVH_EST_TIME    LAST_UPDATE_TIME
>>>> ________________ ______________________ _____________ _______________ _____________________________________
>>>> log file sync                < 1               200051         100.026    27-JAN-20 11.39.57.344734 PM -06:00
>>>> log file sync                < 2                  165           0.248    28-JAN-20 12.18.10.429089 AM -06:00
>>>> log file sync                < 4                  150            0.45    27-JAN-20 11.18.31.158102 PM -06:00
>>>> log file sync                < 8                  199           1.194    27-JAN-20 11.19.14.209947 PM -06:00
>>>> log file sync               < 16                  253           3.036    28-JAN-20 08.03.17.851328 AM -06:00
>>>> log file sync               < 32                  472          11.328    27-JAN-20 11.20.22.746033 PM -06:00
>>>> log file sync               < 64                  728          34.944    28-JAN-20 01.13.37.364541 AM -06:00
>>>> log file sync              < 128                  691          66.336    27-JAN-20 11.31.37.400504 PM -06:00
>>>> log file sync              < 256                  414          79.488    28-JAN-20 12.18.10.423987 AM -06:00
>>>> log file sync              < 512                  405          155.52    28-JAN-20 03.27.50.540383 AM -06:00
>>>> log file sync             < 1024                  459         352.512    27-JAN-20 11.35.14.378363 PM -06:00
>>>> log file sync             < 2048                  482         740.352    28-JAN-20 01.18.20.556248 AM -06:00
>>>> log file sync             < 4096                  576        1769.472    27-JAN-20 11.21.05.084998 PM -06:00
>>>> log file sync             < 8192                   89         546.816    27-JAN-20 11.57.36.436460 AM -06:00
>>>> log file sync            < 16384                   60          737.28    25-JAN-20 07.48.31.460408 AM -06:00
>>>> log file sync            < 32768                   39         958.464    27-JAN-20 11.59.09.869286 AM -06:00
>>>> log file sync            < 65536                   27        1327.104    25-JAN-20 09.49.13.856563 AM -06:00
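As a side note, the EVH_EST_TIME column in the histogram appears to be nothing more than wait_count times the bucket midpoint, converted to seconds. A sketch, assuming that formula (it reproduces every row above):

```python
# Assumed formula: est_time_s = wait_count * bucket_midpoint_ms / 1000,
# where the "< N" bucket spans (N/2, N] ms and the first bucket spans 0-1 ms.
# This matches the EVH_EST_TIME values shown, but the formula itself is an
# inference from the output, not from documentation.
def est_time_s(upper_ms: int, wait_count: int) -> float:
    lower_ms = 0 if upper_ms == 1 else upper_ms / 2
    midpoint_ms = (lower_ms + upper_ms) / 2
    return wait_count * midpoint_ms / 1000

# e.g. the < 4096 ms bucket: 576 waits -> 1769.472 s
# and the < 1 ms bucket: 200051 waits -> ~100.026 s
```

The takeaway either way: the long tail (the < 1024 ms and slower buckets) contributes the bulk of the estimated wait time even though it is a tiny fraction of the wait count.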
>>>>
>>>>
>>>>
>>>> The weird thing is that I don't see corresponding redo log I/O waits
>>>> (await) in iostat output.
>>>>
>>>>
>>>>
>>>> I have a ticket open with Oracle, but does anyone have suggestions for
>>>> finding the root cause and/or a solution?
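On the iostat point: one way to make that comparison rigorous is to watch the w_await column for just the redo devices during a slow run. A small sketch of the kind of parsing involved; the column layout varies across sysstat versions (hence locating the column by header name), and the sample output here is fabricated for illustration:

```python
# Pull the w_await (average write wait, ms) column out of `iostat -x`-style
# output. Column order differs between sysstat versions, so the column is
# located from the header line rather than hard-coded. Sample text is made up.
sample = """\
Device            r/s     w/s    rkB/s     wkB/s  r_await  w_await  %util
sda              0.10  120.50     4.00   3210.00     0.40     0.25   9.10
"""

lines = sample.strip().splitlines()
idx = lines[0].split().index("w_await")
latencies = {}
for row in lines[1:]:
    cols = row.split()
    latencies[cols[0]] = float(cols[idx])  # avg write wait in ms per device
```

If w_await on the redo devices stays sub-millisecond while "log file sync" spikes into seconds, the stall is above the block layer (LGWR scheduling, the post/wait path, or something RAC-side) rather than in the storage itself, which would be consistent with the fast "log file parallel write" times in the snapper data.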
>>>>
>>>> ____________________________________________________
>>>>
>>>> Please consider the environment before printing this e-mail.
>>>>
>>>>
>>>>
>>>> Important Notice
>>>> This message is intended only for the individual named. It may contain
>>>> confidential or privileged information. If you are not the named addressee
>>>> you should in particular not disseminate, distribute, modify or copy this
>>>> e-mail. Please notify the sender immediately by e-mail, if you have
>>>> received this message by mistake and delete it from your system.
>>>> Without prejudice to any contractual agreements between you and us
>>>> which shall prevail in any case, we take it as your authorization to
>>>> correspond with you by e-mail if you send us messages by e-mail. However,
>>>> we reserve the right not to execute orders and instructions transmitted by
>>>> e-mail at any time and without further explanation.
>>>> E-mail transmission may not be secure or error-free as information
>>>> could be intercepted, corrupted, lost, destroyed, arrive late or
>>>> incomplete. Also processing of incoming e-mails cannot be guaranteed. All
>>>> liability of Vontobel Holding Ltd. and any of its affiliates (hereinafter
>>>> collectively referred to as "Vontobel Group") for any damages resulting
>>>> from e-mail use is excluded. You are advised that urgent and time sensitive
>>>> messages should not be sent by e-mail and if verification is required
>>>> please request a printed version. Please note that all e-mail
>>>> communications to and from the Vontobel Group are subject to electronic
>>>> storage and review by Vontobel Group. Unless stated to the contrary and
>>>> without prejudice to any contractual agreements between you and Vontobel
>>>> Group which shall prevail in any case, e-mail-communication is for
>>>> informational purposes only and is not intended as an offer or solicitation
>>>> for the purchase or sale of any financial instrument or as an official
>>>> confirmation of any transaction.
>>>> The legal basis for the processing of your personal data is the
>>>> legitimate interest to develop a commercial relationship with you, as well
>>>> as your consent to forward you commercial communications. You can exercise,
>>>> at any time and under the terms established under current regulation, your
>>>> rights. If you prefer not to receive any further communications, please
>>>> contact your client relationship manager if you are a client of Vontobel
>>>> Group or notify the sender. Please note for an exact reference to the
>>>> affected group entity the corporate e-mail signature. For further
>>>> information about data privacy at Vontobel Group please consult
>>>> www.vontobel.com.
>>>>
>>>
>>
>


--
http://www.freelists.org/webpage/oracle-l
Received on Tue Jan 28 2020 - 20:39:38 CET