
Feed aggregator

Join me in a FREE live webcast about Real-Time Query!

The Oracle Instructor - Wed, 2014-10-01 08:50

On Thursday, 2nd October, 12:30 CET I will be doing a Live Webcast with many demonstrations about Data Guard Real-Time Query.

All of the features shown already work with 11g.

Register here.

Addendum: The webcast has already taken place.


Tagged: Active Data Guard, Data Guard, OU Streams
Categories: DBA Blogs

Oracle APEX 5 Update from OOW

Scott Spendolini - Wed, 2014-10-01 08:18
The big news about Oracle APEX from OOW is not so much about what, but more about when.  Much to many people's disappointment, APEX 5.0 is still going to be a few months out.  The "official" release date has been updated from "calendar year 2014" to "fiscal year 2015".  For those not in the know, Oracle's fiscal year ends on May 31st, so that date represents the new high-water mark.

Despite this bit of bad news, there were a number of bits of good news as well.  First of all, there will be an EA3.  This is good because it demonstrates that the team has been hard at work fixing bugs and adding features.  Based on the live demonstrations that were presented, there are some subtle and some not-so-subtle things to look forward to.  The subtle include an even more refined UI, complete with smooth fade-through transitions.  I tweeted about the not-so-subtle the other day, but to recap here: pivot functionality in IRs, column toggle and reflow in jQuery Mobile.

After (or right before - it wasn't 100% clear) EA3 is released, the Oracle APEX team will host their first public beta program.  This will enable select customers to download and install APEX 5.0 on their own hardware.  This is an extraordinary and much-needed positive change in their release cycle, as for the first time, customers can upgrade their actual applications in their environment and see what implications APEX 5.0 will bring.  Doing a real-world upgrade on actual APEX applications is something that the EA instances could never even come close to pulling off.

After the public beta, Oracle will upgrade their internal systems to APEX 5.0 - and there are a lot of those.  At last count, I think the number of workspaces was just north of 3,000.  After the internal upgrade, apex.oracle.com will have its turn.  And once that is complete, we can expect APEX 5.0 to be released.

No one likes delays.  But in this case, it seems that the extra time required is quite justified, as APEX 5.0 still needs some work, and the upgrade path from 4.x needs to be nothing short of rock-solid.  Keep in mind that with each release, there are a larger number of customers using a larger number of applications, so ensuring that their upgrade experience is as smooth as possible is just as important as, if not more important than, any new functionality.

In the meantime, keep kicking the tires on the EA instance and provide any feedback or bug reports!

Shrink Tablespace

Jonathan Lewis - Wed, 2014-10-01 07:55

In a comment on my previous post on shrinking tablespaces Jason Bucata and Karsten Spang both reported problems with small objects that didn’t move to the start of the tablespace. This behaviour is inevitable with dictionary managed tablespaces (regardless of the size of the object), but I don’t think it’s likely to happen with locally managed tablespaces if they’ve been defined with uniform extent sizes. Jason’s comment made me realise, though, that I’d overlooked a feature of system allocated tablespaces that made it much harder to move objects towards the start of file. I’ve created a little demo to illustrate the point.

I created a new tablespace as locally managed, ASSM, and auto-allocate, then created a few tables of various sizes. The following minimal SQL query reports the resulting extents in block_id order, adding in a “boundary_1m” column which subtracts 128 blocks (1MB) from the block_id, then divides by 128 and truncates to show which “User Megabyte” in the file the extent starts in.  (Older versions of Oracle typically have an 8 block space management header, recent versions expanded this from 64KB to 1MB – possibly as a little performance aid to Exadata.)
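The creation statements aren't shown in the note, so, purely as a hedged sketch of the kind of setup being described (the tablespace name, file name, sizes and row counts below are my own guesses, not the author's script):

create tablespace test_8k_assm_auto
        datafile '/u01/oradata/TEST/test_8k_assm_auto.dbf' size 100m
        extent management local autoallocate
        segment space management auto;

-- a larger table (t1) and a small one (t3); t2, t4 and t5 would be created similarly,
-- sized only roughly to reproduce the extent layout reported below
create table t1 tablespace test_8k_assm_auto
as select rownum id, rpad('x',2000) padding from dual connect by level <= 8000;

create table t3 tablespace test_8k_assm_auto
as select rownum id from dual connect by level <= 100;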


select
        segment_name, block_id, blocks , trunc((block_id - 128)/128) boundary_1M
from
        dba_extents where owner = 'TEST_USER'
order by
        block_id
;

SEGMENT_NAME               BLOCK_ID     BLOCKS BOUNDARY_1M
------------------------ ---------- ---------- -----------
T1                              128       1024           0
T1                             1152       1024           8
T2                             2176       1024          16
T2                             3200       1024          24
T3                             4224          8          32
T4                             4232          8          32
T5                             4352        128          33

As you can see, t3 and t4 are small tables – 1 extent of 64KB each – and t5, which I created after t4, starts on the next 1MB boundary. This is a feature of auto-allocate: not only are extents (nearly) fixed to a small number of possible extent sizes, the larger extents are restricted to starting on 1MB boundaries and the 64KB extents are used preferentially to fill in “odd-sized” holes. To show the impact of this I’m going to drop table t1 (at the start of file) to make some space.


SEGMENT_NAME               BLOCK_ID     BLOCKS BOUNDARY_1M
------------------------ ---------- ---------- -----------
T2                             2176       1024          16
T2                             3200       1024          24
T3                             4224          8          32
T4                             4232          8          32
T5                             4352        128          33

Now I’ll move table t3 – hoping that it will move to the start of file and use up some of the space left by t1. However there’s a 1MB area (at boundary 32) which is partially used,  so t3 moves into that space rather than creating a new “partly used” megabyte.


SEGMENT_NAME               BLOCK_ID     BLOCKS BOUNDARY_1M
------------------------ ---------- ---------- -----------
T2                             2176       1024          16
T2                             3200       1024          24
T4                             4232          8          32
T3                             4240          8          32
T5                             4352        128          33

It’s a little messy trying to clear up the tiny fragments and make them do what you want. In this case you could, for example, create a dummy table with storage(initial 64K next 64K minextents 14) to use up all the space in the partly used megabyte, then move t3 – which should go to the start of file – then move table t4 – which should go into the first partly-used MB (i.e. start of file) rather than taking up the hole left by t3.
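Purely as a sketch of that sequence (the dummy table name and tablespace are made up, and the 64KB/1MB arithmetic assumes an 8KB block size):

-- fill the partly-used megabyte with 64KB extents (14 x 64KB plus the two 64KB
-- extents already used by t3 and t4 = 1MB), so later moves can't land there
create table t_dummy (n number)
        segment creation immediate
        tablespace test_8k_assm_auto
        storage (initial 64K next 64K minextents 14);

alter table t3 move;    -- should now go to the free space at the start of the file
alter table t4 move;    -- should go into the first partly-used megabyte, next to t3

drop table t_dummy purge;   -- release the filler extents afterwards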

Even for a trivial example it’s messy – imagine how difficult it can get to cycle through building and dropping suitable dummy tables and moving objects in the right order when you’ve got objects with several small extents scattered through the file, and objects with a mixture of small extents and large extents.


OCP 12C – Backup, Recovery and Flashback for a CDB/PDB

DBA Scripts and Articles - Wed, 2014-10-01 07:43

Backup a CDB/PDB To make a database backup you need the SYSBACKUP or SYSDBA privilege. You can back up the CDB and all the PDBs independently, all together, or by specifying a list. You can back up a PDB by connecting directly to it and using: RMAN> BACKUP DATABASE; You can also back up a PDB by connecting to [...]
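To illustrate those options (a hedged sketch only - the post is truncated here and the PDB names below are invented):

# connected directly to a PDB: backs up just that PDB
RMAN> BACKUP DATABASE;

# connected to the CDB root:
RMAN> BACKUP DATABASE;                          # whole CDB: root plus all PDBs
RMAN> BACKUP DATABASE ROOT;                     # the root container only
RMAN> BACKUP PLUGGABLE DATABASE pdb1, pdb2;     # a list of PDBs (names are examples)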

The post OCP 12C – Backup, Recovery and Flashback for a CDB/PDB appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Solving customer issues at OOW14: Dbvisit Replicate can replicate tables without primary key

Yann Neuhaus - Wed, 2014-10-01 06:39

Usually, logical replication of changes relies on the primary key. Each row updated or deleted generates a statement to be applied on the target, which affects only one row because the row is accessed by its primary key. If there is no primary key, we need something unique, and at worst it is the whole row. But sometimes old applications were designed before being implemented in a relational database and have no uniqueness at all. Is that a problem for logical replication? We will see that Dbvisit Replicate can address it.

Here is the case I encountered at a customer. The application has a master-detail table design, and the detail rows are inserted and deleted all together for the same master key. There is no primary key, and nothing unique at all. The only value that may help is a timestamp, but sometimes timestamps do not have sufficient precision to be unique. And anyway, imagine what happens if the system time is set back, or during daylight saving time changes.

At dbi services we have very good contacts with our partner Dbvisit, and it's the kind of question that can be addressed quickly by their support. As it happens, I was at Oracle OpenWorld and was able to discuss it directly with the Dbvisit Replicate developers. There is a solution, and it is even documented.

The basic issue is that when the delete occurs, a redo entry is generated for each row that is deleted, and Dbvisit Replicate then generates a statement to do the same on the target. But when there are duplicates, the first statement will affect several rows and the following statements will affect no rows.

This is the kind of replication complexity that is addressed with conflict resolution. It can be addressed manually: the replication stops when a conflict is detected and continues once we have decided what to do. But we can also set rules to address it automatically when the problem occurs again so that the replication never stops.

Here is the demo, as I tested it before providing the solution to my customer.

Note that it concerns only deletes here but the same can be done with updates.

1. I create a table with 4 identical rows for each value of N:

create table TESTNOPK as select n,'x' x from (select rownum n from dual connect by level

SQL> connect repoe/repoe
Connected.

SQL> create table TESTNOPK as select n,'x' x from (select rownum n from dual connect by level
Table created.
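The CREATE TABLE statement above was truncated by the blog formatting (everything from the '<' onwards was swallowed). Judging by the 40 rows (Mine:40/40) reported for REPOE.TESTNOPK below, a plausible reconstruction - an assumption, not necessarily the author's exact statement - would be:

-- assumed reconstruction: 10 values of N, 4 identical rows per value = 40 rows
create table TESTNOPK as
select n, 'x' x
from   (select rownum n from dual connect by level <= 10),
       (select rownum r from dual connect by level <= 4);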

2. Status of replication from the Dbvisit console:


| Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 29 days
MINE IS running. Currently at plog 35 and SCN 796568 (10/01/2014 01:08:04).
APPLY IS running. Currently at plog 35 and SCN 796566 (10/01/2014 01:08:04).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:               100%  Mine:40/40           Unrecov:0/0         Applied:40/40       Conflicts:0/0       Last:01/10/2014 01:08:02/OK
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

3. I delete the lines with the value 10:


SQL> select * from TESTNOPK where n=10;
         N X
---------- -
        10 x
        10 x
        10 x
        10 x
SQL> delete from TESTNOPK where n=10;
4 rows deleted.
SQL> commit;
Commit complete.

5. The apply is stopped on a conflict: too many rows affected by the delete


MINE IS running. Currently at plog 35 and SCN 797519 (10/01/2014 01:10:56).
APPLY IS running. Currently at plog 35 and SCN 796928 (10/01/2014 01:09:08) and 1 apply conflicts so far (last at 01/10/2014 01:10:57)
and WAITING on manual resolve of apply conflict id 35010009996.
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:                90%  Mine:44/44           Unrecov:0/0         Applied:40/40       Conflicts:1/1       Last:01/10/2014 01:09:17/RETRY:Command affected 4 row(s).
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

dbvrep> list conflict
Information for conflict 35010009996 (current conflict):
Table: REPOE.TESTNOPK at transaction 0008.003.0000022b at SCN 796930
SQL text (with replaced bind values): delete from "REPOE"."TESTNOPK" where (1=1) and "N" = 10 and "X" = 'x'
Error: Command affected 4 row(s).
Handled as: PAUSE
Conflict repeated 22 times.

6. I resolve the conflict manually, forcing the delete of all rows

dbvrep> resolve conflict 35010009996 as force
Conflict resolution set.

At that point, there are 3 further conflicts that I need to force as well, because the other deletes affect no rows. I don't reproduce them here.

7. Once the conflicts are resolved, the replication continues:

MINE IS running. Currently at plog 35 and SCN 800189 (10/01/2014 01:19:16).
APPLY IS running. Currently at plog 35 and SCN 800172 (10/01/2014 01:19:14).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:               100%  Mine:44/44           Unrecov:0/0         Applied:44/44       Conflicts:4/4       Last:01/10/2014 01:18:21/RETRY:Command affected 0 row(s).
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

dbvrep> list conflict
Information for conflict 0 (current conflict):
No conflict with id 0 found.

8. Now I want to set a rule that handles this situation automatically. I add a 'too many rows' conflict rule so that each delete touches only one row:


dbvrep> SET_CONFLICT_HANDLERS FOR TABLE REPOE.TESTNOPK FOR DELETE ON TOO_MANY TO SQL s/$/ and rownum = 1/
Connecting to running apply: [The table called REPOE.TESTNOPK on source is handled on apply (APPLY) as follows:
UPDATE (error): handler: RETRY logging: LOG
UPDATE (no_data): handler: RETRY logging: LOG
UPDATE (too_many): handler: RETRY logging: LOG
DELETE (error): handler: RETRY logging: LOG
DELETE (no_data): handler: RETRY logging: LOG
DELETE (too_many): handler: SQL logging: LOG, regular expression: s/$/ and rownum = 1/
INSERT (error): handler: RETRY logging: LOG
TRANSACTION (error): handler: RETRY logging: LOG]

9. Now testing the automatic conflict resolution:

SQL> delete from TESTNOPK where n=9;
4 rows deleted.
SQL> commit;
Commit complete.
10. The conflicts are automatically managed:

MINE IS running. Currently at plog 35 and SCN 800475 (10/01/2014 01:20:08).
APPLY IS running. Currently at plog 35 and SCN 800473 (10/01/2014 01:20:08).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:               100%  Mine:48/48           Unrecov:0/0         Applied:48/48       Conflicts:7/7       Last:01/10/2014 01:19:57/OK
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

Now the replication is automatic and the situation is correctly managed.



As I already said, Dbvisit is a simple tool but is nevertheless very powerful. And Oracle Open World is an efficient way to learn: share knowledge during the day, and test it during the night when you are too jetlagged to sleep...





 



 

 


Oracle #GoldenGate Parameter File Templates

DBASolved - Wed, 2014-10-01 01:27

This week I’ve been enjoying spending some time at Oracle Open World in San Francisco, CA.  While here, I’ve been talking with everyone, friends old and new, and it came to my attention that it would be a good idea to have some useful templates for Oracle GoldenGate parameter files.  With this in mind, I decided to create some generic templates with comments for Oracle GoldenGate processes.  These templates can be found on my Scripts page under “Oracle GoldenGate Parameter Templates”.  These files are in a small zip file that can be downloaded, unzipped and used in creating a basic uni-directional configuration.

By using these templates, you should be able to:

  1. Review useful examples for each Oracle GoldenGate process (Manager, Extract, Pump, Replicat)
  2. With minor changes, quickly get uni-directional replication going
  3. Gain a basic understanding of how simple Oracle GoldenGate parameter files work (a minimal sketch follows this list)
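For orientation only - this is not the content of the downloadable templates, and every process name, schema, trail path and host below is a placeholder - a bare-bones uni-directional parameter set might look like this:

-- mgr.prm : Manager
PORT 7809

-- ext1.prm : primary Extract, capturing from the source schema
EXTRACT ext1
USERID ogg_user, PASSWORD ********
EXTTRAIL ./dirdat/lt
TABLE scott.*;

-- pmp1.prm : data pump, shipping the local trail to the target host
EXTRACT pmp1
USERID ogg_user, PASSWORD ********
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
PASSTHRU
TABLE scott.*;

-- rep1.prm : Replicat, applying the changes on the target
REPLICAT rep1
USERID ogg_user, PASSWORD ********
ASSUMETARGETDEFS
MAP scott.*, TARGET scott.*;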

Enjoy!

about.me: http://about.me/dbasolved

 


Filed under: Golden Gate
Categories: DBA Blogs

2016 Mazda6 Interior Review

Ameed Taylor - Wed, 2014-10-01 01:15
If there's a moral to the fully redesigned 2016 Mazda6 Interior Review story, it's that there may be more to life than "zoom-zoom." The previous-generation Mazda 6 had plenty of it, yet Mazda had a devil of a time getting the buying public to notice. That is because American drivers usually do not care much about how their family sedans behave on winding country roads. What they want is space, safety, reliability, power and style -- and while the old 6 looked nice enough, its tight dimensions and lackluster acceleration prevented it from succeeding in an increasingly competitive marketplace.

The new 6 demonstrates that Mazda wasn't going to make the same mistake twice. Want room to stretch out? It offers one of the most accommodating cabins of any midsize sedan, with Texas-sized front seats and a backseat fit for 6-footers. Got luggage, or perhaps four sets of golf clubs? The midsize Mazda now offers an impressively large 16.6 cubic feet of trunk space. Need power? Mazda's got you covered with the biggest and strongest V6 in this price bracket, though its fuel economy is disappointing. Want to make a style statement? Then you shouldn't be shopping for a family sedan in the first place. But as such cars go, we think the new 6 manages to be tautly attractive, particularly from the front, despite its additional 6.1 inches of length and 2.3 inches of width. Unlike the previous model, the 2016 Mazda6 Interior Review was engineered specifically for the North American market -- and it shows.

It also means added weight, as the enlarged 6 is roughly 150 pounds heavier than its predecessor. Yet you can still find the sporty reflexes that consumers have come to expect from this performance-oriented brand. Body roll is minimal by family-sedan standards, and the steering is light but responsive. The 6 never lets you forget its considerable size, but its well-sorted chassis and light-on-its-feet character at speed place it second only to the Nissan Altima in the fun-to-drive category.

Wait a minute, you might be saying -- how could the "zoom-zoom" family sedan fail to be the sportiest car in its class? Three reasons: the Altima is significantly lighter, its body motions are better controlled and its steering is tighter and more communicative. But so what? As the previous 6 demonstrated (and the Toyota Camry continues to show), top-notch driving dynamics don't a best-selling family sedan make. What the new 6 offers is the coveted combination of American-style size and power, along with above-average handling for people who care about that kind of thing. True enthusiasts won't find the 6 to their liking -- but hey, that's what the Mazdaspeed 3 is for.

In short, the 2009 Mazda 6 is at or near the top of its class in most respects. As usual, though, we suggest test-driving as many rival models as possible before making your decision, including the Chevrolet Malibu, Honda Accord, Hyundai Sonata, Nissan Altima, Subaru Legacy and Toyota Camry. Each has its own strengths and weaknesses, but few can match the 6's all-round competence and appeal.
The 2009 Mazda 6 is a front-wheel-drive midsize sedan. There are seven trim levels: i SV, i Sport, i Touring, i Grand Touring, s Sport, s Touring and s Grand Touring. Models with the i prefix include the 4-cylinder engine, whereas models with the s prefix come with the V6.

The price leader i SV comes standard with 16-inch steel wheels, a manually height-adjustable driver seat, full power accessories, a six-speaker CD stereo system with steering-wheel-mounted audio controls, air-conditioning and a tilt/telescoping steering column. The i Sport adds cruise control, an auxiliary input jack and keyless entry. The i Touring edition steps up the feature content with 17-inch alloy wheels, foglights, a trip computer, a power driver seat, keyless ignition/entry, electroluminescent gauges, black patterned accent pieces and a leather-wrapped steering wheel and shift knob. The i Grand Touring model adds xenon headlights, heated leather seats with a memory function for the driver seat, Bluetooth connectivity, dual-zone automatic climate control, auto-dimming mirrors and an audible blind-spot monitoring system.

The s models feature the corresponding i models' standard equipment along with a few enhancements -- the s Sport comes with 17-inch alloy wheels and dual exhaust outlets, and the s Touring and s Grand Touring add 18-inch alloy wheels.

A convenience package for Touring models tacks on most of the Grand Touring's standard luxuries, while the Moonroof and Bose bundle adds a sunroof and an upgraded audio system to Touring and Grand Touring models. A navigation system is available only on the Grand Touring. Other options include remote start, an in-dash six-CD changer and satellite radio.
The front-wheel-drive 2009 Mazda 6 is powered by either a 2.5-liter 4-cylinder engine or a 3.7-liter V6. The four generates 170 horsepower and 167 pound-feet of torque, while the V6 pumps out 272 hp and 269 lb-ft. A six-speed manual transmission is standard on 4-cylinder models, with a five-speed automatic optional on all but the i SV. A six-speed automatic is mandatory on V6-powered models.

Fuel-economy estimates stand at 20 mpg city/29 highway and 23 combined for 4-cylinder models with the manual transmission, while the five-speed auto improves the four's numbers to 21 mpg city/30 highway and 24 combined. These are class-competitive numbers. However, if you go for the V6, estimates drop to 17 mpg city/25 highway and 20 combined, which is about as bad as it gets in this segment.
Antilock disc brakes, stability control, front-seat side airbags and full-length side curtain airbags are all standard on the Mazda 6.
The 6's control layout is generally intuitive, with all major knobs and buttons clearly labeled and easily manipulated. It's attractive, too, with red backlighting for the gauges and a sleek center stack sweeping forward toward the windshield, although the unusual black-and-silver patterned plastic trim in Touring models and above won't strike everyone's fancy. Materials quality is hit-or-miss, as the rich-feeling soft-touch material on the passenger side of the dashboard contrasts with cheap hard stuff on the driver side. The emergency brake also feels a bit chintzy for this price point. The generously proportioned seats are quite comfortable, however, with abundant leg- and headroom all around. On the downside, power-adjustable lumbar support is unavailable, and the optional manually adjustable driver-side lumbar support operates via a labor-intensive knob.

In the audio department, the 6's standard stereo is merely adequate, and while the optional Bose system sounds markedly better, it lacks the clarity and rich bass response of the best stereos in this category. There is better news on the cargo-carrying front, where the 16.6-cubic-foot trunk sets a new standard for family sedans. Furthermore, it is enhanced by upscale strut supports that do not impinge on the cargo area, and the 60/40-split-folding rear seats add to the 6's impressive hauling capabilities.
A surprising amount of road noise filters into the 6's cabin at speed. Pavement imperfections barely ruffle the 6's composure, though, even when it is equipped with the optional 18-inch wheels. The base 2.5-liter engine produces wheezy noises and tepid acceleration, though the slick-shifting six-speed manual shifter livens things up a little. The five-speed automatic is less engaging but offers remarkably refined shifts. The big 3.7-liter V6 feels and sounds muscular, but it's a smooth operator, even at higher engine speeds. Unfortunately, the six-speed automatic is not tuned for enthusiastic driving -- downshifts are delayed, even in manual mode. Handling is impressive for a big family sedan, but the 6 doesn't feel as tossable in corners as the Altima, and its steering is lighter and looser than the nimble Nissan's. There is probably enough zoom in this chassis to placate folks who like to drive, whereas the average buyer will relish the 6's reasonably compliant ride.
Categories: DBA Blogs


Oracle Technology Network Tuesday in Review / Wednesday Preview - Oracle OpenWorld and JavaOne

OTN TechBlog - Wed, 2014-10-01 00:47


Another Day of Oracle OpenWorld and JavaOne comes to a close.  The OTN Wearable Meetup was great thanks to the Oracle Usable Apps team and the folks who came and showed us their wearable tech. 

Special Activity in the OTN lounge, Moscone South Upper Lobby for Wednesday October 1st -

Oracle Spatial and Graph users Meetup – 4 to 5pm
Meet the product managers, developers, and other users of Oracle's Spatial, Graph, and Multimedia Database technologies in these informal meetups. Share your questions, experiences, and ideas. Experts will listen to your product feedback and answer questions about
•    Spatial and MapViewer features for location-enabled business apps and GIS systems. Hosted by the IOUG Oracle Spatial Special Interest Group
•    RDF Graph for social network, semantic, and linked data applications
•    Multimedia for image archives, medical image applications, and other media-related applications

Out of the OTN Lounge -

Annual Blogger Meetup - 5:30pm to 7pm - Jillian @ Metreon

Oracle OpenWorld 2014 – Datatype context…?!

Marco Gralike - Tue, 2014-09-30 20:26
The native JSON database functionality presentations are done. If you want to experience first hand…

OOW14 Day 2 - Delphix #cloneattack

Yann Neuhaus - Tue, 2014-09-30 19:14

Do you know Delphix? The first time I heard of it was from Jonathan Lewis, and from Kyle Hailey of course. So it's not only about agile and virtualization. It's real DBA stuff. So, as I did yesterday with Dbvisit #repattack, let's install the demo.

Here is the setup:

  • one source virtual machine with an XE database
  • one target virtual machine with XE installed but no database
  • one virtual machine with Delphix
And what can we do with that? We can clone the databases instantaneously. It's:
  • a virtual appliance managing storage snapshots for instant cloning
  • this is exposed through direct NFS to be used by the database
  • totally automated database maintenance (create, restore, rename, etc.) through a nice GUI
So what's the point? You want to clone an environment instantaneously. Choose the point in time you want and it's done. You can clone 50 databases for your 50 developers. You can rewind your test database to run unit testing in a continuous integration development environment. You can do all that stuff that requires so many IT procedures, just with a few clicks in the Delphix GUI.

Just as an example, here is my source database and the way I choose the point in time I want to clone:

(screenshot: CaptureDelphix01.PNG)

It's running:

(screenshot: CaptureDelphix02.PNG)

The #cloneattack is a good way to test things and discuss with others...

I have now @delphix on my laptop installed with @kylehhailey at #oow14. Cont. tomorrow at OTW http://t.co/QJLVhp93jg pic.twitter.com/QgoAgJPXyo

— Franck Pachot (@FranckPachot) September 30, 2014

@kylehhailey #cloneattack: finished it today now playing with clones while listening to @TanelPoder pic.twitter.com/wH3kQKBp8U

— Franck Pachot (@FranckPachot) September 30, 2014

That's some powerful multitasking - awesome @FranckPachot @TanelPoder

— Kyle Hailey (@kylehhailey) September 30, 2014

Good UX - Don't Leave Home Without It

Floyd Teter - Tue, 2014-09-30 16:56
There was a time when I asserted that User Experience would be a differentiator for Oracle in selling Fusion Applications.  Lots has changed since then, so I think it’s time to change my own thinking.  What’s changed?


  • Oracle has a cloud platform
  • Fusion Applications is now Cloud Application Services
  • We’re seeing well-designed user experiences throughout Oracle’s offerings: Simplified UI is moving into the Applications Unlimited products, and is also evident throughout Oracle’s cloud services offerings.
  • Other enterprise application software companies now see the value of a well-designed user experience.  Look at the transition at Infor.  Check ADP’s announcement from earlier today.  Even the brand-W company that cannot be named recently released software that is a straight clone of Oracle’s Simplified UI.

OpenWorld has only reinforced my opinion.  Everyone here - Oracle product teams, Oracle partners, 3rd-party product providers - everyone is talking about and offering an enhanced UX.

So, I don’t consider good user experience design a differentiator anymore.  I now see it as a necessity.  Enterprise software application vendors must offer a well-designed UI to even have a seat at the table.

But what about custom-developed applications?  A good user experience is still required.  You can’t expect user adoption without it.  In fact, I see the tools that facilitate good user experience design as value-added products in and of themselves.


Good UX.  Don’t leave home without it.

Exadata Shellshock: IB Switches Vulnerable

Jason Arneil - Tue, 2014-09-30 16:16

Andy Colvin has the lowdown on the Oracle response and fixes for the bash shellshock vulnerability.

However, when I last looked it seemed Oracle had not discussed anything regarding the IB switches being vulnerable.

The IB switches have bash running on them and Oracle have verified the IB switches are indeed vulnerable.


[root@dm01dbadm01 ~]# ssh 10.200.131.22
root@10.200.131.22's password:
Last login: Tue Sep 30 22:46:41 2014 from dm01dbadm01.e-dba.com
You are now logged in to the root shell.
It is recommended to use ILOM shell instead of root shell.
All usage should be restricted to documented commands and documented
config files.
To view the list of documented commands, use "help" at linux prompt.
[root@dm01sw-ibb0 ~]# echo $SHELL
/bin/bash
[root@dm01sw-ibb0 ~]# rpm -qf /bin/bash
bash-3.2-21.el5

We have patched our compute nodes as instructed by Oracle; once you are no longer vulnerable to the exploit, the test shows the following:

env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `BASH_FUNC_x'
test

Note the lack of “vulnerable” in the output.

Unfortunately, when we come to run the same test on the IB switches:


[root@dm01sw-ibb0 ~]# env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
vulnerable
bash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)'
bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable'
bash: error importing function definition for `BASH_FUNC_x'
test
[root@dm01sw-ibb0 ~]# bash: warning: x: ignoring function definition attempt
-bash: bash:: command not found
[root@dm01sw-ibb0 ~]# bash: error importing function definition for `BASH_FUNC_x'
> test
> 

It’s vulnerable. As apparently is the iLOM. There are as yet no fixes available for either of these.
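
If you have a number of compute nodes and switches to check, a quick sweep from one host can save time. This is only a minimal sketch, assuming key-based SSH access to each target; the host list is a placeholder, and it only tests the original CVE-2014-6271 string, not the follow-up variants:

#!/bin/bash
# Hypothetical sweep: report hosts where the original shellshock test still triggers.
HOSTS="dm01dbadm01 dm01dbadm02 dm01sw-iba0 dm01sw-ibb0"   # placeholder host list

for h in $HOSTS; do
  out=$(ssh -o ConnectTimeout=5 "$h" "env 'x=() { :;}; echo vulnerable' bash -c 'echo test'" 2>/dev/null)
  if echo "$out" | grep -q vulnerable; then
    echo "$h : VULNERABLE"
  else
    echo "$h : not vulnerable to this particular test"
  fi
done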


Day 2 at Oracle Open World - best practices for WebLogic & Cloud Control

Yann Neuhaus - Tue, 2014-09-30 15:03

Today, in this post, I will describe some Oracle WebLogic and Cloud Control best practices I have picked up in the latest sessions. It's always good to see what is advised by other people who are confronted with the same or different challenges.

 

Managing Oracle WebLogic Server with Oracle Enterprise Manager 12c

One session was related to the best practices for managing WebLogic with Cloud Control 12c.

  • Use the administration functions:

With Cloud Control 12c you can now do WebLogic administration from its console. Starting and stopping managed servers and applications was already possible, but now you can do more, such as configuring resources, deploying applications and so on.
As the Cloud Control console lets you sign in to several target WebLogic servers, you would normally have to enter the required password for each of them. By providing the credentials and saving them as the preferred ones (in Preferred Credentials) you avoid entering the password each time.

  • Automate Tasks across domains with predefined jobs:

Predefined jobs can be used to automatically run WLST scripts against one or more domains. As with the WLS console, you can record your actions into a .py script, update it for the new targets, create the job and set the schedule. This can obviously be a script for configuration, but also for monitoring or gathering statistics (a minimal command-line sketch of the same idea follows below).
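
Outside of Cloud Control, the same idea can be scripted directly from the shell by running one WLST script against a list of admin servers. This is only a sketch, not the Cloud Control job mechanism itself: the wlst.sh path varies per installation, and check_domain.py, the admin URLs and the error handling are placeholders.

#!/bin/bash
# Sketch: run the same WLST script against several domains.
# Arguments given after the script name are visible as sys.argv inside the WLST script.
WLST_SH=${WLST_SH:-$MW_HOME/oracle_common/common/bin/wlst.sh}   # adjust to your installation
SCRIPT=check_domain.py                                          # placeholder WLST script

for ADMIN_URL in t3://domain1-admin:7001 t3://domain2-admin:7001; do
  echo "Running $SCRIPT against $ADMIN_URL"
  "$WLST_SH" "$SCRIPT" "$ADMIN_URL" || echo "WARNING: $SCRIPT failed for $ADMIN_URL"
done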

  • Automatic response to issues via corrective actions:

By including corrective actions in templates you can apply them to managed servers. If the corrective action fails, you can use rules to send an email in a second step to report that there is an issue which needs to be solved.

  • Use EMCLI to manage the credentials
  • Use APEX to query the Management Repository for reporting

 

Troubleshooting Performance Issues

Another session where best practices were explained was "Oracle WebLogic Server: Best Practices for Troubleshooting Performance Issues". A very helpful session: all chairs in the room were occupied and some people had to stand, which shows how much this topic was anticipated.

Some general tips:  

  •  -verbose:gc to find out if the performance issues are related to the garbage collection behaviour
  •  -Dweblogic.log.RedirectStdoutToServerLogEnabled=true
  •  use the Java Flight Recorder (JFR) (see the sketch after this list)
  •  use Remote Diagnostic Agent (RDA)  
  •  use WLDF to create an image of your system  
  •  Thread/heap dumps to see how your application is working
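
For the JFR item above, a recording can be started on a running JVM with jcmd, provided the JVM was started with the commercial-features flags (Oracle JDK 7u40 or later). The PID and file name below are placeholders, so treat this as a sketch rather than a recipe:

# JVM start flags required beforehand: -XX:+UnlockCommercialFeatures -XX:+FlightRecorder
PID=12345                                # placeholder: PID of the WebLogic server JVM
jcmd $PID JFR.start name=perf duration=120s filename=/tmp/weblogic_perf.jfr
jcmd $PID JFR.check                      # list the recordings currently in progress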

One of the first actions you have to take is to read the log files, as they can show you which kinds of errors are logged: stuck threads, too many open files, and so on.

The same application can behave differently depending on whether WebLogic runs on Linux or on Windows. For instance, a socket can remain in TIME_WAIT for 4 minutes on Linux but only 1 minute on Windows.

In case you encounter OutOfMemory errors, log the garbage collector information

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:-PrintGCTimeStamps

More information can be found in the document referred by ID 877172.1
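
One possible way to put these flags in place for a WebLogic 12c domain (assuming a release where setUserOverrides.sh is picked up by setDomainEnv.sh; the log file location is just an example) is to append them to JAVA_OPTIONS:

# $DOMAIN_HOME/bin/setUserOverrides.sh - sketch only
# SERVER_NAME is normally set by the start scripts before this file is sourced.
JAVA_OPTIONS="${JAVA_OPTIONS} -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
JAVA_OPTIONS="${JAVA_OPTIONS} -Xloggc:${DOMAIN_HOME}/servers/${SERVER_NAME}/logs/gc.log"
export JAVA_OPTIONS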

Thread Dump
To analyze your application you can create a thread dump

  •  under Unix/Linux: kill -3 <pid>
  •  jstack <pid>
  •  WLST threadDump()
  •  jcmd <pid> print_thread (for Java HotSpot)
  •  jcmd <pid> Thread.print (for Java 7)

More information can be found in the document referred by ID 1098691.1
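
A single thread dump only shows one instant; for analysis it usually helps to take several dumps a few seconds apart. A minimal sketch using jstack, where the pgrep pattern, output directory and interval are placeholders:

#!/bin/bash
# Sketch: collect a series of thread dumps of a managed server JVM.
PID=$(pgrep -f "weblogic.Name=MS1" | head -1)   # placeholder pattern for the server name
OUTDIR=/tmp/threaddumps
mkdir -p "$OUTDIR"

for i in 1 2 3 4 5; do
  jstack "$PID" > "$OUTDIR/td_$(date +%Y%m%d_%H%M%S)_$i.txt"
  sleep 10   # space the dumps out to spot threads that stay stuck
done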

Once the thread dumps have been collected you have to analyze them.
Several tools are available for that:

  •  Samurai
  •  Thread Dump Analyzer (TDA)
  •  ThreadLogic

Some of these best practices I already knew; one tool I want to test now is ThreadLogic, so that I am trained in case I have to use it on a real case.

Let's see what will happen in the next days.

What to expect at this year’s Gartner Symposium [VIDEO]

Chris Foot - Tue, 2014-09-30 15:01

Transcript

Unsure of how IT will impact enterprises in the near future?

Hi, welcome back to RDX! CIOs will probably encounter a number of challenges in the years ahead. The Gartner Symposium will feature presentations on strategic IT procurement, critical industry trends and how businesses can gain value from the latest technologies.

The conference will be held at the Dolphin Hotel in Orlando, Florida from October 5th to the 9th. Notable speakers will be Microsoft CEO Satya Nadella and Lyft Inc. President and Co-Founder John Zimmer.

As you can imagine, we'll be informing attendees about our database monitoring and optimization services. If you want to find us, we'll be located at Booth 206 during show floor hours.

Thanks for watching! Can't wait to see you in Florida!

The post What to expect at this year’s Gartner Symposium [VIDEO] appeared first on Remote DBA Experts.

RDX IT Process Automation

Chris Foot - Tue, 2014-09-30 14:11

RDX’s IT Process Automation Strategy

Remote DBA Experts (RDX) is the largest pure-play provider of remote data infrastructure services. We have been providing remote services for over 20 years, which also makes us one of the pioneers in this space. We currently support hundreds of customers and thousands of database implementations.

Remote data infrastructure services is an extremely competitive market arena. Our competitors range from “2 guys in a garage” to major outsourcing providers like IBM and Oracle. Improving and enhancing our support architecture isn’t just something beneficial to RDX; it is critical to our competitive survival.

One of our primary responsibilities at RDX is to research and evaluate leading-edge OS, database and application support technologies. The goal of these efforts is to ensure that RDX customers continue to receive the highest level of value from RDX’s support services. RDX’s strategy is to continue to be pioneers in the remote services space – just as we were 20 years ago. One of the key technologies that RDX is implementing to ensure our continued leadership as a remote services provider is IT Process Automation.

What is IT Process Automation?

Process automation, because of its wide range of application, takes many forms. Manufacturing companies have been using industrial robots to replace activities traditionally performed by humans for some time. Business process automation shares the same goal: to replace business functions performed by humans with software applications. Work activities that are repetitive in nature and require little intelligent analysis and decision making to complete are prime candidates for process automation.

Business software applications, by their essence, are designed to automate processes. Software programmers create intelligent decision trees to evaluate and refine stored data elements and display that processed data for human interaction or automate the decision making process entirely.

Automation products are designed to act upon stored data or capture it for processing. The data is analyzed using workflows (decision trees) and embedded rules. The automation product then performs a prescribed set of actions. The automation product can continue processing by executing additional workflows, prompt for human intervention or complete the process by performing an activity.

For the context of this article, IT Process automation is the implementation of software to programmatically automate routine (little decision making required), repetitive workflows and tasks performed by IT knowledge workers.

The Automation Tool Marketplace

A highly competitive market forces all automation vendors to accelerate the release of new products as well as enhancements to existing offerings. Automation vendors know that new features and functionalities are not a requirement for competitive advantage; they are a requirement for competitive survival. The more competitive the space, the greater the benefit to the consumer. Vendor competition will ensure that automation products become more intelligent, more cost effective and easier to implement and administer.

As the number of features provided by automation products grows, so does the importance of taking advantage of those new features. Automation product licensing and vendor maintenance contracts command a premium price in the marketplace. To gain the most return on their investment, companies must ensure that they are completely leveraging the benefits of the particular automation product being used. Understanding all of the inherent features is important, but selecting the features that bring each individual implementation the most benefit is the key to success.

The endless array of automation offerings adds complexity to product selection. IT automation product features and functionality span the spectrum from niche offerings that focus on automating a very well-defined, specific set of tasks to products that provide a complete framework and set of tools designed to generate more global efficiencies by automating a wide range of activities. More traditional software vendors, including database and monitoring tool providers, realize that automation features provide their offerings with an advantage over competitors’ products.

RDX’s Automation Strategy

Process automation products have been on RDX’s technological radar for years. Various products provided bits and pieces of the functionality we required, but we were unable to identify an offering that provided a total automation solution.

Like many shops, RDX interwove various scripts, programs and third-party products to automate repetitive tasks. Automation was done in an ad-hoc, opportunistic manner as the tasks were identified. RDX’s challenge was to select and implement a product that would provide a framework, architecture and set of tools that RDX could utilize to implement a company-wide automation architecture. The goal was to transform RDX’s automation activities from opportunistic and ad-hoc to a strategic initiative with a well-defined mission statement, clear set of achievable goals and detailed project plans with deliverables to obtain them.

RDX’s Process Automation Goals

RDX has two primary sources of repetitive tasks:

  • Customer event data collection, diagnosis and resolution
  • Internal support activities

Our goals for our automation strategy can be summarized into the following main points:

  • Improve the quality and speed of problem event analysis and resolution. Faster and higher quality problem resolution equals happy RDX customers.
  • Increase staff productivity by reducing the number of mundane, repetitive tasks the RDX staff is required to perform
  • Reduce operating costs through automation

Our environment is not entirely unique. Our service architecture can be compared to any IT shop that supports a large number of disparate environments. The resulting challenges we face are fairly common to any IT service provider:

  • RDX‘s desire to provide immediate resolutions to all performance and availability issues (reduce Mean Time to Resolution)
  • RDX looking to respond to client events with more accuracy
  • Implement a software solution that allows RDX to capture and record pockets of tribal knowledge and leverage that subject matter expertise by transforming it into automated processes to foster a culture of continuous process improvement
  • Reduce the amount of time RDX spends on both customer-facing and internal repetitive tasks to allow our support professionals to focus on higher ROI support activities
  • Provide the ability to quickly prove audit and compliance standards through report logs capturing the results of each automation task
  • RDX’s rapid growth requires us to process an exponentially increasing number of event alerts and administrative activities. The continuous hiring of additional resources to manage processes and data is not a scalable or cost-effective solution

RDX’s Automation Product Selection

RDX performed a traditional vendor analysis using a standardized evaluation methodology. A methodology can be loosely defined as a body of best practices, processes and rules used to accomplish a given task. The task in this case is to evaluate and select an automation product provider.

A needs analysis was performed to generate a weighted set of functional and technical requirements. The focus of the analysis was on selecting a product that would help us achieve our goal of implementing a strategic automation solution, as opposed to just buying a product. If we were unable to identify a solution that met our requirements, we were willing to delay the vendor selection process until we found one that did.

RDX selected GEN-E Resolve as our automation tool provider. GEN-E Resolve was able to provide the “end-to-end” architecture we required to automate both customer event resolution and RDX internal processes. GEN-E Resolve’s primary focus is on the automation of complex incident resolution and is a popular product with large telecommunication providers that support thousands of remote devices. What RDX found most beneficial was that the product did not require the installation of any software on our customers’ servers. All processing is performed on RDX’s Resolve servers running at our data center.

RDX’s First Step – Automatic Event Data Collection

The primary service we provide to our customers is ensuring their database systems are available at all times and performing as expected. Database administrators, by the very essence of our job descriptions, are the protectors of the organization’s core data assets. We are tasked with ensuring key data stores are continuously available. However, ensuring that data is available on a 24 x 7 basis is a wonderfully complex task.

When a mission-critical database application becomes unavailable, it can threaten the survivability of the organization. The financial impact of downtime is not the only issue that faces companies that have critical applications that are offline. Loss of customer goodwill, bad press, idle employees and legal penalties (lawsuits, fines, etc.) must also be considered.

It is up to the database administrator to recommend and implement technical solutions that deal with these unforeseen “technology disruptions.” When they do occur, it is our responsibility as DBAs to restore the operational functionality of the failed systems as quickly as possible.

RDX’s initial goal was to automate the collection of information required to perform problem analysis. The key to faster problem resolution is to reduce the amount of time collecting diagnostic data and spend that time analyzing it.

RDX prioritized customer events using the following criteria:

  • Frequency the event occurs
  • Severity of customer impact
  • Amount of time required to manually collect diagnostic data (reduce Mean Time to Resolution)
  • Complexity of the diagnostic data collection process (increase resolution accuracy)
  • Amount of human interaction required to collect diagnostic data (cost reduction)

RDX deployed a team of in-house automation specialists to collect the operational knowledge required to create the decision trees, workflows and data collection activities traditionally performed by RDX personnel. Our implementation, although still in its infancy, has met our initial expectations.

RDX has automated the diagnostic data collection process for several events and has proven that the automation tool can perform the tasks quickly, consistently and with high quality. RDX has also successfully implemented automatic problem resolution tasks for simple events. Subsequent enhancements to our automation capabilities are to leverage RDX’s collective operational knowledge to quickly resolve more complex issues.

Although our initial goal was to improve the speed and quality of our problem resolution process, our intent is to also move forward with the automation of our internal support processes. One of the key facets of the project’s success was to keep RDX personnel informed about the automation project and the benefits the implementation would provide to both RDX customers and internal support technicians. Promoting the product was crucial, as we found that it led to the generation of a veritable groundswell of internal process automation recommendations. Our intent is to formalize the internal process automation project by appointing RDX personnel as project owners and soliciting recommendations through company surveys (as opposed to an AD-HOC manner). Once the recommendations are collected, RDX will perform the same type of prioritization as we did during the initial stages of product implementation.

The Future of Automation

Although we will continue to see the greatest advances in automation in the traditional manufacturing spaces, IT process automation will continue to grow and mature until it becomes integrated into the fabric of most IT organizations. Larger shops will be the early adopters of IT automation, as they will be able to more quickly realize the benefits the solution provides than their smaller counterparts. As stated previously, a very competitive market arena will continue to accelerate the features and functionality provided by vendor products. As the offerings mature, they will become more robust, more intelligent and more cost effective. As a result, the adoption rate will continue to grow, as it would with any technology possessing these traits.

In the remote services space, it is how RDX intends to differentiate ourselves from our competitors. Outsourcing providers that manage large numbers of remote targets will be required to automate, or they will quickly lose market share to those competitors that do. It is RDX’s intention to be an innovator and not a “close follower” of automation technologies.

The post RDX IT Process Automation appeared first on Remote DBA Experts.

Plea For Tight Messages - OOW14

Floyd Teter - Tue, 2014-09-30 14:02
It’s so easy to lose track of time at Oracle OpenWorld.  I think I’m writing this on Tuesday, but can’t say for sure…

Lots of information being shared here:  incremental development of Simplified UI, a myriad of new cloud services announced (including a very cool Integration Cloud Service), new features for MySQL, new mobile applications for the E-Business Suite, Eloqua services for Higher Education, a visualization-oriented UI for OBIEE (and saw a very cool new visualization UI from the UX team, but I can’t talk about that yet), some interesting uses of Beacons…it’s like drinking from a firehose and darn near drowning in the attempt.  Info overload.

One of the cool things one gets to see at OOW: the rise of new third-party applications that improve and enhance Oracle products.  On Monday, I had the opportunity to sit down with the brain trust behind Xprtly!  What impressed me the most is the focus of their message - they’ve got it down to four slides (including a title).  Take a look and see if you get it.

So why do I bring this up?  Go back and read the second paragraph.  We’re all on information overload here.  The virtual noise level is incredible.  Tight, focused messages cut through the noise and get the point across.  Wish we saw more of this approach here…

Microsoft Hadoop: Taming the Big Challenge of Big Data – Part One

Pythian Group - Tue, 2014-09-30 11:12

Today’s blog post is the first in a three-part series with excerpts from our latest white paper, Microsoft Hadoop: Taming the Big Challenge of Big Data.

As companies increasingly rely on big data to steer decisions, they also find themselves looking for ways to simplify its storage, management, and analysis. The need to quickly access large amounts of data and use them competitively poses a technological challenge to organizations of all sizes.

Every minute, about two terabytes of data are being generated globally. That’s twice the amount from three years ago and half the amount predicted for three years from now.

Volume aside, the sources of data and the shape they take vary broadly. From government records, business transactions and social media, to scientific research and weather tracking, today’s data come in text, graphics, audio, video, and maps.

Download our full white paper which explores the impact of big data on today’s organizations and its challenges.

Categories: DBA Blogs

Packt Publishing - ALL eBooks and Videos are just $10 each or less until the 2nd of October

Surachart Opun - Tue, 2014-09-30 10:36
Just spreading the word about a good campaign from Packt Publishing - it's good news for people who love to learn something new - ALL eBooks and Videos are just $10 each or less -- the more you choose to learn, the more you save:
  • Any 1 or 2 eBooks/Videos -- $10 each
  • Any 3-5 eBooks/Videos -- $8 each
  • Any 6 or more eBooks/Videos -- $6 each


Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs