
Feed aggregator

DML Operations On Partitioned Tables Can Restart On Invalidation

Randolf Geist - Sun, 2016-01-17 13:12
It's probably not that well known that Oracle can actually roll back / re-start the execution of a DML statement should the cursor become invalidated. By rollback / re-start I mean that Oracle actually performs a statement-level rollback (so any modification already performed by that statement until that point gets rolled back), performs another optimization phase of the statement on re-start (due to the invalidation) and begins the execution of the statement from scratch. Note that this can happen multiple times - actually it's possible to end up in a kind of infinite loop when this happens, leading to statements that can run for a very, very long time (I've seen statements on Production environments executing for several days although a single execution would only take minutes).

The pre-requisites to meet for this to happen are not that complex or exotic:

- The target table to manipulate needs to be partitioned

- The cursor currently executing gets invalidated - either by running DDL (typically think of partition related operations) - or simply by gathering statistics on one of the objects involved in the statement

- The DML statement hasn't yet touched one of the partitions of the target table but attempts to do so after the cursor got invalidated

When the last condition is met, the statement performs a rollback, and since it got invalidated - which is one of the conditions to be met - another optimization phase happens, meaning that it's also possible to get different execution plans for the different execution attempts. When the execution plan is ready the execution begins from scratch.

According to my tests the issue described here applies to both conventional and direct-path inserts, merge statements (insert / update / delete) as well as serial and parallel execution. I haven't explicitly tested UPDATE and DELETE statements, but the assumption is that they are affected, too.

The behaviour is documented in the following note on MOS: "Insert Statement On Partitioned Tables Is RE-Started After Invalidation (Doc ID 1462003.1)" which links to Bug "14102209 : INSERT STATEMENT' IS RESTARTING BY ITSELF AFTER INVALIDATION" where you can also find some more comments on this behaviour. The issue seems to be that Oracle at that point is no longer sure if the partition information compiled into the cursor for the partitioned target table is still correct or not (and internally raises and catches a corresponding error, like "ORA-14403: Cursor invalidation detected after getting DML partition lock", leading to the re-try), so it needs to refresh that information, hence the re-optimization and re-start of the cursor.

Note that this also means that the DML statement might already have performed modifications to other partitions before being invalidated - all it takes to trigger the re-start is an attempt to modify a partition that the statement hasn't touched yet.

It's also kind of nasty that the statement keeps running the potentially lengthy query part after being invalidated, only to find out that it needs to re-start when the first row is about to be applied to a target table partition not touched yet.

Note that applications typically run into this problem, when they behave like the following:

- There are longer running DML statements that typically take several seconds / minutes until they attempt to actually perform a modification to a partitioned target table

- They either use DBMS_STATS to gather stats on one of the involved tables, typically using NO_INVALIDATE=>FALSE, which leads to an immediate invalidation of all affected cursors

- And/Or they perform partition-related operations on one of the tables involved, like truncating, creating or exchanging partitions. Note that it is important to point out that it doesn't matter which object gets DDL / stats applied, so it's not limited to activity on the partitioned target table being modified - any object involved in the query can cause the cursor invalidation

In principle this is another variation of the general theme "Don't mix concurrent DDL with DML/queries on the same objects". Doing so leads to all kinds of side effects, and the way the Oracle engine is designed means that it doesn't cope very well with it.

Here is a simple test case for reproducing the issue, using inserts in this case (either via an INSERT or a MERGE statement):

create table t_target (
id number(*, 0) not null,
pkey number(*, 0) not null,
filler varchar2(500)
)
--segment creation immediate
partition by range (pkey) --interval (1)
(
partition pkey_0 values less than (1)
, partition pkey_1 values less than (2)
, partition pkey_2 values less than (3)
, partition pkey_3 values less than (4)
);

create table t_source
compress
as
select 1 as id, rpad('x', 100) as filler
from
(select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3),
(select /*+ cardinality(1e0) */ null from dual connect by level <= 1e0)
union all
select 1 as id, rpad('y', 100) as filler from dual;

-- Run this again once the DML statement below got started
exec dbms_stats.gather_table_stats(null, 't_source', no_invalidate=>false)

exec dbms_stats.gather_table_stats(null, 't_target', no_invalidate=>false)

----------------------------------------------------------------------------------------------------------------------------------
-- INSERT example --
-- Run above DBMS_STATS calls or any other command that invalidates the cursor during execution to force re-start of the cursor --
----------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

-- alter session set tracefile_identifier = 'insert_restart';

-- alter session set events '10046 trace name context forever, level 12';

-- exec sys.dbms_monitor.session_trace_enable(waits => true, binds => true/*, plan_stat => 'all_executions'*/)

insert /* append */ into t_target (id, pkey, filler)
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 1 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 2 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 3 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
;

-- exec sys.dbms_monitor.session_trace_disable

----------------------------------------------------------------------------------------------------------------------------------
-- MERGE example --
-- Run above DBMS_STATS calls or any other command that invalidates the cursor during execution to force re-start of the cursor --
----------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

merge /* append */ into t_target t
using (
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 1 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 2 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 3 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
) s
on (s.id = t.id)
when not matched then
insert (id, pkey, filler) values (s.id, s.pkey, s.filler)
;
The idea of the test case is to maximise the time until each UNION ALL branch produces data to insert by performing an inefficient HASH JOIN (that in fact generates a Cartesian product and needs to apply a costly REGEXP filter on that huge intermediate result) and forcing a sort on the join result, so rows will only be handed over to the parent operations once all rows have been processed in the join operation - and each branch generates data for a different partition of the target table. Typically it should take several seconds per branch to execute (if you need more time just un-comment the additional REGEXP_REPLACE filters), so you should have plenty of time to cause the invalidation from another session.

This means during the execution of each branch invalidating the cursor (for example by executing either of the two DBMS_STATS calls on the source or target table using NO_INVALIDATE=>FALSE) will lead to a re-start of the statement at the next attempt to write into a new target partition, possibly rolling back rows already inserted into other partitions.
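For example, either of the following, executed from a second session while the INSERT or MERGE is still running, triggers the re-start (a sketch only - the partition name matches the DDL at the top of the test case, and any other invalidating DDL or stats gathering on an involved object works just as well):

exec dbms_stats.gather_table_stats(null, 't_source', no_invalidate=>false)

-- or any partition-related DDL on the target table, for example:
alter table t_target truncate partition pkey_0;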

Diagnostics
If you run the provided INSERT or MERGE statement on newer versions of Oracle that include SQL_EXEC_START and SQL_EXEC_ID in V$ACTIVE_SESSION_HISTORY (or V$SESSION for that matter), and invalidate the cursor during execution and before a partition of the target table gets inserted into for the first time, then you can see that these entries change as the statement restarts.

In such cases the INVALIDATIONS and LOADS increase in V$SQL accordingly and the OBJECT_STATUS changes from INVALID_UNAUTH to VALID again with each re-start attempt. In newer versions where you can configure the "plan_stat" information for SQL trace to "all_executions" you'll find STAT lines for each execution attempt dumped to the trace file, but only a single final EXEC line, where the elapsed time covers all execution attempts.
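If you want to watch the re-starts from a second session, a simple sketch like the following does the job (replace the &sql_id placeholder with the SQL_ID of the running INSERT / MERGE; columns as exposed in 11.2 and later):

-- Cursor statistics: LOADS / INVALIDATIONS increase with each re-start
select sql_id, child_number, object_status, loads, invalidations, executions
from v$sql
where sql_id = '&sql_id';

-- SQL_EXEC_ID / SQL_EXEC_START change with each execution attempt
select sid, sql_id, sql_exec_id, sql_exec_start
from v$session
where sql_id = '&sql_id'
and status = 'ACTIVE';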

The oldest version I've tested was 10.2.0.4, and that one already showed the re-start behaviour, although I would be inclined to think that this wasn't the case with older versions. So if anybody still runs versions older than 10.2.0.4 I would be interested to hear whether the behaviour reproduces or not.

Oracle Access Manager (OAM 11g) Interview Questions.. Do you know enough about OAM ???

Online Apps DBA - Sat, 2016-01-16 19:47
This entry is part 3 of 5 in the series Oracle Access Manager

Oracle Access Manager (OAM) is Oracle’s recommended Single Sign-On (SSO) solution that not only provides heterogeneous platform support but is also integrated with Oracle Fusion Applications. OAM is also the recommended SSO solution for Oracle Fusion Middleware (WebCenter, OBIEE, SOA) and Oracle E-Business Suite, PeopleSoft or Siebel CRM.

A lot of the time in our Oracle Access Manager Training (next batch starts on 31st January, 2016 – register now and get a discount of 200 USD, apply coupon code A2OFF) trainees ask about interview questions related to the Oracle Identity and Access Management Suite.

These questions also help them understand concepts and important points that are helpful in an actual implementation of OAM.

Can you answer these basic questions related to Oracle Identity & Access Management? (Leave your answers in the comments to see how many you get right.)

1) What is the name of the main OAM configuration file and where is it located (DB or file system)?

2) What is the name of the main WebLogic domain configuration file and where is it located?

3) What is the location of OAM Admin Server logs?

4) What is an Oracle Instance in OID?

5) Where are the start/stop scripts for the Admin and Managed Servers located?

6) What is the difference between the OPEN, SIMPLE and CERT communication modes for WebGate to OAM communication?

7) What is the Proxy Port on the OAM Server, and which OAM component connects to it?

Click on the button below to get the Cheat Sheet on Oracle Access Manager, including the answers to the above questions.

The cheat sheet also contains the Basic information on Oracle Internet Directory (OID).

If you want to learn more or wish to discuss challenges you are hitting in your Oracle Access Manager implementation or OAM integration with Oracle E-Business Suite (R12.1/12.2), register for our Oracle Access Manager Training (next batch starts on 31st January, 2016 – register now and get a discount of 200 USD, apply coupon code A2OFF; discounts won’t stay around for long and prices will go up soon).

We are so confident in the quality and value of our training that we provide a 100% money-back guarantee: in the unlikely case that you are not happy after 2 sessions, just drop us a mail before the third session and we’ll refund the full amount.

We provide a dedicated machine on the cloud to practice the OAM implementation, including integration with E-Business Suite, and recordings of the live interactive trainings with lifetime access.

 

Click Here to Subscribe with us to get OAM-OID Cheat Sheet

Stay tuned for more Interview Questions on OAM-EBS 12.2/12.1 Integration in our next post.

The post Oracle Access Manager (OAM 11g) Interview Questions.. Do you know enough about OAM ??? appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Automatic ADF Logout on Browser Close with WebSocket

Andrejus Baranovski - Sat, 2016-01-16 12:15
Every ADF project could have a requirement to handle the browser close event effectively. Unlike desktop applications, where we could handle such events, the browser doesn't send any event to the server when the browser page is closed. This is especially important for transactional data, when a user locks a data row and the lock must be released automatically in case the user closes the browser without unlocking it. Besides transactional data management, it is important for performance improvement - the Web session will be closed instantly and WebLogic resources will be released. There was no reliable solution to handle this; now we can do it with WebLogic 12c and WebSockets. WebLogic 12c supports WebSockets natively, there is no need to configure anything or add libraries.

When the WebSocket channel is closed, it delivers an event to the server on browser close. We can use this event to release locked resources; we need to log out the ADF session to force a ROLLBACK. The sample application implements an HTTP client invoked by the WebSocket server endpoint. This HTTP client simulates browser activity and redirects to adfAuthentication with a logout request, using the original JSESSIONID from the closed browser session. This also works during an accidental client computer power off. Download the sample application where I have described all important implementation steps - ADFSessionHandlingWebSocket.zip.

The sample application is protected with ADF Security; you can log in with the redsam/welcome1 user. It includes Oracle JET libraries, and one of the dashboard tiles implements a JET fragment. JET is not required for the sample to work, it is used only to implement the dashboard UI.

You can observe from the browser log when the WebSocket connection is opened. The connection is established on initial page load, immediately after login. This opens the WebSocket communication channel; as soon as this channel is closed, the WebSocket server endpoint will force a logout of the ADF session from the closed browser:


As soon as WebSocket channel is established, ADF web session ID (JSESSIONID is retrieved from the cookie) is sent to WebSocket server endpoint. Sample logs ID of ADF web session:


I'm going to lock one of the records by invoking the ADF BC lock method (the DB pool is disabled, this will keep the DB connection assigned to the AM):


Lock action for row data is visible in the log, SELECT FOR UPDATE is executed for ID 102:


Let's close the browser now without issuing a ROLLBACK - invoke the Quit action in the browser:


The WebSocket channel will be closed and this will trigger an ADF Authentication servlet request to log out the ADF web session. As the logout happens, ADF resources are released and ADF BC triggers a ROLLBACK in the DB:


Session is closed based on JSESSIONID. With HTTP Client we simulate user logout after browser was closed:


Now let's go over the technical implementation. The WebSocket channel is opened from JavaScript:


This happens on page load, with clientListener triggering connectSocket method:


Method connectSocket in JavaScript is using standard WebSocket API to open connection:


The WebSocket server endpoint is defined with a custom configurator. Through the configurator we can reference the HTTP session initiated in ADF. The HTTP client in the WebSocket endpoint will use it later to simulate the ADF logout:


ADF HTTP session is referenced through WebSocket configurator:


A helper session listener class is defined in web.xml; here I'm copying the JSESSIONID from the cookie into a session attribute (to be able to reference the JSESSIONID in the WebSocket endpoint):


The OnClose method in the WebSocket endpoint is invoked when the connection channel between client and server is closed (the browser is closed). Here I'm invoking the custom handleLogout method:


I'm constructing an HTTP client request with the same JSESSIONID value as was copied from the ADF session. An HTTP GET is executed against the ADF Authentication servlet with a logout request using the JSESSIONID value. This forces the ADF session to log out from the HTTP session simulated by the HTTP client:


Validate DG Broker Config for Switchover

Michael Dinh - Sat, 2016-01-16 12:08

Primary and standby databases are running on the same server using OMF, with listeners on ports 1530/1531.

Note I have – TraceLevel = ‘SUPPORT’

+++ Check listener for DGMGRL service from PRIMARY and STANDBY.
oracle@arrow:hawksan:/media/sf_working/dataguard
$ lsnrctl status listener_las|grep DG -A 1
Service "hawklas_DGB" has 1 instance(s).
  Instance "hawklas", status READY, has 1 handler(s) for this service...
Service "hawklas_DGMGRL" has 1 instance(s).
  Instance "hawklas", status UNKNOWN, has 1 handler(s) for this service...

oracle@arrow:hawksan:/media/sf_working/dataguard
$ lsnrctl status listener_san|grep DG -A 1
Service "hawksan_DGB" has 1 instance(s).
  Instance "hawksan", status READY, has 1 handler(s) for this service...
Service "hawksan_DGMGRL" has 1 instance(s).
  Instance "hawksan", status UNKNOWN, has 1 handler(s) for this service...

Get into the habit of using instance versus database where applicable, for RAC compatibility.

DGMGRL> show database hawklas

Database - hawklas

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawklas

Database Status:
SUCCESS

DGMGRL> show database hawklas DGConnectIdentifier
  DGConnectIdentifier = 'hawklas'
DGMGRL> show instance hawklas DGConnectIdentifier
  DGConnectIdentifier = 'hawklas'
DGMGRL>
+++ Check DG Configuration
oracle@arrow:hawklas:/media/sf_working/dataguard
$ ./check_dg.sh
***** Checking Data Guard Broker Configuration ....
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected.
DGMGRL> show configuration verbose

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawklas - Primary database
    hawksan - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show configuration TraceLevel
  TraceLevel = 'SUPPORT'
DGMGRL> show database hawklas

Database - hawklas

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawklas

Database Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 0 seconds ago)
  Apply Lag:       0 seconds (computed 0 seconds ago)
  Apply Rate:      45.00 KByte/s
  Real Time Query: ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> show instance hawklas DGConnectIdentifier
  DGConnectIdentifier = 'hawklas'
DGMGRL> show instance hawksan DGConnectIdentifier
  DGConnectIdentifier = 'hawksan'
DGMGRL> show instance hawklas StaticConnectIdentifier
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arrow)(PORT=1530))(CONNECT_DATA=(SERVICE_NAME=hawklas_DGMGRL)(INSTANCE_NAME=hawklas)(SERVER=DEDICATED)))'
DGMGRL> show instance hawksan StaticConnectIdentifier
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arrow)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=hawksan_DGMGRL)(INSTANCE_NAME=hawksan)(SERVER=DEDICATED)))'
DGMGRL> show instance hawklas InconsistentProperties
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE

DGMGRL> show instance hawksan InconsistentProperties
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE

DGMGRL> show instance hawklas LogArchiveMaxProcesses
  LogArchiveMaxProcesses = '4'
DGMGRL> show instance hawksan LogArchiveMaxProcesses
  LogArchiveMaxProcesses = '4'
DGMGRL> show instance hawklas DelayMins
  DelayMins = '0'
DGMGRL> show instance hawksan DelayMins
  DelayMins = '0'
DGMGRL> show instance hawklas LogArchiveTrace
  LogArchiveTrace = '0'
DGMGRL> show instance hawksan LogArchiveTrace
  LogArchiveTrace = '0'
DGMGRL> show instance hawklas statusreport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> show instance hawksan statusreport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> exit
oracle@arrow:hawklas:/media/sf_working/dataguard
$
+++ Test connectivity to database using StaticConnectIdentifier from DG Broker
oracle@arrow:hawksan:/media/sf_working/dataguard
$ sqlplus /nolog

SQL*Plus: Release 11.2.0.4.0 Production on Sat Jan 16 08:10:31 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

@> connect sys/oracle@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arrow)(PORT=1530))(CONNECT_DATA=(SERVICE_NAME=hawklas_DGMGRL)(INSTANCE_NAME=hawklas)(SERVER=DEDICATED)))' as sysdba
Connected.
ARROW:(SYS@hawklas):PRIMARY> show parameter name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cell_offloadgroup_name               string
db_file_name_convert                 string
db_name                              string      hawk
db_unique_name                       string      hawklas
global_names                         boolean     FALSE
instance_name                        string      hawklas
lock_name_space                      string
log_file_name_convert                string
processor_group_name                 string
service_names                        string      hawk,hawklas
ARROW:(SYS@hawklas):PRIMARY> select open_mode, database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ WRITE           PRIMARY

ARROW:(SYS@hawklas):PRIMARY> connect sys/oracle@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=arrow)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=hawksan_DGMGRL)(INSTANCE_NAME=hawksan)(SERVER=DEDICATED)))' as sysdba
Connected.
ARROW:(SYS@hawksan):PHYSICAL STANDBY> show parameter name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cell_offloadgroup_name               string
db_file_name_convert                 string
db_name                              string      hawk
db_unique_name                       string      hawksan
global_names                         boolean     FALSE
instance_name                        string      hawksan
lock_name_space                      string
log_file_name_convert                string
processor_group_name                 string
service_names                        string      hawk,hawksan
ARROW:(SYS@hawksan):PHYSICAL STANDBY> select open_mode, database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ ONLY WITH APPLY PHYSICAL STANDBY

ARROW:(SYS@hawksan):PHYSICAL STANDBY>
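Before attempting the switchover it can also be worth cross-checking the roles and lag directly in SQL*Plus, outside the broker. A minimal sketch (V$DATAGUARD_STATS is populated on the standby):

-- On primary and standby: current role and switchover readiness
select database_role, switchover_status from v$database;

-- On the standby: transport and apply lag as computed outside the broker
select name, value, time_computed
from v$dataguard_stats
where name in ('transport lag', 'apply lag');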
+++ switchover to hawksan (STANDBY)

Must connect as sys@tns (typically same as DGConnectIdentifier) for switchover.

oracle@arrow:hawklas:/media/sf_working/dataguard
$ ./clearlog.sh
oracle@arrow:hawklas:/media/sf_working/dataguard
$ dgmgrl
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle@hawklas
Connected.
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawklas - Primary database
    hawksan - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawklas

Database - hawklas

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawklas

Database Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      19.00 KByte/s
  Real Time Query: ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> switchover to hawksan
Performing switchover NOW, please wait...
Operation requires a connection to instance "hawksan" on database "hawksan"
Connecting to instance "hawksan"...
Connected.
New primary database "hawksan" is opening...
Operation requires startup of instance "hawklas" on database "hawklas"
Starting instance "hawklas"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "hawksan"
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawksan - Primary database
    hawklas - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> show database hawklas

Database - hawklas

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      0 Byte/s
  Real Time Query: ON
  Instance(s):
    hawklas

Database Status:
SUCCESS

DGMGRL> exit
++++ Save logs for reference
oracle@arrow:hawklas:/media/sf_working/dataguard
$ ./savelog.sh
`/u01/app/oracle/product/11.2.0/dbhome_1/network/log/listener_las.log' -> `/tmp/listener_las.log'
`/u01/app/oracle/product/11.2.0/dbhome_1/network/log/listener_san.log' -> `/tmp/listener_san.log'
`/u01/app/oracle/diag/rdbms/hawklas/hawklas/trace/alert_hawklas.log' -> `/tmp/alert_hawklas.log'
`/u01/app/oracle/diag/rdbms/hawklas/hawklas/trace/drchawklas.log' -> `/tmp/drchawklas.log'
`/u01/app/oracle/diag/rdbms/hawksan/hawksan/trace/alert_hawksan.log' -> `/tmp/alert_hawksan.log'
`/u01/app/oracle/diag/rdbms/hawksan/hawksan/trace/drchawksan.log' -> `/tmp/drchawksan.log'
+++ switchover to hawklas (STANDBY)

Must connect as sys@tns (typically same as DGConnectIdentifier) for switchover.

oracle@arrow:hawklas:/media/sf_working/dataguard
$ dgmgrl
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle@hawklas
Connected.
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawksan - Primary database
    hawklas - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to hawklas
Performing switchover NOW, please wait...
New primary database "hawklas" is opening...
Operation requires startup of instance "hawksan" on database "hawksan"
Starting instance "hawksan"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "hawklas"
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawklas - Primary database
    hawksan - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawklas

Database - hawklas

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawklas

Database Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      0 Byte/s
  Real Time Query: ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> exit
oracle@arrow:hawklas:/media/sf_working/dataguard
$

Defining Resources in #GoldenGate Studio 12c

DBASolved - Fri, 2016-01-15 16:12

As I’ve been working with the beta of GoldenGate Studio 12c, I have tried to do simple things to see what will break and what is needed to make the process work. One of the things that I like about the studio is that prior to creating any solutions, mappings or projects, you can define what databases and GoldenGate instances will be used during the design process. What I want to show you in this blog post is how to create the database resource and the GoldenGate instance resource.

Creating a Resource:

To create a database resource, after opening GoldenGate Studio, go to the Resources tab. On this tab, you will see that it is empty. This is because no resources have been created yet.

In the left-hand corner of the Resources tab, you should see a folder with a small arrow next to it. When you click on the arrow, you are presented with a context menu that provides three options for resources (Databases, Global Mappings, and GoldenGate Instances).


Database Resources:

Now that you know how to select what resource you want to create, let's create a database resource. To do this, select the database resource from the context menu. This will open up a one-page wizard/dialog for you to fill out the connection information for the database you want to use as a resource.

You will notice there are a few fields that need to be populated. Provide the relevant information needed to connect to the database. Once all the information has been provided, you can test the connection to validate that it works before clicking OK.

Once you click OK, the database resource will be added to the Resources tab under the Databases header.

Notice that the database is automatically connected to once it is created. This allows you to immediately start using the resource for mappings and global mappings.

GoldenGate Instance Resources:

The GoldenGate Instance resources are a little more complex to configure. This is due to the requirement that the GoldenGate environment has to have the GoldenGate Monitoring Agent (aka JAgent (12.1.3.0)) running. This is the same JAgent that is used with the OEM plug-in. If you need more information on how to install and configure the JAgent, you can find it here.

Now, to create a new GoldenGate Instance resource, you follow the same approach as you would to create a database resource; instead of selecting Database, select GoldenGate Instance. This will open up the GoldenGate Instance wizard/dialog for you to fill out. Provide all the information requested.

In setting up the GoldenGate Instance, there are a few things that you need to provide. In my opinion, the names of the items requested in the GoldenGate Information section are misleading. To make this a bit easier, I’m providing an explanation of what each field means.

GoldenGate Version: This is the version of GoldenGate running with the JAgent
GoldenGate Database Type: Database which GoldenGate is running against. There are multiple options here
GoldenGate Port: This is the port number of the manager process
Agent Username: This is the username that is defined in $GGAGENT_HOME/cfg/Config.properties
Agent Password: This is the password that is created and stored in the datastore for the JAgent
Agent Port: This is the JMX port number that is defined in $GGAGENT_HOME/cfg/Config.properties

After providing all the required information, you can then perform a test connection. If the connection is successful, then you can click “ok” to create the GoldenGate Instance resource. If the connection fails, then you need to confirm all your settings.

Once all the resources you need for designing your GoldenGate architecture are created, you will see them under the Resources tab.

Now that you know how to create resources in GoldenGate Studio, it will help you in designing your replication flows.

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

How to Migrate from On-Premises to Azure SQL Database

Pythian Group - Fri, 2016-01-15 15:14

Azure SQL Database is improving its capabilities day by day. The “cloud-first” strategy used by Microsoft is also an incentive to start using Azure’s SQL Database as a Service (DaaS) offering.

In this article I’ll explain all the steps to move your database from on-premises to Azure, using three different approaches. You will need to choose the right one based on your migration strategy and on the database that you are migrating. Don’t forget that not all the features supported on-premises are supported on Azure, so some additional work may be needed prior to the migration.

I’ll show how to migrate a database to Azure SQL Database by using two general methods:

  • Using the SQL Server Management Studio – Recommended when there are no code compatibility issues blocking the cloud migration.
  • Using the SQL Server Data Tools – This approach is highly recommended when there are migration barriers, as the process of detecting and fixing the issues is simpler and more direct.

If you are in doubt about which one to use, the recommendation is to start by using the SQL Server Management Studio approach and, in case of failures, proceed with the SQL Server Data Tools.

Migrate Using SQL Server Management Studio

SQL Server Management Studio (SSMS) offers two direct ways to transfer a database to an Azure SQL Database. To proceed, connect to the SQL Server instance and run either the “SQL Database Deployment Wizard” or the “Export data-tier application” option from SQL Server Management Studio.

img1

If you cannot find the preferred option, you will need to update your SQL Server Management Studio (SSMS), which is now a free standalone product. You can do this by downloading the latest version.

The primary difference between the two options is that the “Deploy“ option requires an existing database server in Azure and will directly deploy the on-premises database to that location. The “Export” option, on the other hand, will create a file to be imported afterwards from the Azure portal. The exported file can be loaded straight to an Azure Blob Storage account, which will help avoid an extra step to copy the file (recommended).

NOTE: For both options, an Azure Blob Storage account with a container and an Azure SQL server are needed.

Migration Steps Using the Deployment Wizard
  1. Right-click the database and select the Deploy Database to Microsoft Azure SQL Database option.
  2. Fill in the required fields.
    The server information is for the target (Azure SQL Database server). The settings to define the price tier are also configured at this stage. The bacpac file will be created locally and then applied on the Azure SQL Server, and because of this, we will need to store the bacpac file in a temporary place in the server.
  3. Click Next.

img3

  4. Review the settings and click Finish.
  5. Wait for the process to complete.
    At this stage the wizard will validate the database, create the BACPAC file, and apply it to the Azure SQL server to create the database.

img5

  6. The database is now ready; use the server admin account to access it on the Azure SQL server.

 

Migration Steps using the Export Data-Tier Application Process
  1. Right-click the database and select the Export Data-tier Application option.
  2. Save the file in an Azure Blob Storage Account. You will need the account name and access key.
  3. Select the container and click Next.
  4. Click Finish, and wait for the processing to complete.
  5. Once the process completes, a “Success” message is seen as shown in the screen below. Otherwise, there are items that need to be resolved before the database can be converted into an Azure SQL Database.
  6. Connect to the Azure portal and choose the SQL Servers.
  7. Select the SQL Server location where the database should be created, and then click the Import Database icon as shown below.
  8. Complete the required settings, including the BACPAC file location, price tier, and server administrator’s password, and then click Create.
  9. Once the process completes, the database will be seen in the list.

011316_1528_HOWDOYOUMIG11.png

Migrate Using SQL Server Data Tools

By using the SSMS to migrate the database with a DACPAC, we don’t have the flexibility needed to properly detect and fix the issues that are found. For this purpose, the SQL Server Data Tools – Business Intelligence is a better option to analyze the database objects. To proceed with this option, follow the steps below.

 

Creating the Main Project
  1. Using the SQL Server Data Tools BI, click the SQL Server Object Explorer tab and connect to the on-premises instance:

011316_1528_HOWDOYOUMIG12.png

  2. Right-click the database to be migrated to Azure, and then click Create New Project.
  3. Add a name to the project and select a path to save the project files.
  4. Click Next and wait for the processing to complete.

011316_1528_HOWDOYOUMIG13.png

  5. After the project is created, right-click the project root, go to properties and change the Target Platform to Azure SQL Database. Save and close.

011316_1528_HOWDOYOUMIG14.png

  6. Right-click the project and click Rebuild. If problems are detected, all the errors will be shown in the Error List.

011316_1528_HOWDOYOUMIG15.png

  7. Go to File->New->Project, give a project name (I will name it AWAzure) and in the Solution option, click Add to solution:

011316_1528_HOWDOYOUMIG16.png

 

 

Creating the New Schema

In order to filter out the non-supported features and find the code to be corrected, the next step is to create a Schema Comparison. Follow the steps shown:

011316_1528_HOWDOYOUMIG17.png

  1. Now, select the options. Click the icon shown.

011316_1528_HOWDOYOUMIG18.png

  2. In the Schema Compare Options window, click to clear the following known non-supported items:
  • Aggregates
  • Application Roles
  • Assemblies
  • Asymmetric Keys
  • Broker Providers
  • Certificates
  • Contracts
  • Defaults
  • Extended Properties
  • Filegroups
  • FileTables
  • Full-Text Stoplists
  • Full-Text Catalogs
  • Full-Text Indexes
  • Message Types
  • Partition Functions
  • Partition Schemes
  • Queues
  • Remote Service Bindings
  • Rules
  • Sequences
  • Services
  • Symmetric Keys
  • User-Defined Types (CLR)
  • XML Indexes
  • XML Schema Collections
  3. Click Ok and save the Schema Comparison, as it can be useful later.
  4. Select the source: the on-premises database.

011316_1528_HOWDOYOUMIG19.png

  5. Select the target: the empty SQL Server project created earlier.

011316_1528_HOWDOYOUMIG20.png

We will have the following:

011316_1528_HOWDOYOUMIG21.png

  6. Now, click Compare. Wait for the process to complete and then click Update (click YES in the confirmation pop-up) to update the selected target.
  7. Next, go to the AWAzure (the target) project, right-click on the root, go to properties, and change the Target Platform to Azure SQL Database.
  8. Click Save and close the screen.

 

Resolving Problems

Now it’s time to resolve the problems. Check the Error List tab and double-click each item found to open the code. Resolve the issue and save the file.

011316_1528_HOWDOYOUMIG22.png

Use the filter to ensure you are dealing with the right project.

011316_1528_HOWDOYOUMIG23.png

 

 

Deploying the Schema

After the schema revision, we can publish the database.

  1. To publish the database, right click the AWAzure project, and click Publish.
  2. Edit the target instance and connect to the Azure SQL Server:

011316_1528_HOWDOYOUMIG24.png

  3. Fill in the database name and click Publish.

img6

Moving the Data

The schema is deployed. Now it is time to move the data. To do this, use the Import and Export Wizard, from the SQL Server Management Studio.

  1. Connect to the on-premises instance, right click the database used as the data source and follow the steps shown:

img7

  2. In the wizard, confirm the server name and the source database, and then click Next.

img8

Now, do the same for the Azure SQL Database.

  3. In the Destination field, select SQL Server Native Client 11.0, fill in the server name, and select the target database.
  4. Click Next.
  5. For this step, keep the first option selected, and then click Next.

Select all the tables and views from the source. Notice that SQL Server will automatically map the target tables on Azure.

About data hierarchy: If foreign key constraints are being used in the database, the data migration should be made in phases to avoid failure. This needs to be analyzed prior to the final migration.
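A quick way to analyze that hierarchy is to list the foreign-key relationships on the source database. The following is just a helper sketch (not part of the wizard), run against the on-premises database:

-- List child -> parent table dependencies to plan the load order
SELECT fk.name AS constraint_name,
       OBJECT_NAME(fk.parent_object_id) AS child_table,
       OBJECT_NAME(fk.referenced_object_id) AS parent_table
FROM sys.foreign_keys AS fk
ORDER BY parent_table, child_table;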

img11

  6. Make sure that all the tables are highlighted and click Edit Mappings.
  7. Select Enable Identity Insert and then click Ok.
  8. Then, in the main wizard window, click Next.

img12

  9. Make sure the Run immediately check box is selected and click Next.

img13

  10. In the following screen, review the options, and then click Finish.

img14

  11. Monitor the data transfer and close the wizard.

img15

 

That’s it. I hope that the steps were clear and this article was useful. If you have questions, do not hesitate to post a comment or contact me on Twitter (@murilocmiranda). “See” you in another article.

 

Discover more about our expertise in SQL Server

Categories: DBA Blogs

Issues with Oracle Secure External Password Stores

Pythian Group - Fri, 2016-01-15 15:09
Background

In the previous article, I covered the basics of how to remove database passwords (credentials) from Oracle monitoring or backup scripts and how to instead secure them using a “Secure External Password Store” (SEPS) and Oracle Wallet.

While this mechanism is far better than putting a plain text credential in a script file, one of the more advanced options, specifically tying the files to the local host with “-auto_login_local”, introduces bugs with Oracle 12cR1 software that are not present with other versions.

This article goes deeper into how to harden the approach, lock-down script access to the local server, and workaround Oracle Wallet limitations and bugs.

 

Issues with the “-auto_login_local” Option

Oracle suggests using the “-auto_login_local” option to secure an Oracle Wallet and only allow it to be used on the server on which it was created and by the user that created it. See MOS document: “How To Prevent The Secure Password Store Wallet From Being Moved to Another Host (Doc ID 1114599.1)

This is supposed to protect from a bad actor obtaining a copy of the file, say from a backup, and being able to use it (and the credentials contained within it) from another machine. Unfortunately, there’s a number of issues and problems with this option:

    1. There are ways to work around the protection it provides.
    2. The option fundamentally doesn’t work with 12.1.0.2 (while it does with 11.2.0.4 and 12.1.0.1). This is clearly an Oracle bug.

 

By-passing the “-auto_login_local” Parameter

The “-auto_login_local” parameter is supposed to protect the Wallet from being used on another server. However, testing proves that this is really easy to work around.

I covered the basics of SEPS and Oracle Wallets in my previous article. To enable the -auto_login_local option, we simply modify the existing Wallet file using the orapki utility:

$ orapki wallet create -wallet "/u01/app/oracle/wallet" -auto_login_local
Oracle PKI Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:

$ mkstore -wrl "/u01/app/oracle/wallet" -listCredential
Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
1: ORCL scott

$

 

Testing on the local machine shows that the connection using the Wallet works as expected:

$ sqlplus /@ORCL

SQL*Plus: Release 11.2.0.4.0 Production on Wed Jan 13 15:27:54 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select SYS_CONTEXT('userenv','IP_ADDRESS') IP_ADDRESS,
  2         SYS_CONTEXT('userenv','DB_NAME') DB_NAME,
  3         SYS_CONTEXT('userenv','CURRENT_USER') CURRENT_USER
  4  from dual;

IP_ADDRESS      DB_NAME      CURRENT_USER
--------------- ------------ ------------
192.168.1.123   ORCL         SCOTT

SQL>

 

It’s not overly simple for a bad actor already inside the network to obtain all of the information they’d need to access the database remotely, but it is possible. Say, for the sake of an example, that a bad actor obtained access to a backup of the OS. From that they could see the DBA scripts and how they connect, obtain the network files such as the sqlnet.ora and tnsnames.ora files, and obtain the Oracle Wallet files.

If a SEPS and Oracle Wallet was not being used, they’d presumably also be able to work out the database credentials as they’d either be hard-coded in the DBA script files or obfuscated in some other plain text file (not hard to reverse engineer).

Copying the cwallet.sso and ewallet.p12 (and maybe the tnsnames.ora) files to a secondary server simulates the actions of the “bad actor”.

But trying to make the same connection from the secondary server (which the “bad actor” controls) shows the “ORA-12578: TNS:wallet open failed” error:

$ sqlplus /@ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 15:38:50 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-12578: TNS:wallet open failed


Enter user-name:

 

This is the expected error when the “-auto_login_local” option is used. However, it’s simple to work around.

MOS Note 1114599.1 suggests that the /etc/hosts file may cause this error. So the first thing to try is changing the name in the hosts file to that of the legitimate DB server:

# cp /etc/hosts /etc/hosts.backup
# cat /etc/hosts.backup | sed -e "s/HACKED_OS/DBSERVER/ig" > /etc/hosts
# su - oracle -c "sqlplus /@ORCL"

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 15:59:53 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-12578: TNS:wallet open failed


Enter user-name:

 

Clearly that didn’t help the situation at all. Undoing that and instead trying to rename the compromised server (separately as root) gives a different error:

# cp /etc/hosts.backup /etc/hosts
# hostname
HACKED_OS

# hostname DBSERVER
# hostname
DBSERVER

# su - oracle -c "sqlplus /@ORCL"

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 15:53:02 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-21561: OID generation failed


Enter user-name:

 

But if we do both:

# cp /etc/hosts /etc/hosts.backup
# cat /etc/hosts.backup | sed -e "s/HACKED_OS/DBSERVER/ig" > /etc/hosts
# hostname DBSERVER
# su - oracle -c "sqlplus /@ORCL"

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 16:05:53 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 16:04:45 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select SYS_CONTEXT('userenv','IP_ADDRESS') IP_ADDRESS,
  2         SYS_CONTEXT('userenv','DB_NAME') DB_NAME,
  3         SYS_CONTEXT('userenv','CURRENT_USER') CURRENT_USER
  4  from dual;

IP_ADDRESS      DB_NAME      CURRENT_USER
--------------- ------------ ------------
192.168.1.200   ORCL         SCOTT

SQL>

 

So if we change both the hostname via the hostname command (or in the /etc/sysconfig/network file) and update the /etc/hosts file, then the -auto_login_local security is by-passed and we can log into the database from a compromised machine using the credentials stored in the Oracle Wallet!

It is important to note here that I’m connecting to a 12.1.0.2 database but using a Wallet file that was created using the 11.2.0.4 software.

 

ORA-12578 with Oracle Database 12.1.0.2

To make matters worse, with Oracle 12.1.0.2 the -auto_login_local option doesn’t even work at all.

Back on the database server (legitimate DBA activity – not simulating a “bad actor”), creating the Oracle Wallet file using 12.1.0.2 software seems to prevent connectivity locally:

$ orapki wallet create -wallet "/u01/app/oracle/wallet" -auto_login_local
Oracle PKI Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:

$ sqlplus /@ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 16:21:05 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-12578: TNS:wallet open failed

 

This is unexpected behaviour and clearly shows an Oracle bug. Taking off the -auto_login_local option (by using -auto_login) shows that the Oracle Wallet does indeed work on this server:

$ orapki wallet create -wallet "/u01/app/oracle/wallet" -auto_login
Oracle PKI Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:

$ sqlplus /@ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 16:22:30 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 16:20:38 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select SYS_CONTEXT('userenv','IP_ADDRESS') IP_ADDRESS,
  2         SYS_CONTEXT('userenv','DB_NAME') DB_NAME,
  3         SYS_CONTEXT('userenv','CURRENT_USER') CURRENT_USER
  4  from dual;

IP_ADDRESS      DB_NAME      CURRENT_USER
--------------- ------------ ------------
192.168.1.123   ORCL         SCOTT

SQL>

 

Hence, there clearly is a bug that’s specific to the 12.1.0.2 software whereby the ORA-12578 error is returned when it shouldn’t be. Repeating the same procedure using 12.1.0.1 or 11.2.0.4 software does not exhibit the same error.

And it’s important to understand that it doesn’t matter which version of the database the connection is to. The problem is specific only to which software was used to create the Wallet file. So creating the Wallet with 11.2.0.4 software just to use against a 12.1.0.2 database works without issue.

 

Hardening Using Other Strategies

Due to the above mentioned issues, other strategies can be used to harden the connections and credential management for use by DBA scripts.

 

Using localhost or 127.0.0.1

The simplest way to prevent the Wallet files from being usable on another server is to change the OracleNET Service Name to an EZconnect string that uses localhost or 127.0.0.1. For example, on the DB server:

$ mkstore -wrl "/u01/app/oracle/wallet" -create -createCredential localhost:1521/ORCL scott
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:
Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Create credential oracle.security.client.connect_string1

$ sqlplus /@localhost:1521/ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 16:33:27 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 16:31:50 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "SCOTT"
SQL>

 

Now if we try using the Oracle Wallet files on a compromised server (with the /etc/hosts and /etc/sysconfig/network spoofing as described previously), the connection attempt routes through the localhost back to the compromised server and not to the database server. Hence a connection attempt gives:

[oracle@HACKED_OS ~]$ sqlplus /@localhost:1521/ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 16:34:27 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-12541: TNS:no listener

 

Thus, by using an EZconnect connection string and localhost instead of the actual server’s hostname, FQDN, or IP address, we’ve avoided the 12.1.0.2 bug and provided more thorough protection than the -auto_login_local option provides anyway.

And of course we could have used 127.0.0.1 instead of localhost – the results are the same.

Finally, remember that the connection string forms the primary key of the 3DES protected data in the Wallet file which can’t be modified without knowing the Wallet’s password.

 

Connecting Through a Dedicated Listener and “Valid Node Checking”

Another way to prevent the Oracle Wallet from being used to access the database from an unauthorized server (actually any server other than the DB server) is to have the scripts connect through a dedicated listener. The dedicated listener’s port can then be restricted using either a firewall or the listener’s “valid node checking” functionality.

For example, the dedicated listener could be configured with the following in the listener.ora file:

MONITORING_LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1599)))
  )
SID_LIST_MONITORING_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = ORCL)
    )
  )
PASSWORDS_MONITORING_LISTENER= (F251EDED29514235)

 

Then for added (though possibly redundant due to the use of localhost) protection, the following entries could be added to the server’s sqlnet.ora:

TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (localhost)

 

As a result, local connections using localhost:1599/ORCL will work, while connections from the compromised server will receive:

ERROR:
ORA-12537: TNS:connection closed

 

 

Preventing Use by Another OS User

Another challenge is to prevent another OS user on the same server from using the Oracle Wallet to connect to the database.

Of course the wallet files should be well secured using OS directory and file security. It can further be obfuscated by making the wallet directory a hidden directory (starting with a period).
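
As a rough sketch (assuming the wallet location used in the examples above; adjust the paths to suit), the lockdown might look like this:

$ chmod 700 /u01/app/oracle/wallet
$ chmod 600 /u01/app/oracle/wallet/ewallet.p12 /u01/app/oracle/wallet/cwallet.sso
$ mv /u01/app/oracle/wallet /u01/app/oracle/.wallet    # optional: hide the directory (remember to update WALLET_LOCATION in sqlnet.ora)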

If the -auto_login_local option is used then other users on the same server will not be able to use the Oracle Wallet credentials and instead will get the “ORA-12578: TNS:wallet open failed” error. Hence, creating the Oracle Wallet using a version other than 12.1.0.2 (regardless of the database version) and enabling the -auto_login_local option is still the best solution.

Beyond OS directory and file restrictions and the -auto_login_local option, the only other method for restricting access from other OS users on the same server would be a database scoped logon trigger or secured application role.
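
For the database-side option, a minimal sketch of a logon trigger (a hypothetical example using the SCOTT account from the earlier examples; note that users with the ADMINISTER DATABASE TRIGGER privilege are not blocked by errors raised in logon triggers) could look like this:

$ sqlplus / as sysdba <<'EOF'
-- Hypothetical sketch: only allow SCOTT to connect from the "oracle" OS account
CREATE OR REPLACE TRIGGER restrict_scott_os_user
AFTER LOGON ON DATABASE
BEGIN
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'SCOTT'
     AND SYS_CONTEXT('USERENV', 'OS_USER') <> 'oracle' THEN
    RAISE_APPLICATION_ERROR(-20001, 'SCOTT may only connect from the oracle OS account');
  END IF;
END;
/
EOF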

 

Conclusions

Using an Oracle Secure External Password Store (SEPS) and Oracle Wallet files is the best way to handle database credentials and passwords in OS scripts. However, a number of significant problems exist:

  1. The -auto_login_local parameter can be bypassed on a compromised server by changing the hostname (in /etc/hosts and /etc/sysconfig/network).
  2. The -auto_login_local parameter doesn’t work at all when created with 12.1.0.2 software.

 

That being said, we can still harden our script’s database access by following some additional suggestions:

  • Create the Oracle Wallet using 11.2.0.4 or 12.1.0.1 software even if connecting to 12.1.0.2 databases.
  • If the Oracle Wallet files were created using 11.2.0.4 or 12.1.0.1, protect from usage by other users by using the -auto_login_local parameter.
  • Prevent use from other servers by not using an OracleNET Service Name in Oracle Wallets and instead using an EZconnect connection string with either localhost or 127.0.0.1 (not the proper DB server’s hostname, FQDN, or IP address).
  • Another strategy is to use a dedicated listener on a dedicated port with listener “valid node checking” to only permit connections from the local server.
  • As a last resort prevent non-authorized IPs or OS Users from connecting using a logon trigger or secure application role within the DB.

 

Discover more about our expertise in Oracle.

Categories: DBA Blogs

2016 AT&T Developer Summit Hackathon: SafeDrop

Oracle AppsLab - Fri, 2016-01-15 14:51

It has become a tradition now for us, AppsLab, the OAUX emerging technologies team, that the first thing in a New Year is to fly to Las Vegas, not solely to test our luck on the casino floor (though some guys did get lucky), but also to attend to the real business–participating in the AT&T Developer Summit Hackathon to build something meaningful, useful, and future-oriented, and hopefully get lucky and win some prizes.

Noel (@noelportugal), who participated in 2013, 2014 and last year, and his team were playing a grand VR trick–casting a virtual gate on a Real neighborhood–not exactly the VR as you know it, but rather, several steps deep into the future. Okay, I will just stop here and not spoil that story.

On the other side of the desk, literally, David, Osvaldo (@vaini11a), Juan Pablo and I (@yuhuaxie) set our sights on countering Mr. Grinch, with the Christmas package theft reports still fresh in people’s minds. We were team SafeDrop, and we carefully forged a safe box for receiving Christmas gift packages – unsurprisingly, we called it SafeDrop! And of course, you can use it to receive package deliveries at other times of the year too, not just at Christmas time.

[Photo: Team SafeDrop (l-r), Os, David, me, Juan Pablo]

I will explain the details of how SafeDrop was built, and how you would use it, in future posts, so stay tuned. Or shall I really explain the inner mechanism of SafeDrop and let Mr. Grinch have the upper hand?

By the way, team SafeDrop won 2nd place for “Best Use of Intel Edison.” Considering the large scale of the AT&T Hackathon, with over 1,400 participants forming 200+ teams, we felt that we were pretty lucky to win prizes. Each team member received a Basis Peak, a fitness watch you’ve read about here, and a Bluetooth headset, the SMS In-Ear Wireless Sport.

[Photo: SafeDrop won 2nd place for Best Use of Intel Edison, with prizes]

We know there are other approaches to prevent Mr. Grinch from stealing gifts, such as this video story shows: Dog poop decoy to trick holiday package thief. Maybe after many tries, Mr. Grinch would just get fed up with the smell and quit.

But we think SafeDrop would do the job on the very first try.

Stay tuned for more details of what we built.

Securing Oracle Monitoring and Backup Scripts

Pythian Group - Fri, 2016-01-15 14:41
Background

Almost every DBA writes and uses various custom scripts to monitor and backup their Oracle databases. However, finding the optimal and most secure way to connect to the database is often not prioritized.

The short summary is that having your script put the username/password (credentials) in any sort of variables or command arguments is the “bad way”. Using an “Oracle Secure External Password Store” (SEPS) and Oracle Wallet is the “better way”. Yet this technology, which has been around since Oracle 10gR2 and which does not require the Advanced Security Option, is often not used.

 

Common Mistakes

Many DBAs will store credentials in their actual scripts, sometimes obfuscating the actual password through some custom mechanism or script. For example, the following is a simplified version of an RMAN backup script found at a client site:

#!/bin/bash
export DB_USER=backup_user
export DB_PASS=`~/.secure_pwd_extractor`

$ORACLE_HOME/bin/rman << EOF
   connect target $DB_USER/$DB_PASS
   shutdown immediate
   startup mount
   backup database;
   alter database open;
EOF

 

The client thought that it was somewhat secure as the actual password wasn’t used as a command line argument to RMAN and was stored in a Linux “hidden file” (starts with a period), which was protected by properly setting OS permissions. However, it dynamically extracted the password from a plain text file (based on the DB_USER environment variable). Another key problem was the fact that the environment variable was exported and hence was part of the environmental profile under which the database was started.

The exported environment variables in this case can be a little bit of a security risk in a couple of ways:

First of all, the complete operating environment, including the exported environment variables under which the database is running, is recorded by the listener when the service is registered. Hence, they are visible in a listener “services” command with “set displaymode verbose”:

[Screenshot: listener “services” output showing the exported DB_USER and DB_PASS environment variables]
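
The same information can be pulled up interactively (a sketch of the lsnrctl commands; the verbose output itself is omitted here):

$ lsnrctl
LSNRCTL> set displaymode verbose
LSNRCTL> services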

Secondly, they may be recorded in OS process files. For example, the pmon process’ operating environment or even the RMAN process’ while running:

[Screenshot: the pmon process operating environment showing the exported variables]

[Screenshot: the running RMAN process operating environment showing the exported variables]
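
On Linux the same exposure can be checked directly from the /proc filesystem (a sketch assuming the ORCL instance and the DB_USER/DB_PASS variables from the script above; reading another process’ environment requires sufficient OS privileges):

$ tr '\0' '\n' < /proc/$(pgrep -f ora_pmon_ORCL)/environ | grep -E 'DB_USER|DB_PASS'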

But most significantly the credentials can be extracted by anyone with access to the script file and/or the underlying credentials file.

 

A Better Approach

A better way to store database credentials for monitoring or backup scripts is to use a “Secure External Password Store” (SEPS) which relies on having the necessary credentials securely stored in an Oracle Wallet file.

Typically a DBA might create their own plain text password listing file for use by scripts and batch jobs, usually with three columns: 1) DB_NAME; 2) DB_USERNAME; 3) DB_PASSWORD. The Oracle Wallet is structured exactly the same way except:

  1. The file is protected with 3DES encryption.
  2. The DB_NAME is really an OracleNET Service Name meaning you can have multiple aliases for the same database.
  3. The passwords are never exposed.
  4. A separate Wallet password is required to manipulate the file’s contents.
  5. Control on whether the Oracle Wallet file is tied to the local machine or whether it can be copied to and used on other machines.

 

The advantages of this approach include:

  • No clear text password in any scripts or files.
  • No possible exposure of passwords by the listener or process operating environments.
  • Control on whether the Oracle Wallet file can be copied and used on another machine.

 

The last point is actually a complex one. A Wallet can be created as an “auto_login” wallet (done by default). To secure it to only work on the local server, it can be changed to “auto_login_local“. However, there are various issues, limitations, and 12c bugs with the additional functionality that Oracle provides. A separate article goes into this in detail.
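
For reference, converting an existing wallet to auto_login_local is done with orapki (a sketch using the wallet location from the Setup section below; the command prompts for the existing Wallet password):

$ orapki wallet create -wallet "/u01/app/oracle/wallet" -auto_login_local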

 

Setup

Setting up a “Secure External Password Store” and Oracle Wallet is actually quite quick and easy:

1) Adjust the sqlnet.ora file to point to an Oracle Wallet location. For example, add the following to the sqlnet.ora file (assuming that the specified directory exists):

WALLET_LOCATION =
   (SOURCE =
     (METHOD = FILE)
     (METHOD_DATA =
       (DIRECTORY = /u01/app/oracle/wallet)
     )
   )
SQLNET.WALLET_OVERRIDE = TRUE

 

2) Create the Oracle Wallet files and add a credential. Two files will actually be created in the specified directory:

  • ewallet.p12 – the actual password protected Wallet file.
  • cwallet.sso – an additional file for auto-login credentials.

 

This can be done as either two separate commands or all in a single command:

$ mkstore -wrl "/u01/app/oracle/wallet" -create
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:

$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential ORCL scott
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:
Create credential oracle.security.client.connect_string1

 

Or as a single command:

$ mkstore -wrl "/u01/app/oracle/wallet" -create -createCredential ORCL scott
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:
Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Create credential oracle.security.client.connect_string1
$

 

Notice that the Wallet is secured by a password. And then the SCOTT credentials are stored within the Wallet. The Wallet password is required to manipulate contents – not for scripts to access the stored credentials.
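
For example, changing or removing a stored credential later (a sketch; both commands prompt for the Wallet password) looks like this:

$ mkstore -wrl "/u01/app/oracle/wallet" -modifyCredential ORCL scott
$ mkstore -wrl "/u01/app/oracle/wallet" -deleteCredential ORCL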

The first parameter after the “-createCredential” argument is an OracleNET Service Name. Just like with any database connection, here we can specify an OracleNET Service Name (from the tnsnames.ora file), or a full connection string, or an EZconnect string.

Hence, we could add a second and third connection to the same database as:

$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential localhost:1521/ORCL monitoring_user
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:
Create credential oracle.security.client.connect_string2

$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential 127.0.0.1:1521/ORCL batch_reporting
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:
Create credential oracle.security.client.connect_string3

$

 

And to list the contents of the Oracle Wallet:

$ mkstore -wrl "/u01/app/oracle/wallet" -listCredential
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
3: 127.0.0.1:1521/ORCL batch_reporting
2: localhost:1521/ORCL monitoring_user
1: ORCL scott
$

 

Now any of the three can be used (from the same OS account: “oracle”) depending on which OracleNET Service Name is referenced:

$ sqlplus /@ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 08:59:12 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 08:56:56 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "SCOTT"
SQL>
$ sqlplus /@localhost:1521/ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 08:59:41 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 08:57:25 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "MONITORING_USER"
SQL>

$ sqlplus /@127.0.0.1:1521/ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 09:00:14 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 08:43:44 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "BATCH_REPORTING"
SQL>
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 09:01:12 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "SYS"
SQL>

 

However, one might point out that the Oracle Wallet file itself must be secured just as a plain text password file would need to be, and that the risks of either being obtained by a “bad actor” are the same. That being said, there are still some benefits of the SEPS method:

  • Passwords are never maintained in plain text in scripts or (hidden) password files.
  • No risk of password exposure at the operating system process list.
  • No risk of password exposure in operating system process environmental files.
  • No risk of exposure from any bad actor with access to the script.
  • No risk of password exposure in OS backups maintained by sysadmins or backup vendors (though the files themselves may still be usable).

Regardless, using the SEPS and Oracle Wallet shouldn’t make anything less secure. One could argue that the security and risk are equal to those of a password file, but they are definitely not worse.

However, there are a few operational disadvantages:

  • The OracleNET Service Name forms the primary key of the entries in the Wallet and hence must be unique. So if another user credential is required for the same DB, an OracleNET alias will be required (as trying to add another user using the same OracleNET Service Name will generate the “Credential already exists” error based on the OracleNET Service Name not being unique).
  • Doesn’t work after connecting to SQLPLUS using the “CONNECT” command.
  • Scripts are now dependent on listener availability for establishing a TNS connection instead of a BEQ connection, though a workaround may be to connect directly to a DISPATCHER port.

 

Advantages Over OS Authentication

Using a SEPS and an Oracle Wallet may seem functionally similar to just using “OS Authentication”. However, it does have a few differences and advantages.

Most DBAs operate and implement scripts under the “Oracle software owner” account, which is typically called “oracle” on Unix or Linux systems, and hence most are able to connect to the database using a SYSDBA connection. One solution would therefore be to use a dedicated OS user account specifically for monitoring or database backup scripts and then rely on OS authentication for database connections. However, this is often not done. And if running scripts from a remote server or centralized monitoring server, then the REMOTE_OS_AUTHENT=TRUE parameter would have to be set, which poses other security risks.

Instead, using a SEPS allows for non-OS authenticated connections to a dedicated and minimally privileged database account even from the software owner (Oracle) account locally, or from any other account remotely.
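
Such an account can be kept deliberately small. A minimal sketch (run interactively, not from a stored script; the exact grants depend on what the monitoring scripts actually query):

$ sqlplus / as sysdba <<'EOF'
-- Hypothetical dedicated monitoring account with minimal privileges
CREATE USER monitoring_user IDENTIFIED BY "SomeStrongPassword1";
GRANT CREATE SESSION TO monitoring_user;
GRANT SELECT_CATALOG_ROLE TO monitoring_user;
EOF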

 

Other Questions

Q: Do I need to re-create my Wallet file as part of a 12c upgrade?
A: NO.

Q: Do I need to backup the Wallet files?
A: Absolutely. Just back them up as you would other Oracle environmental files, such as the sqlnet.ora. Otherwise they’ll need to be re-created.
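
For example (a sketch assuming the wallet location used above and a hypothetical /backup destination):

$ tar czf /backup/oracle_wallet_$(date +%Y%m%d).tar.gz -C /u01/app/oracle wallet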

Q: Is SEPS compatible with “Proxy Authenticated Connections”?
A: YES. See Proxy Authenticated Database connections

Q: Can the Wallet be used with RMAN?
A: YES. It definitely can and should be used by RMAN scripts.

Q: Can the Wallet be used with JDBC thin or ODBC connections?
A: YES to both. See MOS documents 1441745.1 and 1430666.1 for further details.

Q: Are SEPS and Wallet connections compatible with the CDB architecture and PDBs?
A: Of course. PDBs connect through Oracle Services and OracleNET Service Names.

Q: Can we tell from within the database whether a connected session referenced an Oracle Wallet and used SEPS?
A: NO. There doesn’t seem to be any indication from within the database. As far as the DB is concerned, it just seems to be a password authenticated connection. A SQLNET trace shows that the Wallet files are accessed but doesn’t transpose that information into any DB metric or SYS_CONTEXT USERENV data.

 

Preventing the Secure Password Store Wallet File from Being Moved to Another Host

Oracle suggests that we can add additional security by tying the Oracle Wallet file to a specific host. See MOS document 1114599.1. However, this introduces some restrictions, and there are bugs with 12.1.0.2 specifically. As this discussion is complex, this follow-up article has been created.

A good workaround (and a key point from the follow-up article) is to simply use an EZconnect connection in the Oracle Wallet file and to specify localhost or 127.0.0.1.

 

Summarizing Best Practices
  • Continue to properly secure the directory and file permissions of the Wallet files as you would a plain text password file. Further, why not make the entire wallet directory a hidden directory starting with a period. Remember that if the wallet file can be copied to another server then potentially the credentials within it can continue to be used. See the follow-up article for suggested techniques for securing access further.
  • Have scripts connect to the database using dedicated purpose based user accounts with minimal privileges (don’t use SYS or SYSTEM).
  • Use OracleNET Service Name aliases (not duplicate entries, but aliases to an existing entry) in the tnsnames.ora file to allow multiple credentials for the same database (see the sketch after this list).
  • For additional security add Wallet entries based on EZconnect strings using localhost or 127.0.0.1 instead of relying on OracleNET Service Names from the tnsnames.ora file.
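
For example, a single tnsnames.ora entry can carry several aliases (a sketch with hypothetical alias names), and each alias can then be used as a separate key with -createCredential:

ORCL, ORCL_MONITOR, ORCL_BACKUP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL))
  )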

 

Discover more about our expertise in the world of Oracle.

Categories: DBA Blogs

Log Buffer #457: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2016-01-15 14:19

This Log Buffer Edition begins with some great blog posts from Oracle, goes through SQL Server and then ends with MySQL.

Oracle:

  • Ruby-oci8 is a ruby interface for an Oracle Database.
  • Another python graph – one wait event.
  • This article compares FBL and HDL – two of the commonly used data loading tools in Fusion HCM to highlight key differences and similarities.
  • Better Data Modeling: Customizing Oracle Sql Developer Data Modeler (#SQLDevModeler) to Support Custom Data Types.
  • Sample code: Oracle Grid Infrastructure action script for Windows.

SQL Server:

  • Being a database administrator can be very challenging at times when you have to troubleshoot performance issues.
  • Another Reason to Use NOEXPAND hints in Enterprise Edition.
  • Error: Microsoft .NET framework 3.5 service pack 1 is Required.
  • Removing Duplicates from Strings in SQL Server.
  • .NET Core is more interesting than the name might suggest. Whereas the .NET framework provides a consistent runtime for all the applications on a machine.

MySQL:

  • OpenSSH CVE-2016-0777: Details and Mitigation.
  • MySQL Group Replication for MySQL 5.7.10.
  • MySQL 5.7 auto-generated root password.
  • MySQL Support People – Those Who Were There First.
  • Planning the defaults for MySQL 5.8.

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

February 4, 2016: KEC International―Oracle Sales Cloud Customer Reference Forum

Linda Fishman Hoyle - Fri, 2016-01-15 12:33

Join us for another Oracle Customer Reference Forum on February 4, 2016. Mr. Sudip Mazumder, Head of IT - EPC, International Business and Corporate Functions, will talk about the company's need to standardize its sales processes across the globe and its desire for better insight for upsell and cross-sell opportunities. He will explain how KEC is implementing Oracle Sales Cloud integrated to Oracle JD Edwards to get a full 360-degree view of its customers.

KEC International Limited is India's second largest manufacturer of electric power transmission towers and one of the largest Power Transmission Engineering, Procurement and Construction companies in the world.

Register now to attend the live forum on Thursday, February 4, 2016, at 2:30 PM IST and learn more about KEC International’s experience with Oracle Sales Cloud and Oracle JD Edwards.

OAM 11g SAML SSO Integration with AtTask

Online Apps DBA - Fri, 2016-01-15 12:19

WorkForce AtTask is a cloud-based project management solution, and it is capable of talking SAML.

Recently I integrated the AtTask application with Oracle Access Manager 11gR2 for SAML SSO, where AtTask is the SP and OAM is the IDP.

AtTask uses its own repository for users. OAM uses AD LDAP as the authentication store. Like any general SAML SSO integration, metadata has to be exported and imported into each provider.

The AtTask documentation details how to import metadata and configure mapping attributes.

In this post, I would like to detail what is different in this product for enabling SAML integration.

After enabling SAML SSO integration and importing metadata as per the above documentation, we can test the SSO setup using “Test the connection”. During this process, the user is redirected to the IDP, the user submits credentials, and a SAML token is generated and passed on to the SP (AtTask).

The test output page contains the $$NAMEID value coming from the IDP; typically it would be the userid, and this is the Federated ID. Federated ID is an attribute of the AtTask user profile that is key for mapping the user based on the SAML assertion.

Log in to AtTask as administrator. Go to the user profile and update the Federated ID with the $$NAMEID output value. That’s it. You’re all set.

Note that regular login will not work when SAML SSO is enabled in AtTask. Hence it is recommended to update all user profiles with the Federated ID value before enabling SAML SSO.

Hope this helps.

The post OAM 11g SAML SSO Integration with AtTask appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Edit Use Case for ADF 12.2.1 Dashboard with Masonry Layout

Andrejus Baranovski - Fri, 2016-01-15 11:53
I was researching ways to implement edit functionality for a dashboard created with ADF 12.2.1 masonry layout. Check my previous post, where the dashboard was implemented - ADF 12.2.1 Responsive Dashboard with Masonry Layout. I have decided to use the panel drawer component, which brings editable fields in the context of the data displayed in the dashboard.

Panel drawer icon is displayed in the top right corner of the dashboard:


User can click on the icon and this shows editable form for the data rendered in the dashboard:


The dashboard is implemented with masonry layout, which makes it possible to re-arrange tiles. This proves to be useful when editing data. I can move the tile with the chart and change the salary value. The chart is synched and the new value becomes visible. Two actions are done at once - data update and change review in the chart:


One more use case - validation. While editing salary, we can check the tile with minimum and maximum salary values. This could help to understand what salary value can be accepted. For example, if too low a salary is set, a validation error is returned. The user can cross-check this in the chart with minimum and maximum values:


Panel drawer is defined in the same group, where masonry layout block is located:


Edit form is rendered through ADF region, this means it can be reusable:


Download sample application - DashboardApp_v4.zip.

Patents Rethought: Khan Academy Did the Right Thing

Michael Feldstein - Fri, 2016-01-15 10:49

By Michael Feldstein

To recap what’s happened so far:

Since then, I had a little more time to look at the actual legal language of the agreement and reflect on the larger edupatent problem. And I’ve come to the conclusion that Khan Academy did the right thing by adopting the agreement. We should feel good about what they’ve done. And given the realities that software patents exist and defensive patents are therefore a necessary evil, we should encourage other educational patent holders to do as Khan has done and adopt the same agreement.

The Innovator’s Agreement is actually quite clever. To recap the basic idea, companies that adopt the agreement give the inventors who are named on the patent application veto power over the patent’s assertion, except in cases where the company is acting in self-defense in response to legal action against it. More than just a pledge, it is a legally binding document. (Text of the agreement is here.)

The agreement travels with the patent, so if the company sells it then the new owner will still be bound by the agreement:

Assignee acknowledges and agrees that the above promises are intended to run with the Patents and are binding on any future owner, assignee or exclusive licensee who has been given the right to enforce any claims of the Patents against third parties. Assignee covenants with Inventors that any assignment or transfer of its right, title, or interest herein will be conveyed with the promises herein as an encumbrance.

The inventors do get to pass along assertion veto rights to their heirs:

[T]he license shall pass to the heirs of an inventor in the case that the inventor is deceased[…]

But if I’m reading the whole passage on those rights correctly, they can’t pass it along in a way that would damage the original intent (like selling it to a patent troll, for example), and there is a poison pill that basically says any protection from patent assertion that the inventor has a right to confer is invalid if it is granted under duress (for example, as a settlement payment in a threatened lawsuit):

Any sublicense granted by the Inventors under this section must be without threat or additional consideration; otherwise, the sublicense will be considered void ab initio. This license to the Inventors is not assignable, although the license shall pass to the heirs of an inventor in the case that the inventor is deceased, and the inventors, individually or jointly, may appoint a representative who may act on their behalf in granting sublicenses under this section. Assignee acknowledges and agrees that the promises in section 2 and 4 are intended to benefit third parties, except in the case of an assertion of claims of the Patents authorized under section 2.

There’s even a provision that says the company that holds the patent can assert in defense of third parties that are getting sued for patent infringement:

[The Company can assert the patent] against an Entity that has filed, maintained, or voluntarily participated in a patent infringement lawsuit against another in the past ten years, so long as the Entity has not instituted the patent infringement lawsuit defensively in response to a patent litigation threat against the Entity.

Overall, the Innovator’s Agreement is a pretty potent tool for deterring patent assertion. And while I would prefer that the power granted by the agreement be in the hands of a trusted third party, the protection of this agreement is still a big step forward, particularly if it is adopted widely enough that there are many parties holding such rights to different patents. The biggest thing that is missing is a strong motivation for the patent holders to assert their patents in the defense of a third party. For example, would Big LMS Company Patent Holder assert a patent in defense of Little Ed Tech Startup if the latter were being sued by Big Textbook Company that happened to also be a major business partner of Big LMS Company Patent Holder? I doubt it. In fact, I doubt that the third-party defense is likely to ever be invoked, for a variety of reasons. Secondarily, I’m not sure that the engineers named on the patents are always the best appointed defenders of education against patent assertion.

On the other hand, the Innovator’s Agreement has several virtues that my proposal does not. First, it already exists and has been vetted by Twitter’s undoubtedly super-expensive lawyers. Second, nobody would have to create a trust, fund it, and convince various patent holders to put their faith in it.

Under the circumstances, I think Khan Academy did the right thing by adopting the Innovator’s Agreement, and I think we should all encourage other holders of education-relevant patents to do the same. And by “encourage,” I mean both praise those that do adopt it and pressure those that don’t. Schools could even go so far as to make institution of the agreement a contractual requirement. Creation of a trust is always a possibility later down the line, using the Innovator’s Agreement as a template. (Twitter was kind enough to release the text of the agreement under a Creative Commons license.)

The post Patents Rethought: Khan Academy Did the Right Thing appeared first on e-Literate.

The Cloud Shakes Up Status Quo IT

WebCenter Team - Fri, 2016-01-15 09:33
"Today, the number-one reason organizations are not moving to the cloud is security. However, tomorrow, security will be one of the most important drivers to move to the cloud." With that prediction, Oracle CIO Mark Sunday sets the tone for our 11 cloud predictions, which suggest that IT teams that remain overly wary of the cloud risk getting trampled in 2016. Cloud-based enterprise resource planning (ERP) offers another example of changes to come. In the past, even cloud-friendly CIOs have thought, "ERP's the last thing we'll move to the cloud." Now, as digital business models demand a more nimble operating foundation—think of how the video game industry's payment models have changed as gamers buy online instead of in stores—cloud ERP looks like the answer.

Other predictions revolve around rising demand for cloud-based integration, an upheaval in the tech supplier landscape, and the expected 100 percent shift of dev and test to the cloud. Do you have your own predictions? Let's talk about it on Twitter @murph_cj.
—By Chris Murphy, Oracle Director of Cloud Content

Ed Tech Patent Update: The Innovator’s Agreement

Michael Feldstein - Fri, 2016-01-15 07:09

By Michael Feldstein

Carl Straumsheim has a good piece out on the Khan Academy patent Inside Higher Ed today. Much of it is a primer on the uses and limitations of defensive patents, but there is a piece on the specific nature of the patent pledge that Khan Academy has signed that I missed. The pledge, originally created by Twitter, is quite similar to my own proposal in a number of ways. It turns the decision-making regarding offensive use of the patent over to another party and, importantly, the agreement travels with the patent, even if it changes hands:

The IPA is a new way to do patent assignment that keeps control in the hands of engineers and designers. It is a commitment from Twitter to our employees that patents can only be used for defensive purposes. We will not use the patents from employees’ inventions in offensive litigation without their permission. What’s more, this control flows with the patents, so if we sold them to others, they could only use them as the inventor intended.

Shame on me for not doing my homework.

The big difference between this pledge and the one I propose is that I am suggesting that the third party be a trust rather than the inventing engineer. This has several virtues. First, engineers die, and not all of them are going to be equally vigilant in protecting education. Can the engineer sell this right to somebody else? Can the right be inherited? If it isn’t inherited, is the patent then unencumbered? Giving the rights to a trust lays this concern to rest. It also creates a proactive deterrent because the trust could sue anybody that is asserting an ed tech patent.

What I take from the details of Twitter’s pledge is that my proposal is probably legally viable. The original pledge just needs to be adapted to serve the specific needs of education.

The post Ed Tech Patent Update: The Innovator’s Agreement appeared first on e-Literate.

Another python graph – one wait event

Bobby Durrett's DBA Blog - Thu, 2016-01-14 17:28

Here is another graph that I created in Python with Pyplot:

[Graph: wait counts per hour (top) and average wait time per hour (bottom) for a single wait event]

This is onewait.py on my github repository. plotwait.py has the plotting code. perfq.py has the query. db.py has the database access code.

I blanked out the database name in the example graph to hide it.

This is a graphical version of my onewaitevent.sql script. It queries the AWR looking at a particular wait event per hour. You look at the number of wait events in an hour to see how busy the system was and then the average elapsed time for that hour. Also, you set the smallest number of waits to include so you can drop hours where nothing is going on.
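
For reference, the kind of AWR query involved looks roughly like this (a sketch, not the actual perfq.py query; it assumes a single-instance database and ignores counter resets across instance restarts):

select sn.end_interval_time snapshot_time,
       e.total_waits
         - lag(e.total_waits) over (order by e.snap_id) waits,
       round((e.time_waited_micro
               - lag(e.time_waited_micro) over (order by e.snap_id))
             / nullif(e.total_waits
               - lag(e.total_waits) over (order by e.snap_id), 0) / 1000, 3) avg_ms
from   dba_hist_system_event e
       join dba_hist_snapshot sn
         on  sn.snap_id = e.snap_id
        and  sn.dbid = e.dbid
        and  sn.instance_number = e.instance_number
where  e.event_name = 'db file sequential read'
order by sn.snap_id;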

In the example graph you can find times where the average time for a db file sequential read is high and the system is busy. You use the top graph to see how busy the system is and the bottom to see where the average time spikes.

Still just an experiment but I thought I would pass it along. It isn’t that hard to create the graph in Python and I seem to have a lot of flexibility since I’m writing code instead of using an existing program like Excel.

Bobby

 

Categories: DBA Blogs

The Massive Decline In Larger Education Company Market Caps

Michael Feldstein - Thu, 2016-01-14 16:36

By Phil Hill

After our coverage of Blackboard’s CEO change last week, we were both interviewed by the Washington Business Journal, with the following lede:

Analysts and sources I spoke with Monday, both on and off the record, said the decision to bring on Bill Ballhaus as CEO was a combination of Bhatt failing to make progress building the company’s business and lacking the experience needed to successfully run a company of that scale. And that means at least several years before Providence Equity Partners, which owns a majority of the company after paying $1.64 billion for it in July 2011, begins actively marketing the company for sale, according to industry experts.

Earlier this week I wrote about Apollo Education Group, parent of the University of Phoenix, putting itself up for sale due to its weakening financial position. I also noted that I doubt that McGraw-Hill Education is going to be able to go public in the near-term. Part of the reason for this latter observation is the dramatic fall of Pearson in the stock market, triggered by its warnings that it would miss earnings estimates. In Audrey’s excellent year-end post on the business of ed tech, she noted:

Private equity firms sure love buying ed-tech companies. Perhaps because the stock market’s sorta “meh” about them.

Despite the talk of record investments in ed tech and digital content, the reality of the business of big education companies is not so robust. In fact, there is a massive decline in market caps for many of these large companies, as investors are seeing real weakness. The weakness can be felt in private equity acquisitions as well as public market valuations. To get a sense of the latter, and of how dramatic the decline has been in just the past year, I combined the market capitalization of four publicly-traded companies – Pearson, Wiley, Apollo Group, and BridgePoint (Ashford University) – over the past two years.

[Chart: composite market value of Pearson, Wiley, Apollo Group, and BridgePoint over the past two years]

Admittedly, this is a somewhat arbitrary combination of companies based on public markets, but I wanted to get a broader sense of market valuation than just one company at a time. Just between these four companies, they have lost $13.5 billion in market value in the past 10 months, more than half of that drop from Pearson alone. To put this in perspective, the record private investment in learning technology, as described by Ambient this month, was less than half this amount for 2015.

[Table: Ambient 2015 summary of private investment in learning technology]

At e-Literate we seldom talk about stock prices, as we are not focused on investments. But this aggregate view is worth considering to understand that all is not rosy for technology-based companies serving the education sector.

The post The Massive Decline In Larger Education Company Market Caps appeared first on e-Literate.

YouTube : 6 Months and 1000 Subscribers!

Tim Hall - Thu, 2016-01-14 13:12

That’s right, it’s been a touch over 6 months and my YouTube channel has just hit the 1000 subscriber mark. :)

This YouTube experience has been quite odd. My plan was to try and upload a video every weekday for the first 2 months, and I came pretty close to hitting that target. Once I had got a bit of content on the channel, I was inevitably going to kick back a little. After all, there is the website, the blog, life and that annoyance they call work to consider. I think a realistic target to aim for is 1-2 videos a week.

There will inevitably be periods (like this last 2 weeks) where I don’t hit that, but just like writing, I’m not going to beat myself up about it. It’s the normal ebb and flow of things.

So far the technical videos have almost all been based on command line examples, which is similar to my approach when doing technical presentations. That’s suited me while I’ve been finding my feet, but over time there will be a number of different formats. I’m probably going to avoid having my image in the videos. I don’t like being in front of the camera that much, as I have to suck my gut in for extended periods of time. :) I liked doing the car videos, as I was less self conscious about the camera while I was concentrating on the driving, but the sound quality was really bad, so I’m not sure if I’ll do more of those. We shall see.

The channel content will become more eclectic over time, because the DBA job is quite eclectic. There’s no point trying to bullshit about being a specialist, because I’m not one.

I would quite like to try my hand at some totally non-technical stuff, like a vlog maybe, but if I do that, it will probably end up on another channel, so it doesn’t dilute this channel too much. We’ll see what happens.

Anyway, to everyone who has subscribed so far, thank you. Thanks to all the folks that have done cameos for me so far. I’ve got a whole bunch more coming. If you’ve not already seen my channel, give it a try and see what you think.

Here’s to the next 1000 subscribers!

Cheers

Tim…

 


Solving the Ed Tech Patent Problem

Michael Feldstein - Thu, 2016-01-14 13:04

By Michael Feldstein

You may have heard that Khan Academy has filed for several patents. Audrey Watters has written a really strong piece providing the details of the filings in the context of the history of ed tech patents and showing why some academics feel that the patent system clashes with the values upon which academia was built. In the process, she excavates some of my personal history in the Blackboard patent war. While I am sympathetic to arguments against ed tech or software patents on principle, my own personal reasons for getting involved with that fight were more utilitarian. I believed then, as I do now, that patents threaten to kill innovation in educational technology due to the specific characteristics of the market. The outcome of Blackboard v. Desire2Learn did not end that threat, although it did temporarily reduce it. The conversation being provoked by Khan Academy’s filings offers a new opportunity to come up with a more permanent solution.

Understanding the Threat

The Blackboard patent fight provides us with a good case study to understand the nature of the problem. Blackboard was granted a patent related to roles and permissions in an LMS. Specifically, they patented the ability of a system to have a single user set up simultaneously as a teacher in one course and a student in another. When academics hear the patent boiled down this simply, several objections usually come up. The first is obviousness. It doesn’t seem like such a simple function should be patentable. The second is prior art. It seems like other, earlier systems probably went down this path. This latter argument took on particular resonance, since many of these systems were developed by and for academics. There was a sense at the time that Blackboard effectively stole a basic concept behind the LMS from its customers and was using it to limit their alternatives. But neither of these problems were at the heart of what made the Blackboard patent so dangerous. Rather, it was the patent’s breadth of applicability. If it stood, there probably wasn’t a mainstream LMS in existence at the time that wouldn’t infringe. When Blackboard sued Desire2Learn for infringement (or “asserted” their patent, in legal parlance), they were attempting to affirm the validity of the patent. Had they won in court, or had Desire2Learn settled out of court, then Blackboard could have sued any LMS developer and, in principle, any LMS adopter.

But it was worse than that. Blackboard had other patent filings, some of which were also quite broad. It was reasonable to worry that the company was preparing to pursue a strategy known as “royalty stacking,” in which they could charge competitors multiple royalties for multiple patents. What would this do to the LMS market? In a big enough industry like, say, mobile phones, companies like Apple and Samsung can sue the pants off each other for patent infringement and still stay happily in business. The LMS business is nothing like that. Instructure, the only new entrant that has gained substantial traction in North American higher ed in the last decade, did so by spending twice as much money as they made, with most of that going to sales and marketing. If Instructure had to pay multiple royalties to Blackboard, there probably would be no Instructure today, and there almost certainly would be no publicly traded company. A successful royalty stacking strategy would have killed the possibility of new competition and given Blackboard a permanent choke hold on the market. I didn’t care to fight about which LMS company should gain or lose market share or whether software patents are good or bad in principle, but I sure did care about whether there would be long-term competition and innovation in educational technology.

Fighting Back

Once the danger became apparent, different parties took different strategies to fight the threat, depending on their resources and legal standing. Desire2Learn fought back both in court and by challenging the validity of the patent through US Patent and Trademark Office (USPTO). The Sakai, Moodle, and ATutor open source communities enlisted the help of the Software Freedom Law Center (SFLC) to file separate patent challenges on their behalf. Having no lawyers and no legal standing myself, I took a different tack. I sought to make the patent strategy a loser economically. I believed that Blackboard’s customers and prospects would reject the company if only they fully understood the patent, the strategy, and the broader implications. So I did my best to provide the necessary education. I wrote a plain English translation of the patent claim. I started a Wikipedia page to document the history of LMS development, to identify potential prior art for the legal case but also, and perhaps more importantly, to place Blackboard’s assertions of innovation in an historic context. I interviewed legal experts and explained court proceedings. Ultimately, it was up to the university decision-makers, and not me, to decide how Blackboard’s actions should affect their purchasing decisions. But I did everything I could to make sure that they had the knowledge that they needed to make informed decisions. As did other bloggers at the time.

In one important way, the strategy was far more effective than I ever imagined it could be (or than I frankly intended it to be). Blackboard’s reputation was already shaky, for a variety of reasons ranging from product bugs to poor customer service to high-pressure sales tactics to a proclivity for buying up popular alternatives and then killing them off. The patent issue crystalized their brand image, in much the same way that we sometimes see a single gaffe or incident crystalize nascent opinions of a politician—like John Kerry’s “for it before I was against it” or George Bush’s “Heckuva job, Brownie.” There is no doubt in my mind—none—that Blackboard would have substantially better share today had they done nothing different other than refraining from filing that law suit. I always argued that the goal should be to incentivize better market behaviors, and that treatment of companies should change when their behaviors change. And yet to this day, there are institutions that will not even consider evaluating Blackboard because the stakeholders think it’s a bad company, even when they can’t fully articulate why they think that. Even a decade after the patent fight started, and five years and two CEOs after it ended.

And that’s the problem. Yes, we established the principle that a company that asserts an ed tech patent would be punished in the marketplace. Some ed tech leaders learned that lesson and will remember it. Others will not. Even inside Blackboard, even among employees who were there at the time, the institutional memory is fading. Meanwhile, none of the entrepreneurs who entered the space since 2010 have any reason to have even heard of the dispute. And customers don’t remember the details either. The deterrent lesson of Blackboard v. Desire2Learn is time-limited, it is poorly targeted, and it is fading.

We need a better, more permanent solution.

A Complex Problem

So that was Blackboard, circa 2006. What about Khan Academy now? What does it mean that they are filing for patents? What does it mean that they have signed a pledge not to assert their patents except against other parties that have asserted patents? What is the motivation here? What is the effect?

I believe that Khan Academy has good reason to file for patents even if their intentions are entirely noble. Perhaps especially then. Online learning in general and code academies in particular are large and growing markets. Typically, corporate entities only get targeted by patent suits when they are rich enough to produce a big payout. But Khan Academy is a potential target for the opposite reason; they have the ability to reduce the total size of the market by satisfying demand with free offerings. Why pay for a course at Acme Code Academy when you can get it for free from Khan Academy?

Are there actors out there who are morally bankrupt enough to sue a non-profit foundation for giving away free education? If there aren’t now, there will be. Sooner or later, ed tech will get its Martin Shkreli. As long as software patents are legal, Khan Academy is vulnerable to infringement suits. And while owning patents offers far from perfect protection against this danger, it is still better than no protection at all.

I don’t worry about what Khan Academy is likely to do with its software patents. But I do worry about whoever the next owner of the patents might be. I have heard Sal Khan muse that his tutorial videos might still be around and useful to somebody 100 years from now. Unfortunately, that same possibility exists for his software patents. Patent pledges are nice, but they only survive as long as the good will of the current patent owner lasts. Sal Khan may keep his guns locked up safely in a cabinet, but the fact remains that there are now a few more guns in the world that could get out onto the streets and in the wrong hands.

We need a better solution.

A Better Solution?

Needless to say, I have thought about this problem for a long time. There may be a way for good actors in educational technology to get the defensive protection that they need from patent ownership while still reducing the long-term risk to the people that they serve, but they will have to go further than a patent pledge. What I have in mind is a legal structure that combines characteristics of a patent pool and a land conservation easement.

It would work something like this:

  • A legal trust would be formed by a third party with moral credibility, such as the Electronic Frontier Foundation or the Berkman Center.
  • Patent holders would cede their right to assert their patents to the trust, except under specific conditions of self-defense, in perpetuity.
  • In return, they would gain the right to assert other patents in the pool in specified self-defense circumstances, as overseen by the trust.
  • The trust would also be empowered to assert the patents in the pool against third parties that are asserting educational technology patents, as a general deterrent against ed tech patent assertion.

I am not a lawyer and am not sure that this would work. But it seems plausible.

If there are any patent holders out there who are potentially interested in this approach, please know that I want to do anything I can to help. I may be able to help coax other patent holders to the table, and I certainly would be happy to promote the good efforts of any company willing to make such a commitment.

 

The post Solving the Ed Tech Patent Problem appeared first on e-Literate.