
Feed aggregator

Information about SSL “Poodle” vulnerability CVE-2014-3566

Oracle Security Team - Wed, 2014-10-15 12:09

Hello, this is Eric Maurice.

A security vulnerability affecting Secure Sockets Layer (SSL) v3.0 was recently publicly disclosed (Padding Oracle On Downgraded Legacy Encryption, or “Poodle”). This vulnerability is the result of a design flaw in SSL v3.0. Note that this vulnerability does not affect TLS and is limited to SSL 3.0, which is generally considered an obsolete protocol. A number of organizations, including OWASP, previously advised against using this protocol, as weaknesses affecting it have been known for some time.

This “Poodle” vulnerability has received the identifier CVE-2014-3566.

A number of Oracle products do not support SSL v3.0 out of the box, while some Oracle products allow SSL v3.0 to be enabled. Based on this vulnerability as well as the existence of other issues with this protocol, in instances where SSL v3.0 is supported but not needed, Oracle recommends permanently disabling SSL v3.0.


Furthermore, Oracle is assessing the use of SSL v3.0 across its corporate systems and those managed on behalf of Oracle customers (e.g., Oracle Cloud). Oracle is actively deprecating the use of this protocol. In instances where Oracle identifies a possible impact to cloud customers, Oracle will work with the affected customers to determine the best course of action. Oracle recommends that cloud customers investigate their use of SSL v3.0 and discontinue the use of this protocol to the extent possible.

For more information, see the "Poodle Vulnerability CVE-2014-3566" page located on OTN at http://www.oracle.com/technetwork/topics/security/poodlecve-2014-3566-2339408.html


Please look at latest Oct 2014 Oracle patching

Grumpy old DBA - Wed, 2014-10-15 11:23
This one looks like the real thing ... getting advice to "not skip" the patching process for a whole bunch of things included here.

I'm just saying ...
Categories: DBA Blogs

The Next-Generation of Accounts Payable Processing is Here

WebCenter Team - Wed, 2014-10-15 09:25

Automate 80% of invoice processing, eliminating paper, manual data entry and associated errors - while optimizing cash management, accruals and financial statement accuracy.

Learn more about Accounts Payable Processing and AP Process Automation!

Webcast: The Next-Generation of Accounts Payable Processing is Here

Learn how A/P Process Automation can deliver significant results in terms of cost savings, processing time and resource efficiency to your existing Oracle and non-Oracle ERP applications.

Patching Time

Jeremy Schneider - Wed, 2014-10-15 09:17

Just a quick note to point out that the October PSU was just released. The database has a few more vulnerabilities than usual (31), but they are mostly related to Java and the high CVSS score of 9 only applies to people running Oracle on Windows. (On other operating systems, the highest score is 6.5.)

I did happen to glance at the announcement on the security blog, and I thought this short blurb was worth repeating:

In today’s Critical Patch Update Advisory, you will see a stronger than previously-used statement about the importance of applying security patches. Even though Oracle has consistently tried to encourage customers to apply Critical Patch Updates on a timely basis and recommended customers remain on actively-supported versions, Oracle continues to receive credible reports of attempts to exploit vulnerabilities for which fixes have been already published by Oracle. In many instances, these fixes were published by Oracle years ago, but their non-application by customers, particularly against Internet-facing systems, results in dangerous exposure for these customers. Keeping up with security releases is a good security practice and good IT governance.

The Oracle Database was first released in a different age than we live in today. Ordering physical parts involved navigating paper catalogs and faxing order sheets to the supplier. Physical inventory management relied heavily on notebooks and clipboards. Mainframes were processing data but manufacturing and supply chain had not yet been revolutionized by technology. Likewise, software base installs and upgrades were shipped on CDs through the mail and installed via physical consoles. The feedback cycle incorporating customer requests into software features took years.

Today, manufacturing is lean and the supply chain is digitized. Inventory is managed with the help of scanners and real-time analytics. Customer communication is more streamlined than ever before and developers respond quickly to the market. Bugs are exploited maliciously as soon as they’re discovered and the software development and delivery process has been optimized for fast response and rapid digital delivery of fixes.

Here’s the puzzle: Cell phones, web browsers and laptop operating systems all get security updates installed frequently. Even the Linux OS running on your servers is easy to update with security patches. Oracle is no exception – they have streamlined delivery of database patches through the quarterly PSU program. Why do so many people simply ignore the whole area of Oracle database patches? Are we stuck in the old age of infrequent patching activity even though Oracle themselves have moved on?

Repetition

For many, it just seems overwhelming to think about patching. And honestly – it is. At first. The key is actually a little counter-intuitive: it’s painful, so you should in fact do it a lot! Believe it or not, it will actually become very easy once you get over the initial hump.

In my experience working at one small org (two DBAs), the key is doing it regularly. Lots of practice. You keep decent notes and set up scripts/tools where it makes sense, and then you start to get a lot faster after several times around. By the way, my thinking has been influenced quite a bit here by the devops movement (like Jez Humble’s ’12 Berlin talk and John Allspaw’s ’09 Velocity talk). I think they have a nice articulation of this basic repetition principle. And it is very relevant to people who have Oracle databases.
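
As one tiny example of the kind of check worth scripting (a minimal sketch, assuming an 11g-style database where bundle patches register themselves via catbundle.sql), you can see which PSUs a database has already applied with:

set linesize 150
column action_time format a30
column comments format a60
-- each applied PSU or bundle patch leaves a row here (written by catbundle.sql on 11g)
select action_time, action, version, comments
from   dba_registry_history
order  by action_time;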

So with all that said, happy patching! I know that I’ll be working with these PSUs over the next week or two. I hope that you’ll be working with them too!

Oracle fanboy and blind to the truth?

Tim Hall - Wed, 2014-10-15 02:46

I had a little exchange with someone on Twitter last night, which was initiated by him complaining about the cost of Oracle and predicting their demise. Once that was over I spent a little time thinking about my “fanboy status”.

If you know anything about me, you will know I’m an Oracle fanboy. I’ve spent nearly 20 years doing this stuff and the last 14+ years writing about it on the internet. If I wasn’t into it, it would be a pretty sorry state of affairs. So does that mean I’m totally blinded like all those Apple fanboys and fangirls? No. I just don’t choose to dwell on a lot of the negative and instead focus on the positive, like the cool bits of tech. The common topics I hear are:

  • Oracle costs too much : I could bleat on about the cost of Oracle and what features are missing from specific editions, but quite frankly that is boring. Unless you’ve been under a rock for the last 35+ years you should know the score. If it’s got the name Oracle associated with it, it’s probably going to be really expensive. That’s why people’s jaws drop when they find out Oracle Linux is free. They are just not used to hearing the words Oracle and free in the same sentence. If you want free or cheap, you can find it. What people often don’t consider is total cost of ownership. Nothing is ever free. The money just gets directed in different ways.
  • The cheap/free RDBMS products will kill Oracle : This talk has been going on since I started working with Oracle 20 years ago. It used to worry me. It doesn’t any more. So far it hasn’t materialized. Sure, different products have eaten into the market share somewhat and I’m sure that will continue to happen, but having a head start over the competition can sometimes be a significant advantage. I work with other RDBMS products as well and it is sometimes infuriating how much is missing. I’m not talking about those headline Oracle features that 3 people in the world use. I’m talking about really simple stuff that is missing that makes being a DBA a total pain in the ass. Typically, these gaps have to be filled in by separate products or tools, which just complicates your environment.
  • It’s just a bit bucket : If your company is just using the database as a bit bucket and you do all the “cool” stuff in the middle tier, then Oracle databases are probably not the way to go for you. Your intellectual and financial focus will be on the middle tier. Good luck!
  • But company X use product Y, not Oracle : I’m so bored of this type of argument. Facebook use MySQL and PHP. Yes, but they wrote their own source code transformer (HipHop) to turn PHP into C++ and they use so much stuff in front of MySQL (like Memcached) that they could probably do what they do on top of flat files. Companies talk about their cool stuff and what makes them different. They are not so quick to talk about what is sitting behind the ERP that is running their business…
  • NoSQL/Hadoop/Document Stores will kill RDBMS : Have you ever had a real job in industry? Have you ever done anything other than try to write a twitter rip-off in Ruby for your school project? Do you know how long it took COBOL to die? (it still isn’t dead by the way). There is a massive investment in the I.T. industry around relational databases. I’m not saying they are the perfect solution, but they aren’t going anywhere in the near future. Good luck running your ERP on any of these non-RDBMS data stores! What has changed is that people now realise RDBMS is not the right solution for every type of data store. Using the right product for the right job is a good thing. There are still plenty of jobs where an RDBMS is the right tool.
  • The cloud will kill Oracle : The cloud could prove to be the biggest spanner in the works for many IT companies. If we start using cloud-based services for everything in the Software as a Service (SaaS) model, who cares what technology sits behind it? Provided our applications work and they meet our SLAs, who cares how many bodies are running around like headless chickens in the background to keep the thing running? For Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), I don’t think cloud makes so much of a difference. In these cases, you are still picking the type of database or the type of OS you need. They are not hidden from you like in the SaaS model. I guess the impact of cloud will depend on your definition of cloud and the route the market eventually takes. What people also seem to forget is the big winners in the cloud game will be the big companies. When the world is only using SaaS, you are going to have to work for Amazon, Oracle, Microsoft etc. if you want to be a techie. The ultimate goal of cloud is consolidation and centralisation, so you will have to work for one of these big players if you want to be anything other than a user. I find it interesting that people are betting on the cloud as a way of punishing the big companies, when actually it is likely to help them and put us folks out of business…

The post has got a bit long and tedious, so I’m going to sign off now.

In conclusion, yes I’m a fanboy, but I’m not oblivious to what’s going on outside Oracle. I like playing with the tech and I try to look on the positive side where my job-related technology is concerned. If I focussed on the negative I would have to assume that Oracle is doomed and we will all die of Ebola by the end of the week…

Cheers

Tim…

 


Oracle system statistics: Display AUX_STATS$ with calculated values and formulas

Yann Neuhaus - Wed, 2014-10-15 01:13

System statistics can be gathered in NOWORKLOAD or WORKLOAD mode. Depending on the mode, different values are set, and the others are calculated - derived from them. We can see the defined values in SYS.AUX_STATS$, but here is a script that shows the calculated ones as well.

With no system statistics or with NOWORKLOAD statistics, the values of IOSEEKTIM (latency in ms) and IOTFRSPEED (transfer rate in bytes/ms) are set, and SREADTIM (time to read 1 block in ms) and MREADTIM (the same for a multiblock read) are calculated from them. MBRC depends on the default or the db_file_multiblock_read_count setting.
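
For example, with the default IOSEEKTIM=10 ms and IOTFRSPEED=4096 bytes/ms, an 8k block size and MBRC=8 (the values behind the first output below), the formulas evaluate to:

SREADTIM = IOSEEKTIM + db_block_size        / IOTFRSPEED = 10 + 8192/4096   = 12 ms
MREADTIM = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED = 10 + 8192*8/4096 = 26 ms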

With WORKLOAD statistics, the SREADTIM and MREADTIM as well as MBRC are measured and those are the ones that are used by the optimizer.
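
As a reminder, here is how each mode is gathered with the standard DBMS_STATS calls - a minimal sketch, to be run with sufficient privileges:

-- noworkload mode: measures IOSEEKTIM, IOTFRSPEED and CPUSPEEDNW
exec dbms_stats.gather_system_stats('NOWORKLOAD')

-- workload mode: take a snapshot, let a representative workload run, then stop
exec dbms_stats.gather_system_stats('START')
-- ... representative workload runs here ...
exec dbms_stats.gather_system_stats('STOP')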

Here is my script:

set echo off
set linesize 200 pagesize 1000
column pname format a30
column sname format a20
column pval2 format a20

select pname,pval1,calculated,formula from sys.aux_stats$ where sname='SYSSTATS_MAIN'
model
  reference sga on (
    select name,value from v$sga 
        ) dimension by (name) measures(value)
  reference parameter on (
    select name,decode(type,3,to_number(value)) value from v$parameter where name='db_file_multiblock_read_count' and ismodified!='FALSE'
    union all
    select name,decode(type,3,to_number(value)) value from v$parameter where name='sessions'
    union all
    select name,decode(type,3,to_number(value)) value from v$parameter where name='db_block_size'
        ) dimension by (name) measures(value)
partition by (sname) dimension by (pname) measures (pval1,pval2,cast(null as number) as calculated,cast(null as varchar2(60)) as formula) rules(
  calculated['MBRC']=coalesce(pval1['MBRC'],parameter.value['db_file_multiblock_read_count'],parameter.value['_db_file_optimizer_read_count'],8),
  calculated['MREADTIM']=coalesce(pval1['MREADTIM'],pval1['IOSEEKTIM'] + (parameter.value['db_block_size'] * calculated['MBRC'] ) / pval1['IOTFRSPEED']),
  calculated['SREADTIM']=coalesce(pval1['SREADTIM'],pval1['IOSEEKTIM'] + parameter.value['db_block_size'] / pval1['IOTFRSPEED']),
  calculated['   multi block Cost per block']=round(1/calculated['MBRC']*calculated['MREADTIM']/calculated['SREADTIM'],4),
  calculated['   single block Cost per block']=1,
  formula['MBRC']=case when pval1['MBRC'] is not null then 'MBRC' when parameter.value['db_file_multiblock_read_count'] is not null then 'db_file_multiblock_read_count' when parameter.value['_db_file_optimizer_read_count'] is not null then '_db_file_optimizer_read_count' else '= _db_file_optimizer_read_count' end,
  formula['MREADTIM']=case when pval1['MREADTIM'] is null then '= IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED' end,
  formula['SREADTIM']=case when pval1['SREADTIM'] is null then '= IOSEEKTIM + db_block_size        / IOTFRSPEED' end,
  formula['   multi block Cost per block']='= 1/MBRC * MREADTIM/SREADTIM',
  formula['   single block Cost per block']='by definition',
  calculated['   maximum mbrc']=sga.value['Database Buffers']/(parameter.value['db_block_size']*parameter.value['sessions']),
  formula['   maximum mbrc']='= buffer cache size in blocks / sessions'
);
set echo on

Here is an example with default statistics:

PNAME                               PVAL1 CALCULATED FORMULA
------------------------------ ---------- ---------- --------------------------------------------------
CPUSPEEDNW                           1519
IOSEEKTIM                              10
IOTFRSPEED                           4096
SREADTIM                                          12 = IOSEEKTIM + db_block_size        / IOTFRSPEED
MREADTIM                                          26 = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED
CPUSPEED
MBRC                                               8 = _db_file_optimizer_read_count
MAXTHR
SLAVETHR
   maximum mbrc                           117.152542 = buffer cache size in blocks / sessions
   single block Cost per block                     1 by definition
   multi block Cost per block                  .2708 = 1/MBRC * MREADTIM/SREADTIM

You see the calculated values for everything. Note the 'maximum mbrc', which limits multiblock reads when the buffer cache is small. It divides the buffer cache size (at startup - it can depend on ASMM and AMM settings) by the sessions parameter.
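
If you want to check that last value outside of the script, the same computation can be done with a simple query - a small sketch assuming, as above, that the buffer cache size is taken from v$sga:

select (select value from v$sga where name = 'Database Buffers')
     / ( (select to_number(value) from v$parameter where name = 'db_block_size')
       * (select to_number(value) from v$parameter where name = 'sessions') ) "maximum mbrc"
from dual;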

Here is an example with workload system statistics gathering:

PNAME                               PVAL1 CALCULATED FORMULA
------------------------------ ---------- ---------- --------------------------------------------------
CPUSPEEDNW                           1511
IOSEEKTIM                              15
IOTFRSPEED                           4096
SREADTIM                            1.178      1.178
MREADTIM                              .03        .03
CPUSPEED                             3004
MBRC                                    8          8 MBRC
MAXTHR                            6861824
SLAVETHR
   maximum mbrc                           114.983051 = buffer cache size in blocks / sessions
   single block Cost per block                     1 by definition
   multi block Cost per block                  .0032 = 1/MBRC * MREADTIM/SREADTIM

Here all values are explicitly set.

And here is an example with Exadata system statistics, which define noworkload values and also set the MBRC (see Chris Antognini's post about it):

PNAME                               PVAL1 CALCULATED FORMULA
------------------------------ ---------- ---------- --------------------------------------------------
CPUSPEEDNW                           1539
IOSEEKTIM                              16
IOTFRSPEED                         204800
SREADTIM                                       16.04 = IOSEEKTIM + db_block_size        / IOTFRSPEED
MREADTIM                                       18.28 = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED
CPUSPEED
MBRC                                   57         57 MBRC
MAXTHR
SLAVETHR
   maximum mbrc                           114.983051 = buffer cache size in blocks / sessions
   single block Cost per block                     1 by definition
   multi block Cost per block                    .02 = 1/MBRC * MREADTIM/SREADTIM

And finally, here is a noworkload system statistics result, but with the db_file_multiblock_read_count explicitly set to 128:

PNAME                               PVAL1 CALCULATED FORMULA
------------------------------ ---------- ---------- --------------------------------------------------
CPUSPEEDNW                           1539
IOSEEKTIM                              15
IOTFRSPEED                           4096
SREADTIM                                          17 = IOSEEKTIM + db_block_size        / IOTFRSPEED
MREADTIM                                         271 = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED
CPUSPEED
MBRC                                             128 db_file_multiblock_read_count
MAXTHR
SLAVETHR
   maximum mbrc                           114.983051 = buffer cache size in blocks / sessions
   single block Cost per block                     1 by definition
   multi block Cost per block                  .1245 = 1/MBRC * MREADTIM/SREADTIM

Here you see that the MBRC in noworkload mode comes from the value set by db_file_multiblock_read_count, rather than from the value 8 which the optimizer uses by default when the parameter is not set. And the MREADTIM is calculated from that I/O size.

For more historical information about system statistics and how multiblock reads are costed (the index vs. full table scan choice), see my article in the latest Oracle Scene.

As usual if you find anything to improve in that script, please share.

12c: Access Objects Of A Common User Non-existent In Root

Oracle in Action - Tue, 2014-10-14 23:56


In a multitenant environment, a common user is a database user whose identity and password are known in the root and in every existing and future pluggable database (PDB). Common users can connect to the root and perform administrative tasks specific to the root or PDBs. There are two types of common users:

  • All Oracle-supplied administrative user accounts, such as SYS and SYSTEM
  • User-created common users, whose names must start with C## or c##.

When a PDB having a user-created common user is plugged into another CDB and the target CDB does not have a common user with the same name, the common user in the newly plugged-in PDB becomes a locked account.
To access such common user’s objects, you can do one of the following:

  • Leave the user account locked and use the objects of its schema.
  • Create a common user with the same name as the locked account.

Let’s demonstrate …

Current scenario:

Source CDB : CDB1
- one PDB (PDB1)
- Two common users C##NXISTS and C##EXISTS

Destination CDB : CDB2
- No PDB
- One common user C##EXISTS

Overview:
- As user C##NXISTS, create and populate a table in PDB1@CDB1
- Unplug PDB1 from CDB1 and plug into CDB2 as PDB1_COPY
- Open PDB1_COPY and Verify that

  •  user C##NXISTS has not been created in root
  • users C##NXISTS and C##EXISTS both have been created in PDB1_COPY. The account of C##EXISTS is open whereas the account of C##NXISTS is locked.

- Unlock user C##NXISTS account in PDB1_COPY.
- Try to connect to pdb1_copy as C##NXISTS  – fails with internal error.
- Create a local user  LUSER in PDB1_COPY with privileges on C##NXISTS’  table and verify that LUSER can access C##NXISTS’ table.
- Create user C##NXISTS in root with PDB1_COPY closed. Account of
C##NXISTS is automatically opened on opening PDB1_COPY.
- Try to connect as C##NXISTS to pdb1_copy – succeeds

Implementation:

– Setup –

CDB1>sho con_name

CON_NAME
------------------------------
CDB$ROOT

CDB1>sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO
3 PDB1                           READ WRITE NO

CDB1>select username, common from cdb_users where username like 'C##%';

no rows selected

- Create 2 common users in CDB1
    - C##NXISTS
    - C##EXISTS

CDB1>create user C##EXISTS identified by oracle container=all;
     create user C##NXISTS identified by oracle container=all;

     col username for a30
     col common for a10
     select username, common from cdb_users where   username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##NXISTS                      YES
C##EXISTS                      YES
C##NXISTS                      YES
C##EXISTS                      YES

- Create user C##EXISTS  in CDB2

CDB2>sho parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      cdb2

CDB2>sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO

CDB2>create user C##EXISTS identified by oracle container=all;
     col username for a30
     col common for a10

     select username, common from cdb_users where username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##EXISTS                      YES

- As user C##NXISTS, create and populate a table in PDB1@CDB1

CDB1>alter session set container=pdb1;
     alter user C##NXISTS quota unlimited on users;
     create table C##NXISTS.test(x number);
     insert into C##NXISTS.test values (1);
     commit;

- Unplug PDB1 from CDB1

CDB1>alter session set container=cdb$root;
     alter pluggable database pdb1 close immediate;
     alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

CDB1>select name from v$datafile where con_id = 3;

NAME
-----------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/example01.dbf

- Plug in PDB1 into CDB2 as PDB1_COPY

CDB2>create pluggable database pdb1_copy using '/home/oracle/pdb1.xml'
     file_name_convert = ('/u01/app/oracle/oradata/cdb1/pdb1','/u01/app/oracle/oradata/cdb2/pdb1_copy');

sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO
3 PDB1_COPY                      MOUNTED

– Verify that C##NXISTS user is not visible as PDB1_COPY is closed

CDB2>col username for a30
col common for a10
select username, common from cdb_users where username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##EXISTS                      YES

- Open PDB1_COPY and Verify that
  . users C##NXISTS and C##EXISTS both have been created in PDB.
  . Account of C##EXISTS is open whereas account of C##NXISTS is  locked.

CDB2>alter pluggable database pdb1_copy open;
col account_status for a20
select con_id, username, common, account_status from cdb_users  where username like 'C##%' order by con_id, username;

CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------      ----------      --------------------------
1 C##EXISTS                      YES        OPEN
3 C##EXISTS                      YES        OPEN
3 C##NXISTS                      YES        LOCKED

– Unlock user C##NXISTS account on PDB1_COPY

CDB2>alter session set container = pdb1_copy;
     alter user C##NXISTS account unlock;
     col account_status for a20
     select con_id, username, common, account_status from cdb_users   where username like 'C##%' order by con_id, username;

CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------     -------------  ---------------------------
 3 C##EXISTS                      YES        OPEN
 3 C##NXISTS                      YES        OPEN

– Try to connect as C##NXISTS to pdb1_copy – fails with internal error

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ERROR:
ORA-00600: internal error code, arguments: [kziaVrfyAcctStatinRootCbk: !user],
[C##NXISTS], [], [], [], [], [], [], [], [], [], []

- Since user C##NXISTS cannot connect to pdb1_copy, we can lock the account again

CDB2>conn sys/oracle@localhost:1522/pdb1_copy as sysdba
     alter user C##NXISTS account lock;

     col account_status for a20
     select username, common, account_status from dba_users     where username like 'C##%' order by username;

USERNAME                       COMMON     ACCOUNT_STATUS
------------------------------ ---------- --------------------
C##EXISTS                      YES        OPEN
C##NXISTS                      YES        LOCKED

– Now if C##NXISTS tries to log in to PDB1_COPY, ORA-28000 is returned instead of the internal error

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ERROR:
ORA-28000: the account is locked

How to access C##NXISTS objects?

SOLUTION – I

- Create a local user in PDB1_COPY with appropriate object privileges on C##NXISTS’ table

CDB2>conn sys/oracle@localhost:1522/pdb1_copy  as sysdba

     create user luser identified by oracle;
     grant select on c##nxists.test to luser;
     grant create session to luser;

–Check that local user can access common user C##NXISTS tables

CDB2>conn luser/oracle@localhost:1522/pdb1_copy;
     select * from c##nxists.test;
X
----------
1

SOLUTION – II :  Create the common user C##NXISTS in CDB2

- Check that C##NXISTS has not been created in CDB$root

CDB2>conn sys/oracle@cdb2 as sysdba
     col account_status for a20
     select con_id, username, common, account_status from cdb_users    where username like 'C##%' order by con_id, username;

CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------   -------------     -------------------------
1 C##EXISTS                      YES        OPEN
3 C##EXISTS                      YES        OPEN
3 C##NXISTS                      YES        LOCKED

- Try to create user C##NXISTS with PDB1_COPY open – fails

CDB2>create user c##NXISTS identified by oracle;
create user c##NXISTS identified by oracle
*
ERROR at line 1:
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1_COPY
ORA-01920: user name 'C##NXISTS' conflicts with another user or role  name

- Close PDB1_COPY, create user C##NXISTS in root, and verify that the account is automatically unlocked on opening PDB1_COPY

CDB2>alter pluggable database pdb1_copy close;
     create user c##NXISTS identified by oracle;
     alter pluggable database pdb1_copy open;

     col account_status for a20
     select con_id, username, common, account_status from cdb_users   where username like 'C##%' order by con_id, username;

CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
----------   ------------------------------ ----------      --------------------
1 C##EXISTS                      YES        OPEN
1 C##NXISTS                      YES        OPEN
3 C##EXISTS                      YES        OPEN
3 C##NXISTS                      YES        OPEN

– Connect to PDB1_COPY as C##NXISTS after granting appropriate privilege – Succeeds

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ERROR:
ORA-01045: user C##NXISTS lacks CREATE SESSION privilege; logon denied
Warning: You are no longer connected to ORACLE.

CDB2>conn sys/oracle@localhost:1522/pdb1_copy as sysdba
     grant create session to c##nxists;
     conn c##nxists/oracle@localhost:1522/pdb1_copy

CDB2>sho con_name

CON_NAME
------------------------------
PDB1_COPY

CDB2>sho user

USER is "C##NXISTS"

CDB2>select * from test;

X
----------
1

References:
http://docs.oracle.com/database/121/DBSEG/users.htm#DBSEG573


Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 3

Pythian Group - Tue, 2014-10-14 14:59

Today’s blog post is part three of seven in a series dedicated to Deploying Private Cloud at Home, where I will demonstrate how to configure OpenStack Identity service on the controller node. We have already configured the required repo in part two of the series, so let’s get started on configuring Keystone Identity Service.

  1. Install keystone on the controller node.
    yum install -y openstack-keystone python-keystoneclient

    OpenStack uses a message broker to coordinate operations and status information among services. The message broker service typically runs on the controller node. OpenStack supports several message brokers including RabbitMQ, Qpid, and ZeroMQ. I am using Qpid as it is available on most distros.

  2. Install the Qpid message broker server.
    yum install -y qpid-cpp-server

    Now modify the Qpid configuration file to disable authentication by changing the line below in /etc/qpidd.conf:

    auth=no

    Now start the qpidd service and enable it to start at server startup:

    chkconfig qpidd on
    service qpidd start
  3. Now configure keystone to use the MySQL database:
    openstack-config --set /etc/keystone/keystone.conf \
       database connection mysql://keystone:YOUR_PASSWORD@controller/keystone
  4. Next, create the keystone database and its database user by running the queries below at your MySQL prompt as root:
    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'YOUR_PASSWORD';
  5. Now create database tables
    su -s /bin/sh -c "keystone-manage db_sync" keystone

    Currently we don’t have any user accounts that can communicate with OpenStack services and the Identity Service. So we will set up an authorization token to use as a shared secret between the Identity Service and the other OpenStack services, and store it in the configuration file.

    ADMIN_TOKEN=$(openssl rand -hex 10)
    echo $ADMIN_TOKEN
    openstack-config --set /etc/keystone/keystone.conf DEFAULT \
       admin_token $ADMIN_TOKEN
  6. Keystone uses PKI tokens by default. Now create the signing keys and certificates to restrict access to the generated data:
    keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
    chown -R keystone:keystone /etc/keystone/ssl
    chmod -R o-rwx /etc/keystone/ssl
  7. Start the keystone identity service and enable it to start at startup:
    service openstack-keystone start
    chkconfig openstack-keystone on

    The Keystone Identity Service also stores expired tokens in the database. We will create the crontab entry below to purge the expired tokens:

    (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
    echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
  8. Now we will create an admin user for keystone and define roles for the admin user:
    export OS_SERVICE_TOKEN=$ADMIN_TOKEN
    export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
    keystone user-create --name=admin --pass=Your_Password --email=Your_Email
    keystone role-create --name=admin
    keystone tenant-create --name=admin --description="Admin Tenant"
    keystone user-role-add --user=admin --tenant=admin --role=admin
    keystone user-role-add --user=admin --role=_member_ --tenant=admin
    keystone user-create --name=pythian --pass=Your_Password --email=Your_Email
    keystone tenant-create --name=pythian --description="Pythian Tenant"
    keystone user-role-add --user=pythian --role=_member_ --tenant=pythian
    keystone tenant-create --name=service --description="Service Tenant"
  9. Now we create a service entry for the identity service
    keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
    keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
    --publicurl=http://controller:5000/v2.0 \
    --internalurl=http://controller:5000/v2.0 \
    --adminurl=http://controller:35357/v2.0
  10. To verify the Identity Service installation, first unset the temporary OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables:
    unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
  11. Request an authentication token by using the admin user and the password you chose for that user
    keystone --os-username=admin --os-password=Your_Password \
      --os-auth-url=http://controller:35357/v2.0 token-get
    keystone --os-username=admin --os-password=Your_Password \
      --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 \
      token-get
  12. We will save the required parameters in admin-openrc.sh as below
    export OS_USERNAME=admin
    export OS_PASSWORD=Your_Password
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://controller:35357/v2.0
  13. Next, check if everything is working fine and keystone interacts with the OpenStack services. We will source the admin-openrc.sh file to load the keystone parameters:
    source /root/admin-openrc.sh
  14. List Keystone tokens using:
    keystone token-get
  15. List Keystone users using:
    keystone user-list

If all the above commands give you output, your Keystone Identity Service is all set up, and you can proceed to the next steps. In part four, I will discuss how to configure and set up the Image Service to store images.

Categories: DBA Blogs

October 2014 Critical Patch Update Released

Oracle Security Team - Tue, 2014-10-14 13:49

Hello, this is Eric Maurice again.

Oracle today released the October 2014 Critical Patch Update. This Critical Patch Update provides fixes for 154 vulnerabilities across a number of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Enterprise Manager Grid Control, Oracle E-Business Suite, Oracle Supply Chain Product Suite, Oracle PeopleSoft Enterprise, Oracle JDEdwards EnterpriseOne, Oracle Communications Industry Suite, Oracle Retail Industry Suite, Oracle Health Sciences Industry Suite, Oracle Primavera, Oracle Java SE, Oracle and Sun Systems Product Suite, Oracle Linux and Virtualization, and Oracle MySQL.

In today’s Critical Patch Update Advisory, you will see a stronger than previously-used statement about the importance of applying security patches. Even though Oracle has consistently tried to encourage customers to apply Critical Patch Updates on a timely basis and recommended customers remain on actively-supported versions, Oracle continues to receive credible reports of attempts to exploit vulnerabilities for which fixes have been already published by Oracle. In many instances, these fixes were published by Oracle years ago, but their non-application by customers, particularly against Internet-facing systems, results in dangerous exposure for these customers. Keeping up with security releases is a good security practice and good IT governance.

Out of the 154 vulnerabilities fixed with today’s Critical Patch Update release, 31 are for the Oracle Database. All but 3 of these database vulnerabilities are related to features implemented using Java in the Database, and a number of these vulnerabilities have received a CVSS Base Score of 9.0.

This CVSS 9.0 Base Score reflects instances where the user running the database has administrative privileges (as is typical with pre-12 Database versions on Windows). When the database user has limited (or non-root) privilege, then the CVSS Base Score is 6.5 to denote that a successful compromise would be limited to the database and not extend to the underlying Operating System. Regardless of this decrease in the CVSS Base Score for these vulnerabilities for most recent versions of the database on Windows and all versions on Unix and Linux, Oracle recommends that these patches be applied as soon as possible because a wide compromise of the database is possible.

The Java Virtual Machine (Java VM) was added to the database with the release of Oracle 8i in early 1999. The inclusion of Java VM in the database kernel allows Java stored procedures to be executed by the database. In other words, by running Java in the database server, Java applications can benefit from direct access to relational data. Not all customers implement Java stored procedures; however support for Java stored procedures is required for the proper operation of the Oracle Database as certain features are implemented using Java. Due to the nature of the fixes required, Oracle development was not able to produce a normal RAC-rolling fix for these issues. To help protect customers until they can apply the Oracle JavaVM component Database PSU, which requires downtime, Oracle produced a script that introduces new controls to prevent new Java classes from being deployed or new calls from being made to existing Java classes, while preserving the ability of the database to execute the existing Java stored procedures that customers may rely on.

As a mitigation measure, Oracle did consider revoking all Public Grants to Java Classes, but such an approach is not feasible with a static script. Due to the dynamic nature of Java, it is not possible to identify all the classes that may be needed by an individual customer. Oracle’s script is designed to provide effective mitigation against malicious exploitation of Java in the database to customers who are not deploying new Java code or creating Java code dynamically.

Customers who regularly develop in Java in the Oracle Database can take advantage of a new feature introduced in Oracle 12.1. By running their workloads with Privilege Analysis enabled, these customers can determine which Java classes are actually needed and remove unnecessary Grants.
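
As an illustration, here is a minimal sketch of that 12.1 Privilege Analysis workflow using the DBMS_PRIVILEGE_CAPTURE package (the capture name is made up for this example, and licensing prerequisites should be checked):

begin
  -- define and start a database-wide capture
  dbms_privilege_capture.create_capture(
    name => 'java_usage_capture',
    type => dbms_privilege_capture.g_database);
  dbms_privilege_capture.enable_capture('java_usage_capture');
end;
/

-- ... run the representative workload, then:

begin
  dbms_privilege_capture.disable_capture('java_usage_capture');
  dbms_privilege_capture.generate_result('java_usage_capture');
end;
/

-- the privileges (including Java grants) actually used show up in the DBA_USED_* views
select * from dba_used_privs where capture = 'java_usage_capture';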

18 of the 154 fixes released today are for Oracle Fusion Middleware. Half of these fixes are pass-through fixes to address vulnerabilities in third-party components included in Oracle Fusion Middleware distributions. The most severe CVSS Base Score reported for these Oracle Fusion Middleware vulnerabilities is 7.5.

This Critical Patch Update also provides fixes for 25 new Java SE vulnerabilities. The highest reported CVSS Base Score for these Java SE vulnerabilities is 10.0. This score affects one Java SE vulnerability. Out of these 25 Java vulnerabilities, 20 affect client-only deployments of Java SE (and 2 of these vulnerabilities are browser-specific). 4 vulnerabilities affect client and server deployments of Java SE. One vulnerability affects client and server deployments of JSSE.

Rounding out this Critical Patch Update release are 15 fixes for Oracle and Sun Systems Product Suite, and 24 fixes for Oracle MySQL.

Note that on September 26th 2014, Oracle released Security Alert CVE-2014-7169 to deal with a number of publicly-disclosed vulnerabilities affecting GNU Bash, a popular open source command line shell incorporated into Linux and other widely used operating systems. Customers should review this Security Alert and apply the relevant security fixes to the affected systems, as its publication so close to that of the October 2014 Critical Patch Update did not allow these Security Alert fixes to be included in the Critical Patch Update release.

For More Information:

The October 2014 Critical Patch Update is located at http://www.oracle.com/technetwork/topics/security/cpuoct2014-1972960.html

Security Alert CVE-2014-7169 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2014-7169-2303276.html. Furthermore, a list of Oracle products using GNU Bash is located at http://www.oracle.com/technetwork/topics/security/bashcve-2014-7169-2317675.html.

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance/


What Do You Need to Secure EHR? [VIDEO]

Chris Foot - Tue, 2014-10-14 11:29

Transcript

Electronic health records are becoming a regular part of the healthcare industry, but are organizations taking the right measures to secure them?

Hi, welcome to RDX. EHR systems can help doctors and other medical experts monumentally enhance patient treatment, but they also pose serious security risks.

SC Magazine reported that an employee of Memorial Hermann Health System in Houston accessed more than 10,000 patient records over the course of six years. Social Security numbers, dates of birth and other information were stolen.

In order to deter such incidents from occurring, health care organizations must employ active security monitoring of their databases. That way, suspicious activity can readily be identified and acted upon.
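
On the Oracle side, one possible building block for that kind of monitoring is a 12c unified audit policy - a minimal sketch, where the EHR schema and table names are hypothetical:

-- audit every access to a (hypothetical) patient records table
create audit policy patient_record_access
  actions all on ehr.patient_records;

audit policy patient_record_access;

-- review the captured activity, e.g. for unusual users or hours
select event_timestamp, dbusername, action_name, object_name
from   unified_audit_trail
where  unified_audit_policies = 'PATIENT_RECORD_ACCESS';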

Thanks for watching! Be sure to join us next time for more security best practices and tips.


Uber won't want drivers in the future

Steve Jones - Tue, 2014-10-14 09:30
I'm an Uber user; it's a great service outside of cities with decent public transport.  But I have been thinking about how they will justify the $17bn valuation and give people a return on that $1.2bn investment.  At the same time I've been following the autonomous car pieces with interest and I think there is a pretty clear way this can end, especially as Uber have already said they are going
Categories: Fusion Middleware

Oracle E-Business Suite Updates From OpenWorld 2014

Pythian Group - Tue, 2014-10-14 08:29

Oracle OpenWorld has always been my most exciting conference to attend. I always see high energy levels everywhere, and it kind of revs me up to tackle new upcoming technologies. This year I concentrated on attending mostly Oracle E-Business Suite release 12.2 and Oracle 12c Database-related sessions.

On the Oracle E-Business Suite side, I started off with the Oracle EBS Customer Advisory Board Meeting, with great presentations on new features like the new touch-friendly iPad interface in Oracle EBS 12.2.4. This can be enabled by setting the “Self Service Personal Home Page mode” profile value to “Framework Simplified”. We also discussed some pros and cons of the new downtime mode feature of the adop online patching utility, which allows release update packs (like the 12.2.3 and 12.2.4 patches) to be applied without starting up a new online patching session. I will cover more details about that in a separate blog post. In the meantime, take a look at the simplified home page of my 12.2.4 sandbox instance.

Oracle EBS 12.2.4 Simplified Interface

Steven Chan’s presentation on the EBS Certification Roadmap announced upcoming support for the Chrome browser on Android tablets, IE11, Oracle Unified Directory, etc. Oracle did not extend any support deadlines for Oracle EBS 11i or R12 this time, so to all EBS customers on 11i: it’s time to move to R12.2. I also attended a good session on testing best practices for Oracle E-Business Suite, which had a good slide on some extra testing required during the online patching cycle. I am planning a separate blog post with more details on that, as it is an important piece of information that one might ignore. Oracle also announced a new product called Flow Builder, part of the Oracle Application Testing Suite, which helps users test functional flows in Oracle EBS.

On the 12c Database side, I attended great sessions by Christian Antognini on Adaptive Query Optimization and Markus Michalewicz’s sessions on 12c RAC Operational Best Practices and RAC Cache Fusion Internals. Markus’s Cache Fusion presentation has some great recommendations on using _gc_policy_minimum instead of turning off DRM completely using _gc_policy_time=0. Also, there is now a way to control DRM for an object using the package DBMS_CACHEUTIL.
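
For reference, the difference looks like this in practice - the threshold value below is made up for illustration, and underscore parameters should only ever be changed under guidance from Oracle Support:

-- keep DRM enabled but raise the object-access threshold (hypothetical value)
alter system set "_gc_policy_minimum"=15000 scope=spfile sid='*';

-- ... instead of disabling DRM altogether:
-- alter system set "_gc_policy_time"=0 scope=spfile sid='*';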

I also attended sessions on some new, upcoming technologies that are picking up in the Oracle space, like Oracle NoSQL, Oracle Big Data SQL, and the Oracle Data Integrator Hadoop connectors. These products seem to have a great future ahead and have good chances of becoming mainstream on the data warehousing side of businesses.

Categories: DBA Blogs

Let the Data Guard Broker control LOG_ARCHIVE_* parameters!

The Oracle Instructor - Tue, 2014-10-14 08:20

When using the Data Guard Broker, you don’t need to set any LOG_ARCHIVE_* parameter for the databases that are part of your Data Guard configuration. The broker is doing that for you. Forget about what you may have heard about VALID_FOR – you don’t need that with the broker. Actually, setting any of the LOG_ARCHIVE_* parameters with an enabled broker configuration might even confuse the broker and lead to warning or error messages. Let’s look at a typical example about the redo log transport mode. There is a broker configuration enabled with one primary database prima and one physical standby physt. The broker config files are mirrored on each site and spfiles are in use that the broker (the DMON background process, to be precise) can access:

Overview

When connecting to the broker, you should always connect to a DMON running on the primary site. The only exception to this rule is when you want to do a failover: that must be done connected to the standby site. I will now change the redo log transport mode to sync for the standby database. It helps when you think of the log transport mode as an attribute (respectively a property) of a certain database in your configuration, because that is how the broker sees it also.

 

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated

In this case, physt is a standby database that is receiving redo from primary database prima, which is why the LOG_ARCHIVE_DEST_2 parameter of that primary was changed accordingly:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 30 17:21:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="physt", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="physt" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Configuration for physt

The mirrored broker configuration files on all involved database servers now contain that logxptmode property. No new entry in the spfile of physt is required. The present configuration now allows raising the protection mode:

DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.

The next broker command is done to support a switchover later on while keeping the higher protection mode:

DGMGRL> edit database prima set property logxptmode=sync;
Property "logxptmode" updated

Notice that this doesn’t lead to any spfile entry; only the broker config files store that new property. In case of a switchover, prima will then receive redo with sync.

Configuration for prima

Now let’s do that switchover and see how the broker ensures automatically that the new primary physt will ship redo to prima:

 

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    prima - Primary database
    physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to physt;
Performing switchover NOW, please wait...
New primary database "physt" is opening...
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "physt"

All I did was the switchover command, and without me specifying any LOG_ARCHIVE* parameter, the broker did it all like this picture shows:

Configuration after switchover

In particular, the spfile of the physt database now got the new entry:

 

[oracle@uhesse2 ~]$ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:43:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="prima", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="prima" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Not only is it not necessary to specify any of the LOG_ARCHIVE* parameters, it is actually a bad idea to do so. The guideline here is: let the broker control them! Otherwise it will at least complain about it with warning messages. So as an example of what you should not do:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:57:11 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> alter system set log_archive_trace=4096;

System altered.

Although that is the correct syntax, the broker now gets confused, because that parameter setting is not in line with what is in the broker config files. Accordingly that triggers a warning:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database
      Warning: ORA-16792: configurable property value is inconsistent with database setting

Fast-Start Failover: DISABLED

Configuration Status:
WARNING

DGMGRL> show database prima statusreport;
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
               prima    WARNING ORA-16714: the value of property LogArchiveTrace is inconsistent with the database setting

In order to resolve that inconsistency, I will do it also with a broker command – which is what I should have done instead of the alter system command in the first place:

DGMGRL> edit database prima set property LogArchiveTrace=4096;
Property "logarchivetrace" updated
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Thanks to a question from Noons (I really appreciate comments!), let me add the complete list of initialization parameters that the broker is supposed to control. Most, but not all, are LOG_ARCHIVE* parameters:

LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_LAG_TARGET
DB_FILE_NAME_CONVERT
LOG_ARCHIVE_FORMAT
LOG_ARCHIVE_MAX_PROCESSES
LOG_ARCHIVE_MIN_SUCCEED_DEST
LOG_ARCHIVE_TRACE
LOG_FILE_NAME_CONVERT
STANDBY_FILE_MANAGEMENT


Tagged: Data Guard, High Availability
Categories: DBA Blogs

OOW14 : One week in a nutshell

Luc Bors - Tue, 2014-10-14 04:26

Mind Control?

Oracle AppsLab - Mon, 2014-10-13 16:37

Editor’s note: Hey look, a new author. Here’s the first post from Raymond Xie, who joined us nearly a year ago. You may remember him from such concept demos as geo-fencing or Pebble watchface. Raymond has been busy at work and wants to share the work he did with telekinesis. Or something, you decide. Enjoy.

You put on a headband, stare at a ball, tilt your head back-forth and left-right . . . the ball navigates through a simple maze, rushing, wavering, changing colors, and finally hitting the target.

That is the latest creation out of AppsLab: Muse Sphero Driver. When it was first shown at the OAUX Exchange during OOW, it amused many people, who called it a “mind control” game.

The setup consists of Muse – a brain-sensing headband, Sphero – a robotic ball, and a tablet to bridge the two.

Technically, it is your brainwave data (electroencephalography – EEG) driving the Sphero, adjusting its speed and changing its color on a spectrum from RED to BLUE (RED: fast, active; BLUE: slow, calm), while your head gestures (3D accelerometer – ACC) control the direction of the Sphero's movement. Whether you call that “mind control” is up to your own interpretation.

You kind of drive the ball with your mind, but mostly with brainwave noise rather than conscious thought. It is still too early to derive accurate “mind control” from the EEG data of any regular person, for two reasons:

1. For EEG at the scalp level, the signal-to-noise ratio is very poor;
2. The correlation between EEG signals and mental activity still needs to be established.

But it does open up a dialog in HCI, such as voice control vs. mind control (silence); or in robotics, where instead of asking the machine to “see”/“understand”, we can “see”/“understand” ourselves and impersonate the machine with our mind and soul.

While it is difficult to read out the “mind” (arbitrary mental activity) transparently, we think it is quite doable to map your mind onto a set of known states, and use the detected “state” as a command indirectly.

We may do something around this area. So stay tuned.

Meanwhile, you can start to practice yoga or Zen, to get a better signal-to-noise ratio and to set your mind into a given state with ease.

PeopleTools 8.54 will be the last release to certify Crystal Reports

Javier Delgado - Mon, 2014-10-13 15:49
It was just a matter of time. In July 2011, Oracle announced that newly acquired PeopleSoft applications would not include a Crystal Reports license. Some years before, in October 2007, Business Objects had been acquired by SAP. You don't need to read Machiavelli's Il Principe to understand why the license was no longer included.

In order to keep customers' investment in custom reports safe, Oracle kept updating Crystal Reports certifications for those customers who had purchased PeopleSoft applications before that date. In parallel, BI Publisher was improved release after release, providing a viable replacement for Crystal Reports and in many areas surpassing its features.

Now, as announced in My Oracle Support's document 1927865.1, PeopleTools 8.54 will be the last release for which Crystal Reports is certified, and support for report issues will end together with the expiration of support for PeopleSoft 9.1 applications.

PeopleTools 8.54 was released just a couple of months ago, so there is no need to panic, but PeopleSoft application managers would do well to start devising a strategy to convert their existing Crystal Reports into BI Publisher reports.

Extending SaaS with PaaS free eLearning lectures

Angelo Santagata - Mon, 2014-10-13 15:17

Hey all,

Over the last 4 months I've been working with some of my US friends to create an eLearning version of the PTS extending SaaS with PaaS workshop I co-wrote. Well, the time has come: we've published the first 4 eLearning seminars, and I'm sure there will be more coming.

Check em out and let me know what you think and what other topics need to be covered.

https://apex.oracle.com/pls/apex/f?p=44785:24:0::::P24_CONTENT_ID,P24_PREV_PAGE:10390,24

How to measure Oracle index fragmentation

Yann Neuhaus - Mon, 2014-10-13 14:39

At Oracle Open World 2014, or rather the Oaktable World, Chris Antognini has presented 'Indexes: Structure, Splits and Free Space Management Internals'. It's not something new, but it's still something that is not always well understood: how index space is managed, block splits, fragmentation, coalesce and rebuilds. Kyle Hailey has made a video of it available here.
For me, it is the occasion to share the script I use to see if an index is fragmented or not.

First, forget about 'analyze index validate structure', which locks the table, and about DEL_LEAF_ROWS, which counts only the deletion flags that are transient. The problem is not the amount of free space. The problem is where that free space is. If you insert again into the same range of values, that space will be reused. Wasted space occurs only when a lot of rows were deleted in a range where you will not insert again. For example, when you purge old ORDERS, the index on ORDER_DATE – or on an ORDER_ID coming from a sequence – will be affected. Note that the problem occurs only for sparse purges, because completely emptied blocks are reclaimed when needed and can receive rows from another range of values.

I have a script that shows the number of rows per block, as well as used and free space per block, and aggregates that by range of values.

First, let's create a table with a date and an index on it:

SQL> create table DEMOTABLE as select sysdate-900+rownum/1000 order_date, decode(mod(rownum,100),0,'N','Y') delivered, dbms_random.string('U',16) cust_id
  2  from (select * from dual connect by level <= 1e6);

Table created.

SQL> create index DEMOINDEX on DEMOTABLE(order_date);

Index created.

My script shows 10 buckets with begin and end values and, for each of them, the average number of rows per block and the free space:

SQL> @index_fragmentation 

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
24-APR-12 -> 02-AUG-12        377        7163          11        266                        
03-AUG-12 -> 11-NOV-12        377        7163          11        266                        
11-NOV-12 -> 19-FEB-13        377        7163          11        266                        
19-FEB-13 -> 30-MAY-13        377        7163          11        265                        
30-MAY-13 -> 07-SEP-13        377        7163          11        265                        
07-SEP-13 -> 16-DEC-13        377        7163          11        265                        
16-DEC-13 -> 26-MAR-14        377        7163          11        265                        
26-MAR-14 -> 03-JUL-14        377        7163          11        265                        
04-JUL-14 -> 11-OCT-14        377        7163          11        265                        
12-OCT-14 -> 19-JAN-15        376        7150          11        265                        

Note that the script reads the whole table (it can do a sample, but here it is 100%). Not exactly the table, actually, but only the index: it counts the index leaf blocks with the undocumented function sys_op_lbid(), which is used by Oracle to estimate the clustering factor.
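
For illustration, here is a minimal sketch of that sys_op_lbid() technique – not the actual script, and the &index_object_id substitution variable (the OBJECT_ID of DEMOINDEX, taken from USER_OBJECTS) is an assumption of this simplified query:

SQL> select sys_op_lbid(&index_object_id,'L',t.rowid) leaf_block, count(*) rows_per_block
  2  from DEMOTABLE t
  3  where order_date is not null
  4  group by sys_op_lbid(&index_object_id,'L',t.rowid);

Each resulting row is one index leaf block together with the number of index entries it holds; the script then groups those leaf blocks into buckets by range of indexed values and adds the used and free space figures.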

So here I have no fragmentation. All blocks hold about 377 rows with no significant free space. This is because I inserted the rows in increasing order, so the so-called '90-10' block split occurred.

Let's see what I get if I delete most of the rows before the 01-JAN-2014:

SQL> delete from DEMOTABLE where order_date < to_date('01-JAN-2014','DD-MON-YYYY') and delivered='Y';

SQL> @index_fragmentation 

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
25-APR-12 -> 02-AUG-12          4          72          99        266 oooo                   
03-AUG-12 -> 11-NOV-12          4          72          99        266 oooo                   
11-NOV-12 -> 19-FEB-13          4          72          99        266 oooo                   
19-FEB-13 -> 30-MAY-13          4          72          99        265 oooo                   
30-MAY-13 -> 07-SEP-13          4          72          99        265 oooo                   
07-SEP-13 -> 16-DEC-13          4          72          99        265 oooo                   
16-DEC-13 -> 26-MAR-14          4          72          99        265 oooo                   
26-MAR-14 -> 03-JUL-14          4          72          99        265 oooo                   
04-JUL-14 -> 11-OCT-14         46         870          89        265 oooo                   
12-OCT-14 -> 19-JAN-15        376        7150          11        265                        

I have the same buckets and the same number of blocks. But the blocks in the range below 01-JAN-2014 now hold only 4 rows each and a lot of free space. This is exactly what I want to detect: I can check whether that free space will ever be reused.

Here I know I will not enter any orders with a date in the past, so those blocks will never receive another insert. I can reclaim that free space with a COALESCE:

SQL> alter index DEMOINDEX coalesce;

Index altered.
SQL> @index_fragmentation 

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                      
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
25-APR-12 -> 03-OCT-14        358        6809          15         32                        
03-OCT-14 -> 15-OCT-14        377        7163          11         32                        
15-OCT-14 -> 27-OCT-14        377        7163          11         32                        
27-OCT-14 -> 08-NOV-14        377        7163          11         32                        
08-NOV-14 -> 20-NOV-14        377        7163          11         32                        
20-NOV-14 -> 02-DEC-14        377        7163          11         32                        
02-DEC-14 -> 14-DEC-14        377        7163          11         32                        
14-DEC-14 -> 26-DEC-14        377        7163          11         32                        
27-DEC-14 -> 07-JAN-15        377        7163          11         32                        
08-JAN-15 -> 19-JAN-15        371        7056          12         32                        

I still have 10 buckets because that number is defined in my script, but each bucket now covers fewer blocks. I've defragmented the blocks and reclaimed the empty ones.

Time to share the script now. Here it is: index_fragmentation.zip

The script is quite ugly: it's SQL generated by PL/SQL. It's generated because it has to select the index columns. And as I don't want it to be too large, it is neither indented nor commented. However, if you run it with set serveroutput on, you will see the generated query.
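
Just to illustrate the approach (this is not the actual script, only a sketch of generating a query over the index columns and printing it with dbms_output):

SQL> set serveroutput on
SQL> declare
  2    l_cols varchar2(4000);
  3  begin
  4    -- concatenate the index columns in position order
  5    select listagg(column_name, '||''-''||') within group (order by column_position)
  6      into l_cols
  7      from user_ind_columns
  8     where index_name = 'DEMOINDEX';
  9    -- print the generated query instead of executing it
 10    dbms_output.put_line('select '||l_cols||' from DEMOTABLE');
 11  end;
 12  /
select ORDER_DATE from DEMOTABLE

PL/SQL procedure successfully completed.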

How to use it? Just change the owner, table name, and index name. It reads the whole index, so if you have a very large index you may want to change the sample size.
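
A minimal usage sketch (assuming you have edited the owner, table name, and index name inside the script to point at the DEMOTABLE/DEMOINDEX example above):

SQL> @index_fragmentation 

For a very large index, reducing the sample size inside the script keeps the run affordable, at the price of estimated rather than exact per-block figures.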