Skip navigation.

Feed aggregator

Multi Sheet Excel Output

Tim Dexter - Thu, 2014-10-02 17:28

I'm on a roll with posts. This blog can be rebuilt ...

I received a question today from Camilo in Colombia asking how to achieve the following.

‘What are my options to deliver excel files with multiple sheets? I know we can split 1 report in multiple sheets in with the BIP Advanced Options, but what if I want to have 1 report / sheet? Where each report in each sheet has a independent data model ….’

Well, it's not going to be easy if you need completely separate data models for each sheet. That would require generating multiple Excel outputs and then merging them somehow.

However, if you can live with a single data model containing multiple data sets, i.e. queries that connect to separate data sources, something like this:


Then we can help. Each query returns its own data set, but they will all be presented together in a single XML output that BIP can then render. Our data structure in the XML output would be:

<DS>
 <G1>
  ...
 </G1>
 <G2>
  ...
 </G2>
 <G3>
  ...
 </G3>
</DS>

Three distinct data sets within the same data output.
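As a toy sketch of that shape (plain Python, not BI Publisher itself; the column names and row values are invented for illustration), three independent result sets can all land under one root element:

```python
import xml.etree.ElementTree as ET

# Each key stands for one query's result set; all groups share one root <DS>,
# mirroring the <DS>/<G1>/<G2>/<G3> structure shown above.
datasets = {
    "G1": [{"DEPT": "SALES"}, {"DEPT": "HR"}],
    "G2": [{"ENAME": "SMITH"}, {"ENAME": "JONES"}],
    "G3": [{"LOC": "DALLAS"}],
}

root = ET.Element("DS")
for group, rows in datasets.items():
    g = ET.SubElement(root, group)
    for row in rows:
        r = ET.SubElement(g, "ROW")
        for tag, text in row.items():
            ET.SubElement(r, tag).text = text

xml_bytes = ET.tostring(root)
```

Each group keeps its own record structure, yet the template engine sees a single document to render.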

To get each to sit on a separate sheet within the Excel output is pretty simple. It depends on how much native Excel functionality you want.

Using an RTF template, you just create the layouts for each data set on a page (or pages) separated by a page break (Ctrl-Enter). At runtime, BIP will place each output onto a separate sheet in the workbook. If you want to name each sheet you can use the <?spreadsheet-sheet-name: xpath-expression?> command. More info here. That's as sophisticated as it gets with RTF templates: no calcs, no formulas, etc. Just put the output on a sheet, bam!
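For example, a hypothetical use of the sheet-naming command (`DEPT_NAME` is an invented element standing in for whatever XPath expression fits your data set):

```
<?spreadsheet-sheet-name: concat('Dept-', DEPT_NAME)?>
```

Placed in the layout for a given data set, this would derive that sheet's tab name from the data itself.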

Using an Excel template you can get more sophisticated with the layout.



This time though, you create the layout for each data set on a separate sheet. In my example, sheet 1 holds the department data; sheet 2, the employee data; and so on. Some conditional formatting has snuck in there.

I have zipped up the sample files here.

FIN!

Categories: BI & Warehousing

OOW14 Day 4 - Internals of the 12c Multitenant Architecture by Vit Spinka

Yann Neuhaus - Thu, 2014-10-02 14:47

This is the session I preferred at Oracle Open World. Well, I'm writing that before going to @ludodba's one, and I'm sure I'll then have two preferred sessions... So Vit Spinka presented the internals of the new multitenant architecture. It's always good to play with internals. Not only for the geeky fun of it, but also because it helps you understand how it works and address issues later.

I had investigated the metadata/object links in my blog post (and thanks to Vit for having referenced it during his presentation). But I learned from Vit what has changed in the redo logs. In his case, the research on redo log internals is not just a curiosity. It's mandatory for his job: he is the principal developer for Dbvisit Replicate, and Dbvisit Replicate reads the redo logs: the MINER process reads them and transforms them into something that can be used by the APPLY process.

So I'll not repeat what is available in his slides:
http://vitspinka.cz/files/CON2282%20Internals%20of%20the%20Oracle%20Database%2012c%20Multitenant%20Architecture.pdf 

Finally, the redo is quite the same, except that it adds the container id (it's one byte only because we are limited to 252 PDBs). Addressing the files is not very special, as a pluggable database is very similar to a transportable tablespace. Addressing the objects is a bit different because we can have the same object id across several PDBs, and this is the reason to introduce the container id in the redo. But that's not so special.
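A toy model of why the container id is needed (invented record values, not Oracle's actual redo format): if the same object id can exist in several PDBs, a lookup keyed by object id alone is ambiguous, while the (container id, object id) pair is unique:

```python
# Two redo records touching the *same* object id in *different* PDBs.
redo_records = [
    {"con_id": 3, "obj_id": 91234, "change": "INSERT"},
    {"con_id": 4, "obj_id": 91234, "change": "UPDATE"},  # same obj_id, other PDB
]

# Keyed by obj_id only: the second record silently overwrites the first.
by_obj = {}
for rec in redo_records:
    by_obj[rec["obj_id"]] = rec

# Keyed by (con_id, obj_id): both records are addressable.
by_con_obj = {(r["con_id"], r["obj_id"]): r for r in redo_records}

assert len(by_obj) == 1      # collision
assert len(by_con_obj) == 2  # unambiguous
```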

The thing to remember about the new multitenant architecture is that:

  • it is not a big change for the instance, which still manages the same objects (sessions, cursors, services, buffers, etc.), just adding a container id
  • it is no change for the database files, as transportable tablespaces already introduced 'plugging'
  • all the magic is in the dictionary, in order to have a shared dictionary for Oracle objects and a private dictionary for application objects.

OCP 12C – RMAN and Flashback Data Archive

DBA Scripts and Articles - Thu, 2014-10-02 14:36

RMAN Enhancements. New Privilege: a new SYSBACKUP privilege is created in Oracle 12c; it allows the grantee to perform BACKUP and RECOVERY operations with RMAN. SQL in RMAN: you can now use SQL statements in RMAN as you would in SQL*Plus. Before: RMAN> SQL “alter system switch logfile”; Now: RMAN> alter system switch logfile; [...]

The post OCP 12C – RMAN and Flashback Data Archive appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Oracle Priority Support Infogram for 02-OCT-2014

Oracle Infogram - Thu, 2014-10-02 13:59

What do businesses need to prepare for cloud migration?

Chris Foot - Thu, 2014-10-02 13:47

Whether to host applications or increase storage, migrating workloads to cloud environments is a consistent trend. However, many database support services are discovering that businesses unfamiliar with the technology often don't know where to begin. 

It appears more enterprises will need guidance in the near future. Business Cloud News conducted a survey of 312 IT professionals across the United Kingdom, Europe and North America, finding 40 percent of participants believe 30 to 70 percent of their IT assets will be hosted in the cloud in the next two years. 

So, what are some pain points interested parties should be cognizant of? 

1. A lack of in-house capabilities 
It's a point organizations have made in the past, but it still deserves acknowledgement. Although in-house IT staff members are capable of carrying out the transition from on-premise systems to a cloud environment, many require extensive instruction before they can do so. Even after training is completed, their lack of experience will likely cause interruptions.

In this regard, outsourcing is a safe choice. Hiring remote DBA experts to work with existing teams to migrate all applications and storage to a cloud infrastructure will expedite the process while also ensuring long-term issues don't arise.

2. Look at what applications are connected to 
Hybrid cloud deployments are quite common among organizations that want to host a portion of their IT assets in the cloud, but retain full back-end control over critical applications.

Suppose a company that leverages a hybrid environment wants to transition its enterprise resource planning (ERP) solution to a hosted environment. However, the ERP's file stores reside on on-premise servers. In order for the ERP solution to undergo migration, the file stores it depends on to operate must be relocated beforehand.

3. Observe indirect connections
Some on-premise deployments may seem isolated from other implementations, yet encounter hindrances when operating in the cloud. TechTarget noted one example detailed by Robert Green, principal cloud strategist at IT consultancy Enfinitum, who stated that one of the firm's clients migrated an application to a public cloud environment without conducting a thorough assessment prior to initiation.

What the company failed to recognize was that on-premise firewalls that assessed and filtered Internet traffic would directly impact its employees' ability to access the cloud-hosted application. When 400 users attempted to use the software, the firewalls became overloaded. In the end, the Enfinitum client lost $10 million because its workers were unable to use the application. 

If these three points are carefully considered, enterprises will be far better positioned to succeed in their cloud migration endeavors.

The post What do businesses need to prepare for cloud migration? appeared first on Remote DBA Experts.

OCP 12C – Privileges

DBA Scripts and Articles - Thu, 2014-10-02 13:27

User Task-Specific Administrative Privileges: to continue with the objective of separation of duties and least privilege, Oracle 12c introduces new administrative privileges, each designed to accomplish specific duties: SYSBACKUP, used for RMAN operations like BACKUP, RESTORE and RECOVER; SYSDG, used to administer Data Guard (in 12c, when you use the DGMGRL command-line interface you are automatically [...]

The post OCP 12C – Privileges appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Microsoft Hadoop: Taming the Big Challenge of Big Data – Part Two

Pythian Group - Thu, 2014-10-02 11:13

Today’s blog post is the second in a three-part series with excerpts from our latest white paper, Microsoft Hadoop: Taming the Big Challenge of Big Data. In our first blog post, we revealed just how much data is being generated globally every minute – and that it has doubled since 2011.

Traditional database management tools were not designed to handle the elements that make big data so much more complex—namely its key differentiators: volume, variety, and velocity.

Volume is the quantity of data, variety refers to the type of data collected (image, audio, video, etc.), and velocity is its expected growth rate. Many people assume that big data always includes high volume and intuitively understand the challenges that it presents. In reality, however, data variety and velocity are much more likely to prevent traditional management tools from being able to efficiently capture, store, report, analyze, and archive the data, regardless of volume.

Download our full white paper which explores the technical and business advantages of effectively managing big data, regardless of quantity, scope, or speed.

 

Categories: DBA Blogs

Oracle Technology Network Wednesday in Review / Thursday Preview - Oracle OpenWorld and JavaOne

OTN TechBlog - Thu, 2014-10-02 10:20

Annual Blogger Meetup was a hoot!  Thanks for joining us.

OTN Lounge activities come to a close today: last chance to learn more about the Oracle ACE Program and the Oracle Community from the program leads. See more below -

Oracle ACE Program – 11:30 to 1:30 - The Oracle ACE Program recognizes prominent advocates. Learn how to become an Oracle ACE, gain community recognition for sharing your knowledge and expertise, advance your career, and network with like-minded peers.

Oracle Community - 11:30 to 3:30 - Learn about the Oracle Technology Network Community Platform, and get a preview of the new badges that are coming soon! Get answers to questions, network with peers, and be rewarded for your expertise in the Oracle Community

Don't forget the OTN team has been busy shooting video and attending sessions.  See what they've been up to so far -

Blogs -
The Java Source Blog
OTN DBA/DEV Watercooler

YouTube Channels -
OTN
Java
OTN Garage
OTN ArchBeat

Follow @JavaOneConf for conference-specific announcements

See you next year!

OCP 12C – Auditing

DBA Scripts and Articles - Thu, 2014-10-02 09:57

Unified Audit Data Trail: Unified Auditing offers a consolidated approach; all the audit data is consolidated in a single place. Unified Auditing consolidates audit records from the following sources: standard auditing, fine-grained auditing (DBMS_FGA), RAC security auditing, RMAN auditing, Database Vault auditing, Oracle Label Security auditing, Oracle Data Mining, Oracle Data Pump, and Oracle SQL*Loader. In [...]

The post OCP 12C – Auditing appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Kuali Foundation: Clarification on future proprietary code

Michael Feldstein - Thu, 2014-10-02 08:35

Well that was an interesting session at Educause as described at Inside Higher Ed:

It took the Kuali leadership 20 minutes to address the elephant in the conference center meeting room.

“Change is ugly, and change is difficult, and the only difference here is you’re going to see all the ugliness as we go through the change because we’re completely transparent,” said John F. (Barry) Walsh, a strategic adviser for the Kuali Foundation. “We’re not going to hide any difficulty that we run into. That’s the way we operate. It’s definitely a rich environment for people who want to chuck hand grenades. Hey, have a shot — we’re wide open.” [snip]

Walsh, who has been dubbed the “father of Kuali,” issued that proclamation after a back-and-forth with higher education consultant Phil Hill, who during an early morning session asked the Kuali leadership to clarify which parts of the company’s software would remain open source.

While the article describes the communication and pushback issues with Kuali’s creation of a for-profit entity quite well (go read the whole article), I think it’s worth digging into what Carl generously describes as a “back-and-forth”. What happened was that there was a slide describing the relicensing of Kuali code as AGPL, and the last bullet caught my attention:

  • AGPL > GPL & ECL for SaaS
  • Full versions always downloadable by customers
  • Only feature “held back” is multi-tenant framework

If you need a read on the change of open source licenses and why this issue is leading to some of the pushback, go read Chuck Severance’s blog post.

Does ‘held back’ mean that the multi-tenant framework to enable cloud hosting partially existed but is not moving to AGPL, or does it mean that the framework would be AGPL but not downloadable by customers, or does it mean that the framework is not open source? That was the basis of my question.

Several Kuali Foundation representatives attempted to indirectly answer the question without addressing the license.

“I’ll be very blunt here,” Walsh said. “It’s a commercial protection — that’s all it is.”

The back-and-forth involved trying to get a clear answer, and the answer is that the multi-tenant framework to be developed / owned by KualiCo will not be open source – it will be proprietary code. I asked Joel Dehlin for additional context after the session, and he explained that all Kuali functionality will be open source, but the infrastructure to allow cloud hosting is not open source.

This is a significant clarification on the future model. While Kuali has always supported an ecosystem with commercial partners that can offer proprietary code, this is the first time that Kuali itself will offer proprietary, non open source code.[1]

What is not clear is whether any of the “multi-tenant framework” already exists and will be converted to a proprietary license or if all of this code will be created by KualiCo from the ground up. If anyone knows the answer, let me know in the comments.

From IHE:

“Unfortunately some of what we’re hearing is out of a misunderstanding or miscommunication on our part,” said Eric Denna, vice president of IT and chief information officer at the University of Maryland at College Park. “Brad [Wheeler, chair of the foundation’s board of directors,] and I routinely are on the phone saying, ‘You know, we have day jobs.’ We weren’t hired to be communications officers.”

Suggestion: Simple answers such as “What ‘held back’ means is that the framework will be owned by KualiCo and not open source and therefore not downloadable” would avoid some of the perceived need for communication officers.

  1. Kuali Foundation is partial owner and investor in KualiCo.

The post Kuali Foundation: Clarification on future proprietary code appeared first on e-Literate.

Day 3 at Oracle Open World 2014 - Cloud: Private or Public?

Yann Neuhaus - Thu, 2014-10-02 07:16

One of the main subjects of this year's Oracle OpenWorld was the Cloud. In this post I will share some thoughts on this: is the Cloud a dream, a reality, fog, or smoke?

 

Before

Before going to OOW 2014 I did not have a fixed position about the Cloud. I had some questions, for instance about security, like, I guess, other people. I saw several sessions and I started to write this blog two days ago to summarize my reflections. Fortunately I was in a session on Wednesday where the discussion was "Private Cloud? Public Cloud?" and after that I had to update this post, for the better.

 

Now

Now, as we can choose between a public, private or even mixed Cloud, the sky is less ... cloudy.

I am a bit more convinced. I would use the Public Cloud for the development environment, as this will reduce implementation time and give flexibility to the developers. Why not also for training, as we do not have to use production data? For QA, clone and production I would rather use the Private Cloud for the time being. In both cases we will benefit from great agility, low expenses, better utilization of resources and so on.

But there are still some questions:

  • Although we can install the Private Cloud like the Public one, what will be the impact on the budget?

  • What will be the impact on our day to day work?

  • What will be our role as an integrator working more on the infrastructure layer?

  • Do we have a view of, and can we have some valuable discussions with, the people who manage our system in the Public Cloud in case we hit issues? Can a good relationship be built?

  • Today we increase our skills while working, also in Dev, on the on-premises installation. During the integration phase we can sometimes hit issues that have to be solved. This, of course, avoids having the same problems later in the other environments. How will this work in case we use a mixed environment, Public for Dev, Private for Prod?

  • Who will do the load and stress tests in case the Public Cloud is used?

  • In a validated system we have to follow strict procedures (e.g. GxP). How is this managed in the Public Cloud? Are we still compliant? Are the people managing the Public Cloud trained in the company's procedures? The last session confirmed that in that case we have to use the Private Cloud.

  • Besides the regulatory rules in the different countries, what will be the acceptance in the different countries? Will it be the same in Europe as in the USA? Do we have the same culture about this subject?

  • Another question, which is not related to the technology: how should the future young IT people be positioned? What kind of training do they have to follow? Is this already in place in the universities? Are the teachers aware of those changes?

I've got lots of information on the technical side but I am also interested on the social and human point of view. It would be interesting to have the view(s) from a sociologist or philosopher as this new approach will certainly have an impact on our life - like the Big Data will have.

Probably I missed some information in all this flow and I don't have all the keys. But I think we are at the beginning of a revolution? An evolution? Or an opportunity.

Let's see what will happen in the next few years, and think about it in the meantime...

Free Oracle Special Edition eBook - Enterprise Mobility

Oracle Mobile is about simplifying enterprise mobility, giving customers a complete mobile solution and the choice and flexibility to develop their own unique enterprise mobile strategy. Whether you...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Index Compression Part V: 12c Advanced Index Compression (Little Wonder)

Richard Foote - Thu, 2014-10-02 04:28
I’ve finally managed to find some free time in the evening to write a new blog piece :) This will have to be the record for the longest time between parts in a series, having written Part IV of this Index Compression series way way back in February 2008 !! Here are the links to the previous articles […]
Categories: DBA Blogs

Free Formspider Webinar (an APEX Alternative)

Gerger Consulting - Thu, 2014-10-02 04:11
We'd like to invite you to our free Formspider webinar on October 7th. In the webinar, you'll get a chance to find out how Formspider can help you build great web applications with PL/SQL and why we think Formspider is better than APEX.

More importantly, current Formspider customers will join the webinar and talk about their experience with the product.

The webinar is free but space is limited. Sign up now.
Categories: Development

Oracle OEM Cloud Control 12c upgrade to 12.1.0.4

Yann Neuhaus - Thu, 2014-10-02 02:12

In this blog post, I will describe how to upgrade from Oracle Enterprise Manager Cloud Control 12.1.0.3 to OEM 12.1.0.4.0. I have already described the main new features of Cloud Control 12.1.0.4 version in an earlier post (Oracle OEM Cloud Control 12.1.0.4 - the new features). The first pre-requisite is to apply the patch 11061801 on the repository database in 11.2.0.3 version, using the classical opatch apply method. Then, we can begin the upgrade phase.

First, we should explicitly stop the OMS jvmd and adp engines:

 

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl extended oms jvmd stop -all

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.

Please enter the SYSMAN password:

Stopping all Engines

No engines found for this operation

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl extended oms adp stop -a

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.

No valid registry entry found for verb jv

 

Then we stop the OMS:

 

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl stop oms -all

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.

All rights reserved.

Stopping WebTier...

WebTier Successfully Stopped

Stopping Oracle Management Server...

Oracle Management Server Successfully Stopped

AdminServer Successfully Stopped

Oracle Management Server is Down

 

We stop the management agent:

 

oracle@vmtestoraem12c:/home/oracle/ [agent12c] emctl stop agent

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.

All rights reserved.

Stopping agent ..... stopped.


Finally, after unzipping the 12.1.0.4 binary files, we can run the installer:

 

cc1

 

We choose not to receive security updates:

 

cc2

cc3

 

 

We choose to skip the updates:

 

cc4

 

All the prerequisite checks have succeeded :-)

 

cc5

 

We select a One System Upgrade and the Oracle_Home where the 12.1.0.3 version is installed:

 

cc7

 

We select the new Middleware Home:

 

cc8

 

We enter the administration passwords:

 

cc9

 

The installer reminds you to check that you have correctly patched the repository database. Let's verify that it is correct:

 

Interim patches (1) :

Patch 11061801 : applied on Mon Aug 04 16:52:51 CEST 2014

Unique Patch ID: 16493357

Created on 24 Jun 2013, 23:28:20 hrs PST8PDT

Bugs fixed: 11061801

 

cc10

 

We did not copy the emkey to the repository, so we have to run:

 

oracle@vmtestoraem12c:/u01/app/oracle/MiddleWare_12103/oms/bin/ [oms12c] emctl config emkey -copy_to_repos_from_file -repos_conndesc '"(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=vmtestoraem12c.it.dbi-services.com)(PORT=1521)))(CONNECT_DATA=(SID=OMSREP)))"' -repos_user sysman -emkey_file /u01/app/oracle/MiddleWare_12103/oms/sysman/config/emkey.ora

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.

All rights reserved.

Enter Admin User's Password :

Enter Enterprise Manager Root (SYSMAN) Password :

The EMKey has been copied to the Management Repository.

This operation will cause the EMKey to become unsecure.

 

After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos":

 

cc11

 

We select Yes to let the installer fix the issue automatically:

 

cc12

 

We select Next:

 

cc13

 

We can select additional plugins:

 

cc14

 

We enter the WebLogic password:

 

cc15

 

We select install:

 

cc16

 

And finally we run the allroot.sh script connected as root:

 

cc17

 

The upgrade is successful! Let's check the OMS status:

 

oracle@vmtestoraem12c:/u01/app/oracle/MiddleWare_12cR4/oms/ [oms12c] emctl status oms -details

Oracle Enterprise Manager Cloud Control 12c Release 4

Copyright (c) 1996, 2014 Oracle Corporation.

All rights reserved.

Enter Enterprise Manager Root (SYSMAN) Password :

Console Server Host : vmtestoraem12c.it.dbi-services.com

HTTP Console Port : 7789

HTTPS Console Port : 7801

HTTP Upload Port : 4890

HTTPS Upload Port : 4901

EM Instance Home : /u01/app/oracle/gc_inst/em/EMGC_OMS1

OMS Log Directory Location : /u01/app/oracle/gc_inst/em/EMGC_OMS1/sysman/log

OMS is not configured with SLB or virtual hostname

Agent Upload is locked.

OMS Console is locked.

Active CA ID: 1

Console URL: https://vmtestoraem12c.it.dbi-services.com:7801/em

Upload URL: https://vmtestoraem12c.it.dbi-services.com:4901/empbs/upload

WLS Domain Information

Domain Name : GCDomain

Admin Server Host : vmtestoraem12c.it.dbi-services.com

Admin Server HTTPS Port: 7102

Admin Server is RUNNING

Oracle Management Server Information

Managed Server Instance Name: EMGC_OMS1

Oracle Management Server Instance Host: vmtestoraem12c.it.dbi-services.com

WebTier is Up

Oracle Management Server is Up

BI Publisher is not configured to run on this host.

 

Now we have access to the Enterprise Manager Cloud Control 12.1.0.4:

 

cc18

 

The next step consists of upgrading the management agents. From the Setup menu, we select Upgrade Agents:

 

cc19

cc20

 

The management agent is detected:

 

cc21

 

The operation is successful:

 

cc22

 

The update to the 12.1.0.4 Enterprise Manager version did not cause any problems, and the new version has a useful feature that checks the correct patching of the Enterprise Manager repository database.

Run #em12c on #db12c? – Discussion from Oracle Open World (MOS Note: 1920632.1)

DBASolved - Wed, 2014-10-01 13:38

Ok folks, I've been here at Oracle Open World for a few days now. In that time, I've had numerous conversations about running Oracle Enterprise Manager 12c on Oracle Database 12c. I will be honest and say that I've enjoyed these conversations; however, after about the fourth time I decided I needed to write a quick post on the explanation discussed in these conversations.

Earlier this year (August) I wrote a post about what came out of the OEM CAB in May 2014 and how to get OEM 12c to work on DB12c. The concept of running OEM 12c on DB12c, pluggable or not, has many people excited and looking forward to configuring OEM in that very configuration. Heck, I've even installed it for a few customers in that configuration (non-PDB). So I'm a bit sad in having to say this: ORACLE DATABASE 12c SUPPORT FOR THE REPOSITORY DATABASE IS TEMPORARILY SUSPENDED! I say this due to My Oracle Support (MOS) Note: 1920632.1.

Note 1920632.1 states the following:

Due to some recently discovered scenarios, we (Oracle) are temporarily suspending the certification of DB 12.1.0.1 and DB 12.1.0.2 as a Certified Repository version for EM 12c R4 until additional testing is complete.

Now what does this mean for those customers and clients who have already built their OEM 12c repository on DB 12c (12.1.0.1 or 12.1.0.2)? The MOS note outlines what to do in its action section:

Until testing is complete on the 12c Database, Oracle recommends using DB 11.2.0.4 as the EM 12c R4 Repository. 

If you are currently running a 12c DB repository, please be aware that additional testing is underway and there are currently no bugs or patches required; but if testing proves a patch is required, we will update this announcement.  You do not need to deinstall EM 12c or move the repository to an 11.2.0.4 database.

I sure hope Oracle quickly finishes testing and can restore support for DB 12c as the OEM repository. In the meantime, everyone should know about this note number and keep it in mind when making architecture changes related to their OEM 12c environment.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

OCP 12C – In Database Archiving and Temporal Validity

DBA Scripts and Articles - Wed, 2014-10-01 13:14

In Database Archiving: In Database Archiving is a new feature of Oracle 12c meant to solve the management of historical data inside the database. As its name says, this functionality leaves data in the database, so it remains accessible if you need it. With In Database Archiving, historical data remains in the database but is invisible to [...]

The post OCP 12C – In Database Archiving and Temporal Validity appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

OOW14 Day 3 - 12c Adaptive Bitmap Pruning

Yann Neuhaus - Wed, 2014-10-01 13:10

I'm currently following the session 'Real-World Performance of Star and Snowflake Schemas' with Michael Hallas, Andrew Holdsworth and John Clarke. It's really a good presentation. The Real-World Performance team tries to spread messages about how to design for performance. But today I'll not blog about what I've seen, but about what I've not seen. Everybody talks about those nice 12c features that are useful in BI workloads, such as adaptive joins, adaptive parallel distribution, vector 'in-memory' aggregation, etc. Nobody talks about Adaptive Bitmap Pruning.

If you google for it there is only one result (at least before my blog is published), which is the patent about all those new CBO features that came in 12.1.0.1.

And when I attend a session that shows star transformation and execution plans with and without a temporary table, I'm frustrated that I don't see anything about that great feature that stabilizes our BI reports on star schemas. I'm preparing our tuning workshop (dates here - 10% discount if you book before the end of the year) and it's something I can't skip when talking about star transformation and bitmap indexes.

So let's have a look at it. If you want a refresher about star transformation, please just wait for the next SOUG newsletter. But if you're already familiar with it, this is for you.

Let's have a look at an execution plan in 12c after running a query on a star schema. It is the same as in 11g, except that we have that grayed ‘STATISTICS COLLECTOR’. Star transformation is good when the predicate is selective enough to filter out most rows. Imagine that the cardinality estimation was wrong and most of the FACT rows have the required value. This is what happened here, and the optimizer has chosen to stop iterating in that bitmap branch. It just ignores the predicate at that step; the join back to the dimension Cartesian join will filter it anyway.

If you check the execution plan with predicates, you will see the predicate on the dimension in the two table accesses.

12cAdaptiveBitmapPruning.png

Look at the end. When the statistics collector has seen that the threshold has been passed, it has decided to skip that bitmap branch. This is Adaptive Bitmap Pruning. The bitmap branch is good only if it helps to filter out a lot of rows. If that's not the case, then it's just overhead and it is skipped, coming back - for that branch only - to the behavior we would have if star transformation were disabled.

As with the other adaptive plans, the threshold is calculated at parse time.
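The skip-or-keep decision can be sketched as a toy model (invented numbers and names, not Oracle's actual algorithm): a collector counts rows flowing through the bitmap branch and, once the parse-time threshold is exceeded, the branch is abandoned:

```python
def plan_bitmap_branch(rows_seen: int, threshold: int) -> str:
    """Toy statistics collector: decide whether the bitmap branch pays off.

    If more rows than the threshold flow through, the predicate is not
    selective, so the branch is skipped and the filtering is left to the
    join-back to the dimension.
    """
    if rows_seen > threshold:
        return "skip bitmap branch (filter at join-back)"
    return "keep bitmap branch"

# Selective predicate: few rows pass, the bitmap branch pays off.
assert plan_bitmap_branch(rows_seen=50, threshold=1000) == "keep bitmap branch"
# Non-selective predicate: most rows pass, the branch is pure overhead.
assert plan_bitmap_branch(rows_seen=50_000, threshold=1000).startswith("skip")
```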

See more details about the inflection point in a previous blog post about Adaptive Joins.