
Feed aggregator

Rittman Mead Founder Member of the Red Expert Alliance

Rittman Mead Consulting - Thu, 2014-11-20 05:24

At Rittman Mead we’re always keen to collaborate with our peers and partners in the Oracle industry, running events such as the BI Forum in Brighton and Atlanta each year and speaking at user groups in the UK, Europe, USA and around the world. We were pleased therefore to be contacted by our friends over at Amis in the Netherlands just before this year’s Openworld with an idea they had about forming an expert-services Oracle partner network, and to see if we’d be interested in getting involved.

Fast-forward a few weeks and thanks to the organising of Amis’ Robbrecht van Amerongen we had our first meeting at San Francisco’s Thirsty Bear, where Amis and Rittman Mead were joined by the inaugural member partners Avio Consulting, E-VIta AS, Link Consulting, Opitz Consulting and Rubicon Red.

Gathering of the expert alliance partners at #oow14 @Oracle open world.

— Red Expert Alliance (@rexpertalliance) November 3, 2014

The Red Expert Alliance now has a website, news page and partner listing, and as the “about” page details, it’s all about collaborating between first-class Oracle consulting companies:


“The Red Expert Alliance is an international  network of first-class Oracle consulting companies. We are  working together to deliver the maximum return on our customers investment in Oracle technology. We do this by collaborating, sharing and challenging each other to improve ourselves and our customers.

Collaborating with other companies is a powerful way to overcome challenges of today’s fast-paced world and improve competitive advantage. Collaboration provides participants mutual benefits such as shared resources, shared expertise and enhanced creativity; it gives companies an opportunity to improve their performance and operations, achieving more flexibility thanks to shared expertise and higher capacity. Collaboration also fuels innovation by providing more diversity to the workplace which can result in better-suited solutions for customers.”

There’s also a press release, and a partner directory listing our details along with those of the other partners in the alliance. We’re looking forward to working with the other members of the Red Expert Alliance, increasing our shared knowledge and collaborating on Oracle customer projects in the future.

Categories: BI & Warehousing

In Which I (Partially) Disagree with Richard Stallman on Kuali’s AGPL Usage

Michael Feldstein - Wed, 2014-11-19 18:32

Since Michael is making this ‘follow-up blog post’ week, I guess I should jump in.

In my latest post on Kuali and the usage of the AGPL license, the central argument is that this license choice is key to understanding the Kuali 2.0 strategy – protecting KualiCo, a new for-profit entity, in its future work to develop multi-tenant cloud-hosting code.

What I have found interesting is that in most of my conversations with Kuali community people, even those who are disillusioned, there is a sense that the KualiCo creation makes some sense. The real frustration and pushback has been over how decisions are made, how decisions have been communicated, and how the AGPL license choice will affect the community.

In the comments, Richard Stallman chimed in.

As the author of the GNU General Public License and the GNU Affero General Public License, and the inventor of copyleft, I would like to clear up a possible misunderstanding that could come from the following sentence:

“Any school or Kuali vendor, however, that develops its own multi-tenant cloud-hosting code would have to relicense and share this code publicly as open source.”

First of all, thinking about “open source” will give you the wrong idea about the reasons why the GNU AGPL and the GNU GPL work as they do. To see the logic, you should think of them as free software licenses; more specifically, as free software licenses with copyleft.

The idea of free software is that users of software deserve freedom. A nonfree program takes freedom away from its users, so if you want to be free, you need to avoid it. The aim of our copyleft licenses is to make sure all users of our code get freedom, and encourage release of improvements as free software. (Nonfree improvements may as well be discouraged since we’d need to avoid them anyway.)

I don’t use the term “open source”, since it rejects these ethical ideas. Thus I would say that the AGPL requires servers running modified versions of the code to make the source for the running version available, under the AGPL, to their users.

The license of the modifications themselves is a different question, though related. The author of the modifications could release the modifications under the AGPL itself, or under any AGPL-compatible free software license. This includes free licenses which are pushovers, such as the Apache 2.0 license, the X11 license, and the modified BSD license (but not the original BSD license).

Once the modifications are released, Kuali will be able to get them and use them under whatever license they carry. If it is a pushover license, Kuali will be able to incorporate those modifications even into proprietary software. (That’s what makes them pushover licenses.)

However, if the modifications carry the AGPL, and Kuali incorporates them into a version of its software, Kuali will be bound by the AGPL. If it distributes that version, it will be required to do so under the AGPL. If it installs that version on a server, it will be required by the AGPL to make the whole of the source code for that version available to the users of that server.

To avoid these requirements, Kuali would have to limit itself to Kuali’s own code, others’ code released under pushover licenses, plus code for which it gets special permission. Thus, Kuali will not have as much of a special position as some might think.


Dr Richard Stallman
President, Free Software Foundation
Internet Hall-of-Famer
MacArthur Fellow

I appreciate this clarification and Stallman’s participation here at e-Literate, and it is useful to understand the rationale and ethics behind AGPL. However, I disagree with the statement “Thus, Kuali will not have as much of a special position as some might think”. I do not think he is wrong, per se, but the combination of both the AGPL license and the Contributor’s License Agreement (CLA) in my view does ensure that KualiCo has a special position. In fact, that is the core of the Kuali 2.0 strategy, and their approach would not be possible without the AGPL usage.

Note: I have had several private conversations that have helped me clarify my thinking on this subject. Besides Michael with his comment to the blog, Patrick Masson and three other people have been very helpful. I also interviewed Chris Coppola from KualiCo to understand and confirm the points below. Any mistakes in this post, however, are my own.

It is important to understand two different methods of licensing at play – distributing code under the AGPL license and contributing code to KualiCo through a CLA (Kuali has a separate CLA for partner institutions and a Corporate CLA for companies).

  • Distribution – Anyone can download the Kuali 2.0 code from KualiCo and make modifications as desired. If the code is used privately, there is no requirement for distributing the modified code. If, however, a server runs the modified code, the reciprocal requirements of AGPL kick in and the code must be distributed (made available publicly) with the AGPL license or a pushover license. This situation is governed by the AGPL license.
  • Contribution – Anyone who modifies the Kuali 2.0 code and contributes it to KualiCo for inclusion into future releases of the main code grants a license with special permission to KualiCo to do with the code as they see fit. This situation is governed by the CLA and not AGPL.

I am assuming that the future KualiCo multi-tenant cloud-hosting code is not separable from the Kuali code. In other words, the Kuali code would need modifications to allow multi-tenancy.

For a partner institution, their work is governed by the CLA. For a company, however, the choice on whether to contribute code is mutual between that company and KualiCo, in that both would have to agree to sign a CLA. Another company may choose to do this to ensure that bug fixes or Kuali enhancements get into the main code and do not have to be reimplemented with each new release.

For any contributed code, KualiCo can still keep their multi-tenant code proprietary as their special sauce. For distributed code under AGPL that is not contributed under the CLA, the code would be publicly available and it would be up to KualiCo whether to incorporate any such code. If KualiCo incorporated any of this modified code into the main code base, they would have to share all of the modified code as well as their multi-tenant code. For this reason, KualiCo will likely never accept any code that is not under the CLA – they do not want to share their special sauce. Chris Coppola confirmed this assumption.

This setup strongly discourages any company from directly competing with KualiCo (vendor protection) and is indeed a special situation.

The post In Which I (Partially) Disagree with Richard Stallman on Kuali’s AGPL Usage appeared first on e-Literate.

Modifying the Oracle Alta Skin

Shay Shmeltzer - Wed, 2014-11-19 14:38

In the previous blog entries I showed you how to create an ADF project that uses the new Alta UI and then showed you an example of implementing one of the design patterns for a flip card. In this blog/video I'm going to show you how you can further fine tune the look and feel of your Alta application by modifying and extending your skin with CSS.

At the end of the day, this is going to be done in a similar way to how you skinned previous ADF applications. (If you have never done this before, you might want to watch the videos in these two blog entries).

But since the skinning design time support is not completely there for Alta in JDeveloper 12.1.3, there are a couple of tricks. Specifically, when you create the new skin you'll need to change the trinidad-skins.xml file to indicate that it is extending alta-v1 and not skyros-v1 - <extends>alta-v1.desktop</extends>
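As a reference point, a minimal trinidad-skins.xml along those lines might look like the following (the skin id and family names here are illustrative, not from the original post; only the <extends> value comes from the text above):

```xml
<?xml version="1.0" encoding="utf-8"?>
<skins xmlns="">
  <skin>
    <!-- illustrative ids; use the values generated for your own skin -->
    <family>myAltaSkin</family>
    <!-- point the skin at Alta instead of the default Skyros parent -->
    <extends>alta-v1.desktop</extends>
    <style-sheet-name>skins/myaltaskin.css</style-sheet-name>
  </skin>
</skins>
```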

Then the rest of your tasks would be basically the same (although you won't see the overview tab in your skin editor).

So here we go:

Categories: Development

A Weird but True Fact about Textbook Publishers and OER

Michael Feldstein - Wed, 2014-11-19 13:44

As I was perusing David Kernohan’s notes on Larry Lessig’s keynote at the OpenEd conference, one statement leapt out at me:

Could the department of labour require that new education content commissioned ($100m) be CC-BY? There was a clause (124) that suggested that the government should check that no commercial content should exist in these spaces. Was argued down. But we were “Not important” enough to be defeated.

It is absolutely true that textbook publishers do not currently see OER as a major threat. But here’s a weird thing that is also true:

These days, many textbook publishers like OER.

Let me start with the full disclosure. For 18 months, I was an employee of Cengage Learning, one of the Big Three textbook publishers in US higher education. Since then, I have consulted for textbook publishers on and off. Pearson is a current client, and there have been others. Make of that what you will in terms of my objectivity on this subject, but I have been in the belly of the beast. I have had many conversations with textbook publisher employees at all levels about OER, and many of them truly, honestly like it. They really, really like it. As a rule, they don’t understand it. But some of them actually see it as a way out of the hole that they’re in.

This is a relatively recent thing. Not so very long ago, you’d get one of two reactions from employees at these companies, depending on the role of the person you were talking to. Editors would tend to dismiss OER immediately because they had trouble imagining that content that didn’t go through their traditional editorial vetting process could be good (fairly similarly to the way academics would dismiss Wikipedia as something that couldn’t be trusted without traditional peer review). There were occasional exceptions to this, but always for very granular content. Videos, for example. Sometimes editors saw (or still see) OER as extra bits—or “ancillary materials,” in their vernacular—that could be bundled with their professionally edited product. That’s the most that editors typically thought about OER. At the executive level, every so often they would trot out OER on their competitive threat list, look at it for a bit, and decide that no, they don’t see evidence that they are losing significant sales to OER. Then they would forget about it for another six months or so. Publishers might occasionally fight OER at a local level, or even at a state level in places like Washington or California where there was legislation. But in those cases the fight was typically driven by the sales divisions that stood to lose commissions, and they were treated like any other local or regional competition (such as home-grown content development). It wasn’t viewed as anything more than that. For the most part, OER was just not something publishers thought a lot about.

That has changed in US higher education as it has become clear that textbook profits are collapsing as students find more ways to avoid buying the new books. The traditional textbook business is clearly not viable in the long term, at least in that market, at least at the scale and margins that the bigger publishers are used to making. So these companies want to get out of the textbook business. A few of them will say that publicly, but many of them say it among themselves. They don’t want to be out of business. They just want to be out of the textbook business. They want to sell software and services that are related to educational content, like homework platforms or course redesign consulting services. But they know that somebody has to make the core curricular content in order for them to “add value” around that content. As David Wiley puts it, content is infrastructure. Increasingly, textbook publishers are starting to think that maybe OER can be their infrastructure. This is why, for example, it makes sense for Wiley (the publisher, not the dude) to strike a licensing deal with OpenStax. They’re OK about not making a lot of money on the books as long as they can sell their WileyPlus software. Which, in turn, is why I think that Wiley (the dude, not the publisher) is not crazy at all when he predicts that “80% of all US general education courses will be using OER instead of publisher materials by 2018.” I won’t be as bold as he is to pick a number, but I think he could very well be directionally correct. I think many of the larger publishers hope to be winding down their traditional textbook businesses by 2018.

How particular OER advocates view this development will depend on why they are OER advocates. If your goal is to decrease curricular materials costs and increase the amount of open, collaboratively authored content, then the news is relatively good. Many more faculty and students are likely to be exposed to OER over the next four or five years. The textbook companies will still be looking to make their money, but they will have to do so by selling something else, and they will have to justify the value of that something else. It will no longer be the case that students buy closed textbooks because it never occurs to faculty that there is another viable option. On the other hand, if you are an OER advocate because you want big corporations to stay away from education, then Larry Lessig is right. You don’t currently register as a significant threat to them.

Whatever your own position might be on OER, George Siemens is right to argue that the significance of this coming shift demands more research. There’s a ton that we don’t know yet, even about basic attitudes of faculty, which is why the recent Babson survey that everybody has been talking about is so important. And there’s a funny thing about that survey which few people seem to have noticed:

It was sponsored by Pearson.

The post A Weird but True Fact about Textbook Publishers and OER appeared first on e-Literate.

Why You Should Create BPM Suite Business Object Using element (and not complexType)

Jan Kettenis - Wed, 2014-11-19 12:16
In this article I describe why you should always base your business object upon an element, instead of a complexType.

With the Oracle BPM Suite your process data consists of project or process variables. Whenever the variable is based on a component, that component is either defined by some external composite (like a service), or is defined by the BPM composite itself, in which case it will be a Business Object. That Business Object is created directly or is based upon an external schema. Still with me?

When using an external schema you should define the business object based upon an element instead of a complexType. Both are possible, but when you define it based upon a complexType, you will find that any variable using it can neither be used in (XSLT) transformations nor as input to Business Rules.

As an example, see the following schema:
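The schema appears as an image in the original post; a minimal sketch of the kind of schema being described could look like this (names and namespace are illustrative): customer is declared as an element, while order exists only as a complexType.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd=""
            targetNamespace="http://example.com/customer"
            xmlns:tns="http://example.com/customer"
            elementFormDefault="qualified">

  <!-- an element: usable in XSLT transformations and Business Rules -->
  <xsd:element name="customer">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="name" type="xsd:string"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

  <!-- a bare complexType: a Business Object based on this cannot be
       used in XSLT transformations or as input to Business Rules -->
  <xsd:complexType name="order">
    <xsd:sequence>
      <xsd:element name="orderNumber" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>

</xsd:schema>
```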

The customer variable (that is based on an element) can be used in an XSLT transformation, whereas the order variable cannot:

The reason is that XSLT works on elements, not complexTypes.

For a similar reason, the customer variable can be used as input to a Business Rule but the order variable cannot:

Of course, if you are a BPEL developer, you probably already knew this, as there you can only create variables based on elements ;-)

Table TXK_TCC_RESULTS needs to be installed by running the EBS Technology Codelevel Checker (available as patch 17537119).

Vikram Das - Wed, 2014-11-19 12:06
I got this error while trying to apply a patch in R12.2:

 [EVENT]     [START 2014/11/19 09:18:39] Performing database sanity checks
   [ERROR]     Table TXK_TCC_RESULTS needs to be installed by running the EBS Technology Codelevel Checker (available as patch 17537119).

This table TXK_TCC_RESULTS is created in the APPLSYS schema by the latest version of the script delivered by patch 17537119.
So go ahead, download patch 17537119.

Log in as the oracle user on your database node, source the environment, then unzip the patch and run the script it delivers:

cd $ORACLE_HOME/appsutil
unzip p17537119*
$ ./
+===============================================================+
|    Copyright (c) 2005, 2014 Oracle and/or its affiliates.     |
|                    All rights reserved.                       |
|               EBS Technology Codelevel Checker                |
+===============================================================+
Executing Technology Codelevel Checker version: 120.18 
Enter ORACLE_HOME value : /exampler122/oracle/11.2.0 
Enter ORACLE_SID value : exampler122
Bugfix XML file version: 120.0.12020000.16 
Proceeding with the checks... 
Getting the database release ... Setting database release to 
DB connectivity successful. 
The given ORACLE_HOME is RAC enabled.
NOTE: For a multi-node RAC environment
- run this tool on all non-shared ORACLE_HOMEs.
- run this tool on one of the shared ORACLE_HOMEs.

Created the table to store Technology Codelevel Checker results. 
STARTED Pre-req Patch Testing : Wed Nov 19 10:53:00 EST 2014 
Log file for this session : ./checkDBpatch_7044.log 
Got the list of bug fixes to be applied and the ones to be rolled back. Checking against the given ORACLE_HOME 

Opatch is at the required version. 
Found patch records in the inventory. 
All the required one-offs are present in Oracle Database Home 
Stored Technology Codelevel Checker results in the database successfully. 
FINISHED Pre-req Patch Testing : Wed Nov 19 10:53:03 EST 2014 
  1  select owner,table_name from dba_tables
  2* where table_name='TXK_TCC_RESULTS'
SQL> /

OWNER                          TABLE_NAME
------------------------------ ------------------------------
APPLSYS                        TXK_TCC_RESULTS
Once you have done this, restart your patch with adop, adding the parameter restart=yes.
Categories: APPS Blogs

The Case for migrating your Oracle Forms applications to Formspider

Gerger Consulting - Wed, 2014-11-19 08:37
We’ve been getting a lot of inquiries asking whether we have a tool that will automatically convert Oracle Forms applications to Formspider. We don’t have an automatic converter, and we don’t view this as a disadvantage at all. We are solving the modernization problem with a different approach: Formspider does not automatically convert your Forms applications to web apps, but it converts your Forms developers into first-rate web developers.

We think this approach produces the best results and lowest cost in conversion projects. We’ve seen this many times. Formspider is an application development framework just like Oracle Forms, and just like Forms its programming language is 100% PL/SQL. (You can think of it as Oracle Forms built for the 21st century.) Because Formspider works very similarly to Oracle Forms (event-driven architecture, Formspider built-ins instead of Forms built-ins etc…), it is an order of magnitude easier for Oracle Forms developers to learn compared to any other tool.

Nor is a conversion project using Formspider a complete rewrite where you start with a blank page. Just to give a few examples: all of your existing business logic implemented in PL/SQL can be reused in your new application, and because both Forms and Formspider are event driven and use similar APIs, code that manages the UI can be translated fairly easily. In other words, there is no impedance mismatch between the two products, unlike between Forms and (say) ADF, .NET or any other popular web development framework.

There are several problems with automatically converting Oracle Forms applications to another tech stack:

1) The new target tech stack is not known by your team

I cannot overstate how important this is. You end up with an application that you cannot maintain. Your team, who knows the business, who knows what your customers want, who obviously can deliver a successful application to the users, needs training in the new tech stack. This means they go back to being junior developers for quite some time. (Even the most zealous ADF proponents admit to a year or longer learning curve.) This feeling of helplessness is very frustrating for experienced developers who know exactly what they want to implement. It hurts team morale and motivation during the project. It’s also very costly because, well, training costs money and the output of the developers drops significantly for months while their salaries do not.

2) The output of an automatic converter is usually of low quality

Almost always the target tech stack uses the web page paradigm instead of the event-driven architecture of Forms. This impedance mismatch is very difficult to overcome automatically, and the customer ends up with a low-quality application design that no engineer would architect by himself. This makes the application very difficult and expensive to maintain. Moreover, if the target tech stack uses an object-oriented programming language, this adds another magnitude of complexity because PL/SQL is not object oriented. This is why most automatic conversion projects start with the manual process of moving as much code to the database as possible.

3) Automated converters are expensive

To the best of my knowledge these converter tools are not cheap at all. They come with bundled services (they never get the job done 100% automatically), so you also pay for consulting services on top of the conversion fees.

Formspider customers around the world have been upgrading their Forms applications with Formspider successfully for years. The same team who built and maintained the Forms applications builds the application in Formspider. They get excited and motivated because finally they have a tool that they can use to build what they want. They feel empowered instead of helpless. The cost savings we provide can run to hundreds of thousands of dollars depending on the size of your application. I have seen this over and over again: put Formspider in the hands of your Forms developers and they will modernize your Forms applications with the highest return on investment.

Yalim K. Gerger
Categories: Development


Jonathan Lewis - Wed, 2014-11-19 06:47

“You can’t compare apples with oranges.”

Oh, yes you can! The answer is 72,731,533,037,581,000,000,000,000,000,000,000.

SQL> create table fruit(v1 varchar2(30));
SQL> insert into fruit values('apples');
SQL> insert into fruit values('oranges');
SQL> commit;
SQL> begin
  2  	     dbms_stats.gather_table_stats(
  3  		     ownname	      => user,
  4  		     tabname	      =>'FRUIT',
  5  		     method_opt       => 'for all columns size 2'
  6  	     );
  7  end;
  8  /
SQL> select
  2  	     endpoint_number,
  3  	     endpoint_value,
  4  	     to_char(endpoint_value,'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') hex_value
  5  from
  6  	     user_tab_histograms
  7  where
  8  	     table_name = 'FRUIT'
  9  order by
 10  	     endpoint_number
 11  ;

ENDPOINT_NUMBER                                   ENDPOINT_VALUE HEX_VALUE
--------------- ------------------------------------------------ -------------------------------
              1  505,933,332,254,715,000,000,000,000,000,000,000  6170706c65731ad171a7dca6e00000
              2  578,664,865,292,296,000,000,000,000,000,000,000  6f72616e67658acc6c9dcaf5000000
SQL> select
  2  	     max(endpoint_value) - min(endpoint_value) diff
  3  from
  4  	     user_tab_histograms
  5  where
  6  	     table_name = 'FRUIT'
  7  ;

SQL> spool off
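Where do those 36-digit endpoint values come from? As a sketch (my reconstruction of the documented behaviour, not Oracle internals): Oracle takes roughly the first 15 bytes of the string, treats them as a big-endian integer, and stores it rounded to about 15 significant decimal digits, which is why only the first six or seven characters of a string actually matter.

```python
def endpoint_value(s: str) -> int:
    """Sketch of how Oracle builds a histogram endpoint for a string
    (a reconstruction, not Oracle code): take the first 15 bytes as a
    big-endian integer, then keep 15 significant decimal digits."""
    raw = int.from_bytes(s.encode("ascii")[:15].ljust(15, b"\x00"), "big")
    scale = 10 ** max(len(str(raw)) - 15, 0)   # drop all but 15 significant digits
    q, r = divmod(raw, scale)
    return (q + (1 if 2 * r >= scale else 0)) * scale

apples = endpoint_value("apples")
oranges = endpoint_value("oranges")

# Matches the DIFF the last query above computes:
print(oranges - apples)   # 72731533037581000000000000000000000
```

Under this model the two values reproduce the 505,933,…e21 and 578,664,…e21 endpoints in the histogram output, and their difference is exactly the 36-digit number quoted at the top of the post.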

Oracle 12c Unified Auditing - Pure Mode

Continuing our blog series on Oracle 12c Unified Auditing is a discussion of Pure mode. Mixed mode is intended by Oracle to introduce Unified Auditing and provide a transition from traditional Oracle database auditing. Migrating to pure Unified Auditing requires that the database be stopped, the Oracle binary relinked with uniaud_on, and the database restarted. This operation can be reversed if auditing needs to be changed back to Mixed mode.

When changing from Mixed to pure Unified Audit, two key changes occur. The first is that the audit trails are no longer written to their traditional pre-12c audit locations. Auditing is consolidated into the Unified Audit views and stored using Oracle SecureFiles. SecureFiles use a proprietary format, which means that Unified Audit logs cannot be viewed using editors such as vi and may preclude or affect the use of third-party logging solutions such as Splunk or HP ArcSight. As such, Syslog auditing is not possible with pure Unified Audit.

Unified Audit Mixed vs. Pure Mode Audit Locations

System Tables: in Mixed mode these behave the same as 11g; under pure Unified Audit they still exist, but will only contain pre-unified audit records.
The second change is that the traditional audit configurations are no longer used.  For example, traditional auditing is largely driven by the AUDIT_TRAIL initialization parameter.  With pure Unified Audit, the initialization parameter AUDIT_TRAIL is ignored.

Unified Audit Mixed vs. Pure Mode Audit Configurations

System Parameters: in Mixed mode these behave the same as 11g; under pure Unified Audit the parameters still exist, but no longer have any effect.


If you have questions, please contact us at

Reference Tags: Auditing, Oracle Database
Categories: APPS Blogs, Security Blogs

SQL Server 2014: buffer pool extension & corruption

Yann Neuhaus - Wed, 2014-11-19 01:49

I had the opportunity to attend Paul Randal’s session on advanced data recovery techniques at the PASS Summit. During this session one attendee asked Paul whether a page that has just been corrupted can remain in the buffer pool extension (BPE). As you probably know, the BPE only deals with clean pages. Paul hesitated and asked us to test it, and this is exactly what I will do in this blog post.

First, let’s start by limiting the maximum memory that can be used by the buffer pool:


-- Configure SQL Server max memory to 1024 MB
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1024;
GO
RECONFIGURE;
GO


Then we can enable the buffer pool extension feature:


ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(
    -- change the path if necessary
    FILENAME = N'E:\SQLSERVER\MSSQLSERVER\DATA\ssd\buffer_pool.bpe',
    SIZE = 4096 MB
);


I configured a buffer pool extension size of 4x the max memory value for the buffer cache.

At this point I need a database with a big size in order to have a chance to retrieve some data pages in the buffer pool extension part. My AdventureWorks2012 database will fit this purpose:


USE AdventureWorks2012;
GO

EXEC sp_spaceused;




I also have 3 big tables in this database: dbo.bigTransactionHistory_rs1 (2.2 GB), dbo.bigTransactionHistory_rs2 (2.1 GB) and BigTransactionHistory (1.2 GB).




I have a good chance of finding pages related to these tables in the BPE if I perform a big operation like a DBCC CHECKDB on the AdventureWorks2012 database.

After performing a complete integrity check of this database and executing some queries as well, here is the picture of my buffer pool:


SELECT
    CASE is_in_bpool_extension
        WHEN 1 THEN 'SSD'
        ELSE 'RAM'
    END AS location,
    COUNT(*) AS nb_pages,
    COUNT(*) * 8 / 1024 AS size_in_mb,
    COUNT(*) * 100. / (SELECT COUNT(*) FROM sys.dm_os_buffer_descriptors (NOLOCK)) AS percent_
FROM sys.dm_os_buffer_descriptors (NOLOCK)
GROUP BY is_in_bpool_extension
ORDER BY location;




Is it possible to find pages in the buffer pool extension that concern the table bigTransactionHistory_rs1?


SELECT
    bd.page_id, da.page_type, bd.is_modified
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.dm_db_database_page_allocations(DB_ID('AdventureWorks2012'), OBJECT_ID('dbo.bigTransactionHistory_rs1'), NULL, NULL, DEFAULT) AS da
    ON bd.database_id = da.database_id
        AND bd.file_id = da.allocated_page_file_id
        AND bd.page_id = da.allocated_page_page_id
WHERE bd.database_id = DB_ID('AdventureWorks2012')
    AND bd.is_in_bpool_extension = 1
    AND da.page_type IS NULL




I chose the first page, 195426, and corrupted it:


DBCC WRITEPAGE(AdventureWorks2012, 1, 195426, 0, 2, 0x0000);




Then, let's take a look at the page with ID 195426 to see if it still remains in the BPE:


SELECT
    page_id,
    is_in_bpool_extension,
    is_modified
FROM sys.dm_os_buffer_descriptors AS bd
WHERE bd.page_id = 195426




Ok, (fortunately) it does not :-) However, we can notice that the page has not been tagged as "modified" by looking at the sys.dm_os_buffer_descriptors DMV. My guess at this point is that DBCC WRITEPAGE is not a classic way of modifying a page; in fact, the process used by the BPE is not what we might imagine at first sight.

Indeed, moving a page out of the BPE is almost orthogonal to whether the page is dirty: the buffer manager moves a page back into memory when it becomes hot due to an access attempt, and modifying a page requires accessing it first (a particular thanks to Evgeny Krivosheev - SQL Server Program Manager - for this clarification).

We can verify if the page with ID 195426 is really corrupted (remember this page belongs to the bigTransactionHistory_rs1 table):






Note that there are some other corruptions, but in this context they don't matter because I performed some other corruption tests in this database :-)

So the next question could be the following: can a corrupted page be moved from memory into the buffer pool extension? … The following test will give us the answer:


CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
-- Perform queries in order to fill the buffer cache and its extension


We flush dirty pages to disk and then clean the buffer cache. Afterwards, I run some other queries to populate the buffer cache (memory and BPE) with database pages. At this point we have only clean pages. A quick look at the buffer cache with the sys.dm_os_buffer_descriptors DMV gives us the following picture (I recorded a row into a temporary table each time I found page ID 195426 in the buffer cache, either in memory or in the BPE):
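The recorded samples appear as a screenshot in the post; the polling itself could be sketched as follows (the temporary table name and its columns are my assumptions, not from the post):

```sql
-- Hypothetical polling sketch: each time page 195426 shows up in the
-- buffer cache, record whether it sits in memory or in the BPE
IF OBJECT_ID('tempdb..#page_trace') IS NULL
    CREATE TABLE #page_trace
    (
        sample_time           datetime2 NOT NULL DEFAULT SYSDATETIME(),
        page_id               int,
        is_in_bpool_extension bit,
        is_modified           bit
    );

INSERT INTO #page_trace (page_id, is_in_bpool_extension, is_modified)
SELECT bd.page_id, bd.is_in_bpool_extension, bd.is_modified
FROM sys.dm_os_buffer_descriptors AS bd
WHERE bd.database_id = DB_ID('AdventureWorks2012')
      AND bd.page_id = 195426;
```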




We can see that a corrupted page can be part of the buffer pool extension, and this is expected behavior: at this point page ID 195426 is neither dirty nor modified, only corrupted.


Oracle Database 12C Certified Professional SQL Foundations by Steve Ries; Infinite Skills

Surachart Opun - Wed, 2014-11-19 01:36
How do you become an Oracle Certified Associate? That's a good question for people who want to start working with Oracle Database and earn their first Oracle certification. Step 1 - Pass one SQL exam: Oracle Database 12c: SQL Fundamentals 1Z0-061, Oracle Database 11g: SQL Fundamentals I 1Z0-051, or Oracle Database SQL Expert 1Z0-047. Step 2 - Pass the exam Oracle Database 12c: Installation and Administration 1Z0-062.
For Step 1, the Oracle Database 12c: SQL Fundamentals 1Z0-061 exam, you need skills in the SQL SELECT statement, subqueries, data manipulation, and data definition language. These may not be the most exciting topics to learn, but in the real world of working with Oracle Database you have to know them. You can read about them in the Oracle documentation, the Oracle Learning Library, Oracle University and on the Internet. If you are looking for video training on Oracle Database SQL foundations (SQL SELECT, DML, DDL and so on), I recommend Oracle Database - 12C Certified Professional SQL Foundations by Steve Ries. I watched it on O'Reilly, where streaming and downloading are very fast. The video training makes learning the Oracle foundations look easy (watch, then do it yourself), and the instructor covers each topic clearly and in a way that is easy to listen to.

FYI, You can watch some free video.

Video Training: Oracle Database - 12C Certified Professional SQL Foundations
Instructor: Steve Ries
Written By: Surachart Opun
Categories: DBA Blogs

BPM Authentication On Behalf Business User from ADF

Andrejus Baranovski - Wed, 2014-11-19 01:20
This is the next post in my series on ADF/BPM integration; the previous post is available here - Dynamic ADF Buttons Solution for Oracle BPM Outcomes. Here I'm going to describe how you can authenticate with BPM from ADF through a proxy user; on top of that you only need to supply the business user name, no password is required.

There is an API method available - authenticateOnBehalf(context, userName). You must have a valid connection context created, and with the authenticateOnBehalf method you can switch to any valid user name instead of the proxy user. Here is the example for Workflow Context:

Similar example for BPM Context, same authenticateOnBehalf method:

As the proxy user I'm using weblogic. You could configure any other user and consider it the proxy user.

Tasks assigned to redsam1, connected through the proxy user weblogic, are retrieved and displayed in the table:

This is how you can avoid using a password for each business user and simply create the initial connection through a proxy user. Download sample application -

Change Parameter Value In Another Session

Oracle in Action - Tue, 2014-11-18 22:57

RSS content

The value of an initialization parameter in another session can be changed using the procedures SET_BOOL_PARAM_IN_SESSION and SET_INT_PARAM_IN_SESSION provided in the DBMS_SYSTEM package.

Let’s demonstrate:

SQL>conn / as sysdba

SYS> grant dba to hr;

-- Currently parameter HASH_AREA_SIZE is set to 131073 in the HR session

HR> sho parameter hash_area_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
hash_area_size                       integer     131073

-- Find out SID, SERIAL# for the HR session

SYS> select sid, serial#, username from v$session where username='HR';

       SID    SERIAL# USERNAME
---------- ---------- ------------------------------
        54        313 HR

-- Set value of parameter HASH_AREA_SIZE to 131072 in the HR session

SYS> exec dbms_system.SET_INT_PARAM_IN_SESSION(54, 313, 'HASH_AREA_SIZE', 131072);

-- Verify that the value of parameter HASH_AREA_SIZE has been changed in the HR session

HR> sho parameter hash_area_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
hash_area_size                       integer     131072

Similarly, Boolean initialization parameters can be modified using dbms_system.SET_BOOL_PARAM_IN_SESSION.

-- Let's find out the value of parameter SKIP_UNUSABLE_INDEXES in the HR session

HR> sho parameter skip_unusable_indexes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
skip_unusable_indexes                boolean     TRUE

-- Modify the value of parameter SKIP_UNUSABLE_INDEXES to FALSE in the HR session

SYS> exec dbms_system.SET_BOOL_PARAM_IN_SESSION(54, 313, 'skip_unusable_indexes', FALSE);

-- Verify that the value of parameter SKIP_UNUSABLE_INDEXES has been changed to FALSE in the HR session

HR> sho parameter skip_unusable_indexes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
skip_unusable_indexes                boolean     FALSE



Related links:


Database Index 

Find Values Of Another Session’s Parameters






Categories: DBA Blogs

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - Tue, 2014-11-18 18:35

My favourite language is hard to pinpoint: is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - Tue, 2014-11-18 18:35

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Tue, 2014-11-18 18:35

Yesterday we released the new version 2.0 of our product PFCLObfuscate . This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Tue, 2014-11-18 18:35

I will be co-chairing/hosting a twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here . The chat is done over twitter so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - Tue, 2014-11-18 18:35

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled " PFCLScan Reseller Program ". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs

PFCLScan Version 1.3 Released

Pete Finnigan - Tue, 2014-11-18 18:35

We released version 1.3 of PFCLScan our enterprise database security scanner for Oracle a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-11-18 18:35

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, new content and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs