
Feed aggregator

SQL Tuning: Thinking in Sets / How and When to be Bushy

Dominic Brooks - Thu, 2015-11-05 11:48

Below is a SQL statement from a performance problem I was looking at the other day.

This is a real-world bit of SQL which has been slightly simplified and sanitised but, I hope, without losing the real-worldliness of it or the points driving this article.

You don’t really need to be familiar with the data or table structures (I wasn’t) as this is a commentary on SQL structure and why sometimes a rewrite is the best option.

SELECT bd.trade_id
,      bdp.portfolio_id
,      bd.deal_id 
,      bd.book_id
,      pd.deal_status   prev_deal_status
FROM   deals            bd
,      portfolios       bdp
,      deals            pd
,      portfolios       pdp
-- today's data
WHERE  bd.business_date         = :t_date
AND    bd.src_bus_date          < :t_date
AND    bd.type                  = 'Trade'
AND    bdp.ref_portfolio_id     = bd.book_id
-- yesterday's data
AND    pd.business_date         = :y_date
AND    pd.type                  = 'Trade'
AND    pdp.ref_portfolio_id     = pd.book_id
-- some join columns
AND    bd.trade_id              = pd.trade_id
AND    bdp.portfolio_id         = pdp.portfolio_id;

There is no particular problem with how the SQL statement is written per se.

It is written in what seems to be a standard developer way.

Call it the “lay everything on the table” approach.

This is a common developer attitude:

“Let’s just write a flat SQL structure and let Oracle figure the best way out.”

Hmmm… Let’s look at why this can be a problem.

First, what is the essential requirement of the SQL?

Compare the information (deal status) that we have today with what we had yesterday, for a subset of deals/trades.

Something like that anyway…

So … What is the problem?

The Optimizer tends to rewrite and transform any SQL we give it anyway and tries to flatten it out.

The SQL above is already flat so isn’t that a good thing? Shouldn’t there be less work for the optimizer to do?

No, not necessarily. Flat SQL immediately restricts our permutations.

The problem comes with how Oracle can take this flat SQL and join the relevant row sources to efficiently get to the relevant data.

Driving Rowsource

Let’s assume that we should drive from today’s deal statuses (where we actually drive from will depend on what the optimizer estimates / costs).

FROM   deals            bd
,      portfolios       bdp
-- today's data
WHERE  bd.business_date         = :t_date
AND    bd.src_bus_date          < :t_date
AND    bd.type                  = 'Trade'
AND    bdp.ref_portfolio_id     = bd.book_id

Where do we go from here?

We want to join from today’s deals to yesterday’s deals.

But the data for each of the two sets of deals is established via a two-table join (DEALS & PORTFOLIOS).


We want to join on TRADE_ID which comes from the two DEALS tables and PORTFOLIO_ID which comes from the two PORTFOLIOS tables.

FROM   ...
,      deals            pd
,      portfolios       pdp
WHERE  ...
-- yesterday's data
AND    pd.business_date         = :y_date
AND    pd.type                  = 'Trade'
AND    pdp.ref_portfolio_id     = pd.book_id

And the two sets are joined via:

AND    bd.trade_id              = pd.trade_id
AND    bdp.portfolio_id         = pdp.portfolio_id

So from our starting point of today’s business deals, we can either go to PD or to PDP, but not to both at the same time.

Hang on? What do you mean not to both at the same time?

For any multi-table join involving more than two tables, the Optimizer evaluates the different join tree permutations.

Left-Deep Tree

Oracle has a tendency to choose what is called a left-deep tree.

If you think about a join between two rowsources (left and right), a left-deep tree is one where the second child (the right input) is always a table.

NESTED LOOPS are always left-deep.

HASH JOINs can be left-deep or right-deep (normally left-deep, as already mentioned).

Zigzags are also possible, a mixture of left-deep and right-deep.

Below is an image of a left-deep tree based on the four-table join above.


Here is an execution plan which that left-deep tree might represent:

| Id  | Operation                          | Name          |
|   0 | SELECT STATEMENT                   |               |
|   1 |  NESTED LOOPS                      |               |
|   2 |   NESTED LOOPS                     |               |
|   3 |    NESTED LOOPS                    |               |
|   4 |     NESTED LOOPS                   |               |
|*  5 |      TABLE ACCESS BY INDEX ROWID   | DEALS         |
|*  6 |       INDEX RANGE SCAN             | DEALS_IDX01   |
|   7 |      TABLE ACCESS BY INDEX ROWID   | PORTFOLIOS    |
|*  8 |       INDEX UNIQUE SCAN            | PK_PORTFOLIOS |
|*  9 |     TABLE ACCESS BY INDEX ROWID    | DEALS         |
|* 10 |      INDEX RANGE SCAN              | DEALS_IDX01   |
|* 11 |    INDEX UNIQUE SCAN               | PK_PORTFOLIOS |
|* 12 |   TABLE ACCESS BY INDEX ROWID      | PORTFOLIOS    |

Predicate Information (identified by operation id): 
   5 - filter("BD"."TYPE"='Trade' AND "BD"."SRC_BUS_DATE"<:t_date) 
   6 - access("BD"."BUSINESS_DATE"=:t_date) 
   8 - access("BD"."BOOK_ID"="BDP"."REF_PORTFOLIO_ID") 
   9 - filter(("BD"."TYPE"='Trade' AND "BD"."TRADE_ID"="PD"."TRADE_ID")) 
  10 - access("PD"."BUSINESS_DATE"=:y_date) 
  11 - access("PD"."BOOK_ID"="PDP"."REF_PORTFOLIO_ID") 
  12 - filter("BDP"."PORTFOLIO_ID"="PDP"."PORTFOLIO_ID")

Right-Deep Tree

A right-deep tree is one where the first child, the left input, is a table.

Illustration not specific to the SQL above:


Bushy Tree

For this particular SQL, this is more what we are looking for:


The essence of the problem is that we cannot get what is called a bushy join, not with the original flat SQL.

The Optimizer cannot do this by default. And this isn’t an approach that we can get at by hinting (nor would we want to if we could, of course!).

Rewrite Required

To get this bushy plan, we need to rewrite our SQL to be more explicit around the set-based approach required.

WITH subq_curr_deal AS
     (SELECT /*+ no_merge */
             bd.trade_id
      ,      bd.deal_id
      ,      bd.book_id
      ,      bdp.portfolio_id
      FROM   deals      bd
      ,      portfolios bdp
      WHERE  bd.business_date         = :t_date
      AND    bd.src_bus_date          < :t_date
      AND    bd.type                  = 'Trade'
      AND    bdp.ref_portfolio_id     = bd.book_id)
,    subq_prev_deal AS
     (SELECT /*+ no_merge */
             pd.trade_id
      ,      pd.deal_status
      ,      pdp.portfolio_id
      FROM   deals      pd
      ,      portfolios pdp
      WHERE  pd.business_date         = :y_date
      AND    pd.type                  = 'Trade'
      AND    pdp.ref_portfolio_id     = pd.book_id)
SELECT cd.trade_id
,      cd.portfolio_id
,      cd.deal_id
,      cd.book_id 
,      pd.deal_status prev_deal_status
FROM   subq_curr_deal cd
,      subq_prev_deal pd
WHERE  cd.trade_id             = pd.trade_id
AND    cd.portfolio_id         = pd.portfolio_id;

How exactly does the rewrite help?

By writing the SQL deliberately with this structure, by using WITH to create subqueries in conjunction with no_merge, we are deliberately forcing the bushy join.
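
If you want to confirm the shape you actually get, one option (a sketch only, not part of the original example) is to pull the plan of the statement you have just executed from the cursor cache with DBMS_XPLAN:

-- Run the rewritten statement first, then in the same session:
select *
from   table(dbms_xplan.display_cursor(format => 'BASIC +PREDICATE'));

The two VIEW operators over the no_merge'd subqueries, with a single HASH JOIN above them, are the signature of the bushy shape we are after.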

This is an example execution plan that this bushy tree might represent.

| Id  | Operation                          | Name          |
|   0 | SELECT STATEMENT                   |               |
|*  1 |  HASH JOIN                         |               |
|   2 |   VIEW                             |               |
|   3 |    NESTED LOOPS                    |               |
|   4 |     NESTED LOOPS                   |               |
|*  5 |      TABLE ACCESS BY INDEX ROWID   | DEALS         |
|*  6 |       INDEX RANGE SCAN             | DEALS_IDX01   |
|*  7 |      INDEX UNIQUE SCAN             | PK_PORTFOLIOS |
|   8 |     TABLE ACCESS BY INDEX ROWID    | PORTFOLIOS    |
|   9 |   VIEW                             |               |
|  10 |    NESTED LOOPS                    |               |
|  11 |     NESTED LOOPS                   |               |
|* 12 |      TABLE ACCESS BY INDEX ROWID   | DEALS         |
|* 13 |       INDEX RANGE SCAN             | DEALS_IDX01   |
|* 14 |      INDEX UNIQUE SCAN             | PK_PORTFOLIOS |
|  15 |     TABLE ACCESS BY INDEX ROWID    | PORTFOLIOS    |

Predicate Information (identified by operation id): 
   5 - filter(("BD"."TYPE"='Trade' AND "BD"."SRC_BUS_DATE"<:t_date)) 
   6 - access("BD"."BUSINESS_DATE"=:t_date ) 
   7 - access("BD"."BOOK_ID"="BDP"."REF_PORTFOLIO_ID") 
  12 - filter("PD"."TYPE"='Trade') 
  13 - access("PD"."BUSINESS_DATE"=:y_date) 
  14 - access("PDP"."REF_PORTFOLIO_ID"="PD"."BOOK_ID")

Is this a recommendation to go use WITH everywhere?

No. It is about recognising when the requirement calls for a deliberately structured, set-based approach.

What about the no_merge hint?

The no_merge hint is a tricky one. This is not necessarily a recommendation, but its usage here prevents the Optimizer from flattening the subqueries. I often find it goes hand-in-hand with this sort of deliberately structured SQL for that reason, and the same often goes for push_pred.

Do developers need to know about left deep, right deep and bushy?

No, not at all.


It helps to think in sets and about what sets of data you are joining and recognise when SQL should be deliberately structured.

Further Reading

Links for 2015-11-04

Categories: DBA Blogs

Microsoft Outlook : When Bad UX Attacks!

Tim Hall - Thu, 2015-11-05 01:39

I guess there are lots of problems with the User eXperience (UX) of Microsoft Outlook, but the one that kills me is the popup menu in the folders pane.

I’m not sure how other people use this, but for me, the number one thing I do is “Delete All”, closely followed by “Mark All as Read”. I have a bunch of rules that “file” irrelevant crap, which I later scan through and typically delete en masse.

So what’s the problem?

The folder operations are higher up the menu, so I’m constantly doing “Delete Folder”, rather than “Delete All”, which drives me mad. Especially when I don’t notice and all my rules start failing.

Like I said, I don’t know how other people use this stuff, but I would hazard a guess that the clean-up operations are used more frequently than the actual folder maintenance operations. This is one situation where having the most frequently used sections of the menu promoted to the top would be really handy.

Of course, I could just pay more attention… :)




Column Groups

Jonathan Lewis - Thu, 2015-11-05 00:48

I think column groups can be amazingly useful in helping the optimizer to generate good execution plans because of the way they supply better details about cardinality; unfortunately we’ve already seen a few cases (don’t forget to check the updates and comments) where the feature is disabled, and another example of this appeared on OTN very recently.

Modifying the example from OTN to make a more convincing demonstration of the issue, here’s some SQL to prepare a demonstration:

create table t1 ( col1 number, col2 number, col3 date);

insert  into t1
select 1 ,1 ,to_date('03-Nov-2015') from dual
union all
select 1, 2, to_date('03-Nov-2015')  from dual
union all
select 1, 1, to_date('03-Nov-2015')  from dual
union all
select 2, 2, to_date('03-Nov-2015')  from dual
union all   
select 1 ,1 ,null  from dual
union all  
select 1, 1, null  from dual
union all  
select 1, 1, null  from dual
union all
select 1 ,1 ,to_date('04-Nov-2015')  from dual
union all  
select 1, 1, to_date('04-Nov-2015')  from dual
union all  
select 1, 1, to_date('04-Nov-2015')  from dual
;

begin
        dbms_stats.gather_table_stats(
                ownname         => user,
                tabname         => 'T1',
                method_opt      => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname         => user,
                tabname         => 'T1',
                method_opt      => 'for columns (col1, col2, col3)'
        );
end;
/

I’ve collected stats in a slightly unusual fashion because I want to make it clear that I’ve got “ordinary” stats on the table, with a histogram on the column group (col1, col2, col3). You’ll notice that this combination is a bit special – of the 10 rows in the table there are three with the values (1,1,null) and three with the values (1,1,’04-Nov-2015′), so some very clear skew to the data which results in Oracle gathering a frequency histogram on the table.

These two combinations are remarkably similar, so what happens when we execute a query to find them – since there are no indexes the plan will be a tablescan, but what will we see as the cardinality estimate ? Surely it will be the same for both combinations:

select  count(*)
from    t1
where   col1 = 1
and     col2 = 1
and     col3 = '04-Nov-2015'
;

select  count(*)
from    t1
where   col1 = 1
and     col2 = 1
and     col3 is null
;

Brief pause for thought …

and here are the execution plans, including the predicate sections, in the same order:

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |    12 |     2   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |    12 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |     3 |    36 |     2   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - filter("COL1"=1 AND "COL2"=1 AND "COL3"=TO_DATE(' 2015-11-04
              00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |    12 |     2   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |    12 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |    12 |     2   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - filter("COL3" IS NULL AND "COL1"=1 AND "COL2"=1)

The predictions are different – the optimizer has used the histogram on the column group for the query with “col3 = to_date()”, but not for the query with “col3 is null”. That’s a bit of a shame really because there are bound to be cases where some queries would benefit enormously from having a column group used even when some of its columns are subject to “is null” tests.


The demonstration above isn’t sufficient to prove the point, of course; it merely shows an example of a suspiciously bad estimate. Here are a few supporting details – first we show that both the NULL and the ’04-Nov-2015′ combinations do appear in the histogram. We do this by checking the column stats, the histogram stats, and the values that would be produced by the hashing function for the critical combinations:

set null "n/a"

select distinct
        col3,
        mod(sys_op_combined_hash(col1, col2, col3), 9999999999)
from    t1
where   col3 is null
or      col3 = to_date('04-Nov-2015')
order by
        1
;

column endpoint_actual_value format a40
column column_name           format a32
column num_buckets           heading "Buckets"

select
        column_name,
        num_nulls, num_distinct, density,
        histogram, num_buckets
from    user_tab_cols
where   table_name = 'T1'
;

break on column_name skip 1

select
        column_name,
        endpoint_number, endpoint_value,
        endpoint_actual_value -- , endpoint_repeat_count
from    user_tab_histograms
where   table_name = 'T1'
and     column_name not like 'COL%'
order by
        table_name, column_name, endpoint_number
;

(For an explanation of the sys_op_combined_hash() function see this URL).

Here’s the output from the three queries:

COL3      MOD(SYS_OP_COMBINED_HASH(COL1,COL2,COL3),9999999999)
--------- ----------------------------------------------------
04-NOV-15                                           5347969765
n/a                                                 9928298503

COLUMN_NAME                       NUM_NULLS NUM_DISTINCT    DENSITY HISTOGRAM          Buckets
-------------------------------- ---------- ------------ ---------- --------------- ----------
COL1                                      0            2         .5 NONE                     1
COL2                                      0            2         .5 NONE                     1
COL3                                      3            2         .5 NONE                     1
SYS_STU2IZIKAO#T0YCS1GYYTTOGYE            0            5        .05 FREQUENCY                5

COLUMN_NAME                      ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_ACTUAL_VALUE
-------------------------------- --------------- -------------- ----------------------------------------
SYS_STU2IZIKAO#T0YCS1GYYTTOGYE                 1      465354344
                                               4     5347969765
                                               6     6892803587
                                               7     9853220028
                                              10     9928298503

As you can see, there’s a histogram only on the combination and Oracle has found 5 distinct values for the combination. At endpoint 4 you can see the combination that includes 4th Nov 2015 (with the interval 1 – 4 indicating a frequency of 3 rows) and at endpoint 10 you can see the combination that includes the null (again with an interval indicating 3 rows). The stats are perfect to get the job done, and yet the optimizer doesn’t seem to use them.

If we examine the optimizer trace file (event 10053) we can see concrete proof that this is the case when we examine the “Single Table Access Path” sections for the two queries – here’s a very short extract from each trace file, the first for the query with “col3 = to_date()”, the second for “col3 is null”:

  Col#: 1 2 3    CorStregth: 1.60
ColGroup Usage:: PredCnt: 3  Matches Full: #1  Partial:  Sel: 0.3000

  Col#: 1 2 3    CorStregth: 1.60
ColGroup Usage:: PredCnt: 2  Matches Full:  Partial:

Apparently “col3 is null” is not a predicate!

The column group can be used only if you have equality predicates on all the columns. This is a little sad – the only time that the sys_op_combined_hash() will return a null is (I think) when all its inputs are null, so there is one very special case for null handling with column groups – and even then the num_nulls for the column group would tell the optimizer what it needed to know. As it is, we have exactly the information we need to get a good cardinality estimate for the second query, but the optimizer is not going to use it.


If you create a column group to help the optimizer with cardinality calculations it will not be used for queries where any of the underlying columns is used in an “is null” predicate. This is coded into the optimizer, it doesn’t appear to be an accident.
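
If you want to check which column groups a table already relies on (and which might therefore be bypassed by an "is null" predicate), the extensions are visible in the data dictionary. The following is a minimal sketch using the documented USER_STAT_EXTENSIONS view and DBMS_STATS call, not something taken from the original script:

-- list any column groups / extensions already defined on T1
select extension_name, extension
from   user_stat_extensions
where  table_name = 'T1';

-- a column group can also be declared explicitly before gathering stats
select dbms_stats.create_extended_stats(user, 'T1', '(col1, col2, col3)')
from   dual;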

Reference script: column_group_null.sql

OOW'15 Session Slides - Oracle Alta UI Patterns for Enterprise Applications and Responsive UI Support

Andrejus Baranovski - Wed, 2015-11-04 23:46
You can view slides from recent OOW'15 session (Oracle Alta UI Patterns for Enterprise Applications and Responsive UI Support [UGF2717]) on SlideShare:

I was explaining how to implement responsive layouts and high-performance ADF 12c applications with Alta UI. New ADF 12.2.1 features were discussed. The session went pretty well; all demos were working without issues. I was happy with the feedback, for example a tweet from Grant Ronald:

Here you can download the session demo application. This application includes a high-performance ADF Alta UI dashboard with WebSockets.

Flow Builder/OpenScript and Oracle Test Manager

Anthony Shorten - Wed, 2015-11-04 19:32

The Oracle Functional Testing Advanced Pack for Oracle Utilities provides the content to use Oracle Application Testing Suite to verify your business processes via flows using your test data for multiple scenarios.

Customers have asked about the relationship between this pack and the tools in the Oracle Application Testing Suite. Here is the clarification:

  • The Oracle Functional Testing Advanced Pack for Oracle Utilities is loaded into the Oracle Flow Builder component of the Oracle Application Testing Suite Functional Testing product. The components represent all the functions within the product.
  • You build a flow, using drag and drop, sequencing the provided components to match the business process you want to verify. If there is no component we ship that covers your extensions then you can use a supplied component builder to build a metadata-based service component. If you want to model a user interface then OpenScript can be used to record a component to add to the component library.
  • You then can attach your test data. There are a number of techniques available for that. You can natively input the data if you wish into the component, generate a spreadsheet to fill in the data (and attach it after you filled it out) or supply a CSV data file that represents the data in the flow.
  • You generate (not code) a script. No additional coding is required. As part of the license you can code in OpenScript if you have developers and wish to do any work that way, but it is typically not required.
  • You then have a choice to execute the script.
    • You can execute the script from the OpenScript tool in Eclipse (if you are a developer for example).
    • You can execute the script from a command line (for example you can do this from a third party test manager if you wish)
    • You can automatically execute the script from Oracle Test Manager. This allows groups of scripts to be scheduled and reports on the results for you.
    • You can load the script into the Oracle Load Testing toolset and include it in a performance test suite of tests.

Remember, our testing pack is service-based, not user-interface based. You will need to make sure that the Web Services Accelerator is installed.

Essentially, Flow Builder builds the flow and script from the components in the pack; OpenScript or Oracle Test Manager executes it for you.

Partner Webcast – Enterprise Managed Oracle MySQL with EM12c

Oracle recently announced the general availability of MySQL 5.7, the latest version of the world’s most popular open source database. The new version delivers greater performance, scalability and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Launches Oracle BPM & Oracle WebCenter 12.2.1

WebCenter Team - Wed, 2015-11-04 15:08

Oracle, the immense yet nimble giant that it is, launched a whole slew of innovation at this Oracle Open World. One such area is the updates to the on-premise Oracle BPM and Oracle WebCenter Suites. I have summarized the gist of this big release here; you can also dive into our press release for a quick overview.

Press Release: Oracle Fusion Middleware Enables Organizations to Innovate and Consolidate On-Premises and in the Cloud

Oracle WebCenter

This release provides a refreshingly new user experience that is cohesive, responsive and device friendly across Oracle WebCenter Sites, Oracle WebCenter Portal, Oracle WebCenter Content and Oracle BPM Suite. Oracle WebCenter Portal has a modern and responsive UI, and a refreshed composer that eases page composition. This release also delivers wizard-driven data integration and out-of-the-box custom visualization, new content management leveraging the embedded UI from WebCenter Content, and full access to enterprise content management.

Oracle WebCenter Sites now has a composite visitor profile across multiple profile data sources and delivers A/B testing. It now provides marketing cloud integration with anonymous personalization through BlueKai, Eloqua forms integration for lead generation and Eloqua profile integration for personalization. In addition, enhanced marketing insight delivers in-context analytics on pages and assets, allows setting conversion events in the context of a page, and provides anonymous user tracking.

Oracle WebCenter Content has released modern, embeddable UIs for content and process. It includes a new WebCenter Content Viewer for Imaging with annotation and redaction. This release now has a hybrid ECM integration between WebCenter Content and Document Cloud Service. Lastly, users can now leverage reduced complexity in the infrastructure as Imaging Server is merged into WebCenter Content. 

Oracle BPM Suite

We are excited to announce the new functionality coming to Oracle BPM Suite in this new release, which includes modernization of the user interfaces to make them more streamlined and improve workforce productivity. This new release includes a full set of REST APIs that enable applications built with any technology to leverage BPM Suite as an engine for workflow management, service integration, and process orchestration. This model is particularly well suited when custom mobile apps need a process engine that offers full BPMN and Adaptive Case Management capabilities, and that delivers BAM dashboards to enable control, management, and continuous improvement of processes.

Oracle BPM Suite now allows easier creation of hybrid applications that cross over between Oracle Process Cloud Service and Oracle BPM Suite. This reduces time to market and cost by moving new process initiatives to the cloud. Process Cloud Service can be leveraged for pilots and testing before bringing it on-premise for production. In addition, lines of business can now create applications on Oracle Process Cloud Service which can be brought on-premise when the functionality reaches a state of sophistication that requires higher levels of IT commitment.

To find out more, visit us:  and

Building Oracle #GoldenGate Studio Repository…. a walk through

DBASolved - Wed, 2015-11-04 10:49

With the announcement of Oracle GoldenGate Studio at OOW this year, there has been a good bit of interest in what it can do for any Oracle GoldenGate environment. The basic idea of this new design tool is that it allows the end user to quickly build out GoldenGate architectures and mappings; however, before you can build the architectures and mappings there needs to be a repository to store this information.

Oracle GoldenGate Studio is built on the same framework as Oracle Data Integrator. With this framework, a repository database has to be created to retain all of the architectures and mappings. To do this, you use the Repository Creation Utility (RCU). Unlike ODI, the GoldenGate Studio repository can only be created in an Oracle database. The RCU can be used to create the repository in any edition of the Oracle Database (EE, SE, or XE).

After identifying or installing an Oracle database for the repository, the RCU will need to be run. The steps below will guide you through the creation of the repository needed for Oracle GoldenGate Studio.

Note: The RCU will be run out of the Oracle GoldenGate Studio home directory.

To run the RCU, you will need to be in the Oracle GoldenGate Studio home directory. In this directory, you will need to go to oracle_common/bin as indicated below. Then execute the RCU from there.

$ cd $GGSTUDIO_HOME/oracle_common/bin
$ ./rcu &
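
If GGSTUDIO_HOME is not already set in your environment, point it at the Oracle GoldenGate Studio installation first (the path below is only a placeholder for wherever you installed Studio):

$ export GGSTUDIO_HOME=/u01/app/oracle/product/oggstudio   # placeholder install path
$ cd $GGSTUDIO_HOME/oracle_common/bin
$ ./rcu &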

Executing the “rcu” command will start the RCU wizard to build the repository. The first screen of the RCU will be the welcome screen. Click Next.

Since this is a new repository, you will want to select the “Create Repository” and “System Load and Product Load” options. Click Next.

The next screen of the RCU will ask you for connection information related to the database where the repository will be built. Provide the information and click Next.

While the RCU is attempting to connect to the database, it will run a few checks to verify that the database is supported and can be used for the repository. If you get a warning, this is normal and can be ignored. Once the warning has been ignored, the prerequisites will complete. Click Ok then Next.

The next step of the RCU will allow you to select the components needed for the repository. There are only two main components needed: “Common Infrastructure Services” (selected by default) and “Oracle GoldenGate -> Repository” (selected by default). Both of these selections will have a prefix of “DEV” by default. This can be changed in the “Create new prefix” box.

Note: I like changing it to GGR (GoldenGateRepository), this way I can keep different schemas in the same repository database.

Just like the database connection prerequisites, the RCU will check for all the items needed. Click OK.

The next screen will ask you for passwords that will be used with the schemas in the repository. You have the option of using a single password for all schemas or specify different passwords. Since this is mostly for testing, a single password works for my setup. Click Next.

The custom variables step will require you to create a password for the Supervisor user. Remember, the Supervisor user is a carry-over from the ODI framework. Provide a password that you would like to use. Also notice that the “Encryption Algorithm” variable is empty. This is meant to be empty; do not place anything here. Then click Next.

Now the wizard will prompt you for the information needed to create the default and temp tablespaces for the schemas set up earlier in the wizard. Take all the defaults unless there is something specific you would like to change. Click Next.

The summary page will provide you with the information on items that will be created with the RCU. Click Create and wait for the repository to be created.

Once the “create” button has been pushed, the RCU will begin building the repository.

Upon completion of the repository, the RCU will provide a Completion Summary screen with all the details of the repository build. At this point, you can close out of the RCU by clicking “close”

If you are familiar with any of the Oracle Data Integration tools, this repository wizard is very similar to other products that use a repository (for example, Oracle Data Integrator). The repository is very useful with Oracle GoldenGate Studio because it will be used to keep track of the projects, solutions and mappings that you will be working on.


Filed under: Golden Gate
Categories: DBA Blogs

Instructure Dodges A Data Bullet

Michael Feldstein - Wed, 2015-11-04 10:23

By Phil Hill

Last week’s EDUCAUSE conference was relatively news free, which is actually a good thing as overall ed tech hype levels have come down. Near the end of the conference, however, I heard from three different sources about a growing backlash against Instructure for their developing plans for Canvas Data and real-time events. “They’re Blackboarding us”, “the honeymoon is over”, “we’re upset and that is on the record”. By all appearances, this frustration mostly by R1 institutions was likely to become the biggest PR challenge for Instructure since their 2012 outage, especially considering their impending IPO.

The first complaint centered on Instructure plans to charge for daily data exports as part of Canvas Data, which Instructure announced at InstructureCon in June as:

a hosted data solution providing fully optimized data to K-12 and higher education institutions capturing online teaching and learning activity. As a fundamental tool for education improvement, the basic version of the service will be made available to Canvas clients at no additional cost, with premium versions available for purchase.

What that last phrase meant was that monthly data access was free, but institutions had to pay for daily access. By the EDUCAUSE conference, institutions that are part of the self-organized  “Canvas R1 Peers” group were quite upset that Instructure was essentially selling their own data back to them, and arguments of additional infrastructure costs were falling flat.

Vince Kellen, CIO of the University of Kentucky, was quite adamant on the principle that vendors should not sell back institutional data – that belongs to the schools. At the most vendors should charge for infrastructure.

The second complaint involved a product under development – not yet in beta – called Live Events. This product will provide access to clickstream data and live events, ideally following IMS standards and supporting the Caliper framework. Unizin is the primary customer driving this development, but the Canvas R1 Peers group is also playing an active role. The concern is that the definition of which data to make available in real-time, and how that data is structured to allow realistic access by schools analyzing the data, has not yet been defined to a level that satisfies Unizin and the Peers group.

I contacted the company Friday mid day while also conducting interviews with the schools and with Unizin. Apparently the issues quickly escalated within the company, and Friday evening I got a call from CEO Josh Coates. He said that they had held an internal meeting and decided that their plans were wrong and had to change. They would no longer charge for daily access to Canvas Data. On Monday they posted a blog announcing this decision.

tl;dr: Canvas Data is out of beta. This means free daily data logs are available to all clients. [snip]

We just took Canvas Data out of beta. A beta is a chance to test, with actual clients, the technology, the user experience, and even possible for-cost add-on features. Some of the things we learned from the 30 beta institutions were that once-a-month updates aren’t enough (Canvas Data “Basic”), and charging extra for daily updates is kinda lame (Canvas Data “Plus”).

“Kinda lame” is not the Canvas Way. So we changed it: No more Canvas Data Basic vs. Plus; It’s now just Canvas Data, with daily updates of downloadable files, at no additional cost, for everyone.

Checking back with schools from the Canvas R1 Peers group and Unizin, I was told that Instructure really did defuse the Canvas Data issue with that one quick decision.

On the Live Events issue, the Canvas R1 Peers group put together a requirements document over the weekend that collected data needs from Berkeley, UT Austin, U Kentucky, and the University of Washington[1]. This document was shared with Instructure through Internet2 based on the Net+ contract with Instructure, and they are now working out the details.

Vince Kellen indicated that “Live Events is real minimal start in the right direction”, but that Instructure will need to figure out how to handle transactional events with no data loss and clickstream data not requiring the same fidelity within the same system.

Additional sources confirmed that the Canvas Data issue was resolved and that Instructure was on the right path with Live Events, although there is still a lot of work to be done.

Amin Qazi, CEO of Unizin, stated in an email:

Yes, Unizin had an agreement which allowed access to the daily Canvas Data files without our members paying any additional fees. My understanding of the new pricing model is all Instructure Canvas customers now have a similar arrangement.

Unizin is only beginning to explore the benefits of Live Events from Canvas. We are transporting the data from Instructure to our members via cloud-based infrastructure Unizin is building and maintaining, at no cost to our members. We have started developing some prototypes to take advantage of this data to meet our objective of increasing learner success.

Unizin has had, and plans to have, discussions with Instructure regarding the breadth of the data available (current:, the continued conformity of that data to the IMS Global standards, and certain aspects of privacy and security. Unizin believes these topics are of interest to all Instructure Canvas customers.

We understand this is a beta product from Instructure and we appreciate their willingness to engage in these discussions, and potentially dedicate time and resources. We look forward to working with Instructure to mature Live Events.

In the end, there is work remaining for Instructure to support institutions wanting to access and analyze their learning data from the LMS, but Instructure dodged a bullet by quick decision-making.

Additional Notes
  • I am still amazed that Instructure’s competitors do not understand how Instructure’s rapid and non-defensive acknowledgement and resolution of problems is a major factor in their growth. There were no excuses given this weekend, just decisions and clear communication back to customers.
  • This is the clearest demonstration of value by Unizin that I have seen. Amin’s explanation goes beyond the vague generalities that have plagued Unizin over the past 18 months and is specific and real.
  1. There might be other schools involved.

The post Instructure Dodges A Data Bullet appeared first on e-Literate.

Nagios Authentication with Active Directory.

Pythian Group - Wed, 2015-11-04 10:14


Nagios authentication with Active Directory aligns with user management consolidation policies in most organizations. This post explains how to set up Nagios authentication with Active Directory, using Apache as the web server.

mod_authz_ldap is an apache LDAP authorization module. This can be used to authorize a user based on an LDAP query.

Install mod_authz_ldap.

# yum install mod_authz_ldap

Make sure that the module is loaded in apache:

# cat /etc/httpd/conf.d/authz_ldap.conf
LoadModule authz_ldap_module modules/mod_authz_ldap.so

To query LDAP, ldapsearch can be used. Install following package:

# yum install openldap-clients

Active Directory will not allow an LDAP client to operate against it anonymously; therefore a user DN and password with minimal permissions are required.

For example: CN=Nagios User,CN=Users,DC=hq,DC=CORP,DC=abc,DC=org

The CN attribute corresponds to the “Display Name” of the account in Active Directory.

ldapsearch can be used to query the LDAP server, in this case Active Directory.

In this example, we will look at how to enable access for all the members of the ‘Pythian’ group who in turn have membership in the ‘Nagios Admins’ group.

To find the members of the Pythian group, run the following command:

# ldapsearch -x -LLL -D ‘CN=Nagios User,CN=Users,DC=hq,DC=CORP,DC=abc,DC=org’ -W -H ldap:// -b ‘CN=Pythian,OU=Internal Groups,DC=hq,DC=CORP,DC=abc,DC=org’
Enter LDAP Password:
dn: CN=Pythian,OU=Internal Security Groups,DC=hq,DC=CORP,DC=abc,DC=org
objectClass: top
objectClass: group
cn: pythian
description: General Pythian group.
member: CN=Joseph Minto,OU=Service Consultants,OU=Consultants,OU=User Accounts,DC=hq,DC=CORP,DC=abc,DC=org <—————
member: CN=Test All,OU=Service Consultants,OU=Consultants,OU=User Accounts,DC=hq,DC=CORP,DC=abc,DC=org <—————
distinguishedName: CN=pythian,OU=Internal Security Groups,DC=hq,DC=CORP,DC=abc,DC=org
instanceType: 4
whenCreated: 20120720203444.0Z
whenChanged: 20150611152516.0Z
uSNCreated: 11258263
memberOf: CN=OA Admins,OU=Internal Security Groups,DC=hq,DC=CORP,DC=abc,DC=org
uSNChanged: 128023795
name: pythian
objectGUID:: XY68X44xZU6KQckM3gckcw==
sAMAccountName: pythian
sAMAccountType: 268435456
groupType: -2147483646
objectCategory: CN=Group,CN=Schema,CN=Configuration,DC=CORP,DC=abc,DC=org
dSCorePropagationData: 20140718174533.0Z
dSCorePropagationData: 20121012140635.0Z
dSCorePropagationData: 20120823115415.0Z
dSCorePropagationData: 20120723133138.0Z
dSCorePropagationData: 16010714223649.0Z

To find the details of a user account, the following command can be used:

# ldapsearch -x -LLL -D ‘CN=Nagios User,CN=Users,DC=hq,DC=CORP,DC=abc,DC=org’ -W -H ldap:// -b ‘CN=Pythian,OU=Internal Groups,DC=hq,DC=CORP,DC=abc,DC=org’ -s sub “sAMAccountName=jminto”
Enter LDAP Password:
dn: CN=Joseph Minto,OU=Service Consultants,OU=Consultants,OU=User Accounts,DC= hq,DC=CORP,DC=abc,DC=org
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: Joseph Minto
sn: Minto
c: US
l: Arlington
st: VA
description: 09/30/15 – Consultant – Pythian
postalCode: 22314
telephoneNumber: 1 866 – 798 – 4426
givenName: Joseph
distinguishedName: CN=Joseph Minto,OU=Service Consultants,OU=Consultants,OU=User Accounts,DC=hq,DC=CORP,DC=abc,DC=org
instanceType: 4
whenCreated: 20131203160403.0Z
whenChanged: 20150811045216.0Z
displayName: Joseph Minto
uSNCreated: 62354283
info: sponsored by:
memberOf: CN=Pythian,OU=Internal Security Groups,DC=hq,DC=CORP,DC=abc,DC=org
memberOf: CN=Nagios Admins,OU=Nagios Groups,OU=AppSecurityGroups,DC=hq,DC=CORP,DC=abc,DC=org <————-
uSNChanged: 137182483
co: United States
name: Joseph Minto
objectGUID:: uh9bC/ke6Uap0/dUk9gyLw==
userAccountControl: 512
badPwdCount: 0
codePage: 0
countryCode: 840
badPasswordTime: 130360542953202075
lastLogoff: 0
lastLogon: 130844674893200195
scriptPath: callsl.bat
logonHours:: ////////////////////////////
pwdLastSet: 130305602432591455
primaryGroupID: 513
adminCount: 1
accountExpires: 130881456000000000
logonCount: 116
sAMAccountName: jminto
sAMAccountType: 805306368
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=CORP,DC=abc,DC=org
dSCorePropagationData: 20150320162428.0Z
dSCorePropagationData: 20140718174545.0Z
dSCorePropagationData: 20131203161019.0Z
dSCorePropagationData: 16010101181632.0Z
lastLogonTimestamp: 130837423368430625

Following are the ldapsearch switches used above:

-x              Use simple authentication instead of SASL.

-L              Search results are displayed in LDAP Data Interchange Format detailed in ldif(5). A single -L restricts the output to LDIFv1.
                A second -L disables comments. A third -L disables printing of the LDIF version. The default is to use an extended version of LDIF.

-D binddn       Use the Distinguished Name binddn to bind to the LDAP directory. For SASL binds, the server is expected to ignore this value.

-W              Prompt for simple authentication. This is used instead of specifying the password on the command line.

-H ldapuri      Specify URI(s) referring to the ldap server(s); a list of URI, separated by whitespace or commas is expected; only the protocol/host/port
                fields are allowed. As an exception, if no host/port is specified, but a DN is, the DN is used to look up the corresponding host(s) using
                the DNS SRV records, according to RFC 2782. The DN must be a non-empty sequence of AVAs whose attribute type is "dc" (domain component),
                and must be escaped according to RFC 2396.

-b searchbase   Use searchbase as the starting point for the search instead of the default.

-s {base|one|sub|children}
                Specify the scope of the search to be one of base, one, sub, or children to specify a base object, one-level, subtree, or children search.
                The default is sub. Note: children scope requires LDAPv3 subordinate feature extension.

In the Nagios configuration in Apache, mod_authz_ldap parameters can be used to validate a user in the same way we did with ldapsearch:

# cat /etc/httpd/conf.d/nagios.conf
# Last Modified: 11-26-2005
# This file contains examples of entries that need
# to be incorporated into your Apache web server
# configuration file. Customize the paths, etc. as
# needed to fit your system.

ScriptAlias /nagios/cgi-bin/ "/usr/lib64/nagios/cgi-bin/"

<Directory "/usr/lib64/nagios/cgi-bin/">
   Options ExecCGI
   AllowOverride None
   Order allow,deny
   Allow from all
   AuthName "Nagios Access"
   AuthType Basic
   AuthzLDAPMethod ldap
   AuthzLDAPServer ""
   AuthzLDAPBindDN "CN=Nagios User,CN=Users,DC=hq,DC=CORP,DC=abc,DC=org"
   AuthzLDAPBindPassword "typepasswordhere"
   AuthzLDAPUserKey sAMAccountName
   AuthzLDAPUserBase "CN=Pythian,OU=Internal Groups,DC=hq,DC=CORP,DC=abc,DC=org"
   AuthzLDAPUserScope subtree
   AuthzLDAPGroupKey cn
   AuthzLDAPMemberKey member
   AuthzLDAPSetGroupAuth ldapdn
   require group "Nagios Admins"
</Directory>

Alias /nagios "/usr/share/nagios/html"

<Directory "/usr/share/nagios/html">
   Options None
   AllowOverride None
   Order allow,deny
   Allow from all
   AuthName "Nagios Access"
   AuthType Basic
   AuthzLDAPMethod ldap
   AuthzLDAPServer ""
   AuthzLDAPBindDN "CN=Nagios User,CN=Users,DC=hq,DC=CORP,DC=abc,DC=org"
   AuthzLDAPBindPassword "typepasswordhere"
   AuthzLDAPUserKey sAMAccountName
   AuthzLDAPUserBase "CN=Pythian,OU=Internal Groups,DC=hq,DC=CORP,DC=abc,DC=org"
   AuthzLDAPUserScope subtree
   AuthzLDAPGroupKey cn
   AuthzLDAPMemberKey member
   AuthzLDAPSetGroupAuth ldapdn
   require group "WUG Admins"
</Directory>

In the above configuration, mod_authz_ldap uses parameters like ldapserver, binddn, bindpassword, scope, searchbase, etc. to see if the supplied user credentials can be found in Active Directory. It also checks whether the user is a member of the ‘Nagios Admins’ group.

Restarting Apache will then enable Active Directory-based authentication for Nagios.
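
On a Red Hat style system that is something along the following lines (a sketch; service names and paths may differ on your distribution):

# check the configuration syntax first, then restart Apache
apachectl configtest
service httpd restart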


Discover more about our expertise in Infrastructure Management.

Categories: DBA Blogs

FREE Live Webinar: Learn Oracle Access Manager (OAM) 11g R2 from team K21

Online Apps DBA - Wed, 2015-11-04 07:54
This entry is part 1 of 2 in the series Oracle Access Manager


Oracle Access Manager (OAM) is Oracle’s recommended Single Sign-On (SSO) solution and is mandatory for Oracle Fusion Applications. Apps DBAs and Fusion Middleware Administrators with OAM skills have better career opportunities and are usually paid more.


Join team K21 Technologies for a FREE Webinar this Saturday (7th November 2015) at 9:00 AM PST / 12:00 PM EST / 10:30 PM IST / 5:00 PM GMT, where OAM expert Ganesh Kamble will cover the overview, architecture and the important components that are part of Oracle Access Manager (OAM).

This Webinar will cover the key points that are important for beginners and those who want to pursue a career in Oracle Identity & Access Management (OAM/OID/OIM).

Oracle Access Manager provides centralized authentication, authorization and single sign-on to secure access across enterprise applications.

Below you can see the architecture of Oracle Access Manager 11g R2





Oracle Access Manager 11g includes:

  • Database that contains OAM metadata and schemas
  • LDAP Server where identities of users and groups are stored
  • OHS /Apache as Web Server
  • Application resource which is protected by OAM
  • OAM Admin Server
  • OAM Managed server
  • WebGate, which connects to the web server and transfers requests from the web server to the OAM server

To learn more about the components of Oracle Access Manager and its detailed architecture, click on the button below to register for our Webinar.


Click Here to register for the free Webinar


If you have any Question related to OAM that you want us to cover in our FREE Webinar on OAM then post it below as comment or ask in our Private Facebook Group

The post FREE Live Webinar: Learn Oracle Access Manager (OAM) 11g R2 from team K21 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Online Resizing of ASM Disks

Pythian Group - Wed, 2015-11-04 07:01


The SAN administrator has informed us that they have extended the disks. This is the information I had from our client. The disks were labelled:



The original size of the disks was 300GB and they had been extended to 600GB. These were multipath disks belonging to the ASM diskgroup ARCH, which was being used to store archive logs. The database was in a 2-node RAC configuration. The servers were running Red Hat Linux 5.9 – 2.6.18-406.el5 – 64bit.

I checked the disks using fdisk (as the root user) and got the following:

fdisk -l /dev/mpath/mpath_compellent_oraarch

Disk /dev/mpath/mpath_compellent_oraarch: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mpath/mpath_compellent_oraarch doesn't contain a valid partition table

This confirmed that the OS was not aware of the size extension.


Firstly, I wanted to confirm that the correct disks had been extended. So the first place to look is in ASM:

select g.name
     , d.path
     , d.os_mb
     , d.total_mb
from v$asm_diskgroup g
   , v$asm_disk      d
where g.group_number = d.group_number
and   g.name         = 'ARCH'

NAME       PATH                           OS_MB   TOTAL_MB
---------- ------------------------- ---------- ----------
ARCH       ORCL:ASMDISK_NEW_ARCH03       307200     307200
ARCH       ORCL:ASMDISK_NEW_ARCH01       307200     307200
ARCH       ORCL:ASMDISK_NEW_ARCH02       307200     307200


Now we need to match these names to those provided by the SAN administrator.

Check the directory:

ls -l /dev/oracleasm/disks/ASMDISK_NEW_ARCH*
brw-rw---- 1 oracle dba 253, 30 Oct  6 00:35 /dev/oracleasm/disks/ASMDISK_NEW_ARCH01
brw-rw---- 1 oracle dba 253, 29 Oct  6 00:35 /dev/oracleasm/disks/ASMDISK_NEW_ARCH02
brw-rw---- 1 oracle dba 253, 32 Oct  6 00:35 /dev/oracleasm/disks/ASMDISK_NEW_ARCH03

This gives us the major and minor numbers for the disks – the major number is 253 and the minor numbers are 30, 29 and 32.


Then compare these numbers against the devices listed in:

ls -l /dev/mapper/mpath_compellent_oraarch*
brw-rw---- 1 root disk 253, 30 Oct  6 00:34 /dev/mapper/mpath_compellent_oraarch
brw-rw---- 1 root disk 253, 29 Oct  6 00:34 /dev/mapper/mpath_compellent_oraarch02
brw-rw---- 1 root disk 253, 32 Oct  6 00:34 /dev/mapper/mpath_compellent_oraarch03

The numbers match showing that they are the same devices.


Now we need to find the actual disks that make up the multipath devices.

multipath -l
Output truncated for brevity

mpath_compellent_oraarch03 (36000d310009aa700000000000000002b) dm-32 COMPELNT,Compellent Vol
[size=300G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 13:0:0:3  sdfm       130:128 [active][undef]
 \_ 11:0:0:3  sdgd       131:144 [active][undef]

mpath_compellent_oraarch02 (36000d310009aa700000000000000002a) dm-29 COMPELNT,Compellent Vol
[size=300G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 12:0:0:2  sdfi       130:64  [active][undef]
 \_ 14:0:0:2  sdfk       130:96  [active][undef]

mpath_compellent_oraarch (36000d310009aa7000000000000000026) dm-30 COMPELNT,Compellent Vol
[size=300G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 13:0:0:1  sdfj       130:80  [active][undef]
 \_ 11:0:0:1  sdgc       131:128 [active][undef]

From here we can see the disks: sdfj and sdgc (mpath_compellent_oraarch), sdfi and sdfk (mpath_compellent_oraarch02), and sdfm and sdgd (mpath_compellent_oraarch03).

We need to find this information on the other node as well, as the underlying disk names will very likely be different on the other server.


Now for each disk we need to rescan the disk to register the new size. To do this we need to do the following for each disk on both nodes:

echo 1 > /sys/block/sdfm/device/rescan
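
Since each multipath device has two underlying paths, a small loop saves some typing. This is only a sketch; the device names below are the node 1 names taken from the multipath -l output above and will differ on the other node:

# rescan every underlying path of the three multipath devices (node 1 device names)
for dev in sdfj sdgc sdfi sdfk sdfm sdgd; do
  echo 1 > /sys/block/${dev}/device/rescan
done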

Then we can check each disk to make sure it has successfully been extended:

fdisk -l /dev/sdfm

Disk /dev/sdfm: 644.2 GB, 644245094400 bytes
255 heads, 63 sectors/track, 78325 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdfm doesn't contain a valid partition table

Looks good – once this has been done for all the disks, we can then extend the multipath device for each map name, again on both nodes:

multipathd -k'resize map mpath_compellent_oraarch'
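
The same command is then repeated for the other two maps, again on both nodes (spelled out here for completeness; the map names are the ones shown by multipath -l above):

multipathd -k'resize map mpath_compellent_oraarch02'
multipathd -k'resize map mpath_compellent_oraarch03'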

Then we can check the multipath device disk size:

fdisk -l /dev/mpath/mpath_compellent_oraarch

Disk /dev/mpath/mpath_compellent_oraarch: 644.2 GB, 644245094400 bytes
255 heads, 63 sectors/track, 78325 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mpath/mpath_compellent_oraarch doesn't contain a valid partition table

Looks good – once done on both nodes we can then resize the ASM disks within ASM:

SQL> select g.name
     , d.path
     , d.os_mb
     , d.total_mb
from v$asm_diskgroup g
   , v$asm_disk      d
where g.group_number = d.group_number
and   g.name         = 'ARCH'

NAME       PATH                           OS_MB   TOTAL_MB
---------- ------------------------- ---------- ----------
ARCH       ORCL:ASMDISK_NEW_ARCH03       614400     307200
ARCH       ORCL:ASMDISK_NEW_ARCH01       614400     307200
ARCH       ORCL:ASMDISK_NEW_ARCH02       614400     307200

SQL> alter diskgroup ARCH resize all;

Diskgroup altered.

SQL> select g.name
     , d.path
     , d.os_mb
     , d.total_mb
from v$asm_diskgroup g
   , v$asm_disk      d
where g.group_number = d.group_number
and   g.name         = 'ARCH'

NAME       PATH                           OS_MB   TOTAL_MB
---------- ------------------------- ---------- ----------
ARCH       ORCL:ASMDISK_NEW_ARCH03       614400     614400
ARCH       ORCL:ASMDISK_NEW_ARCH01       614400     614400
ARCH       ORCL:ASMDISK_NEW_ARCH02       614400     614400

The disks and diskgroup were successfully resized.


Discover more about our expertise in Database Management.

Categories: DBA Blogs

Oracle ISVs : Leading The Charge in Cloud Transformation

At Oracle we are offering our application solutions in the Cloud (e.g. ERP, HCM); this also means our ISV (Independent Software Vendor) ecosystem is critical in enriching the SaaS portfolio ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

SQL Needs a Sister (Broken Link Corrected)

Gerger Consulting - Wed, 2015-11-04 01:28
I've just published an article on Medium about the missing sister of SQL. I think there is a fundamental mistake we’ve been making in using SQL. We use it both to ask a question and format the answer and I think this is just wrong. You can read the article at this link. (Sorry for the broken link in the previous post and a big thank you to the person who sent us a comment about the issue. :-) )
Categories: Development

November 17: Lumosity―Oracle ERP Cloud Customer Reference Forum

Linda Fishman Hoyle - Tue, 2015-11-03 18:04

Join us for another Oracle Customer Reference Forum on November 17, 2015 at 9:00 a.m. PDT. Tyler Chapman, VP of Finance and Controller of Lumosity, will explain how Oracle ERP Cloud is helping this innovative company scale and expand globally. It is also helping to provide management with insightful decision-making data and to advance the finance organization as a strategic business partner.

Chapman will share his views on the best time to invest in a new ERP system and how to take advantage of Oracle ERP Cloud’s embedded best practices. He will put this into the context of Lumosity’s goals to leverage Oracle’s fully integrated ecosystem, including HCM.

Register to attend the live Forum on Tuesday, November 17, 2015 at 9:00 a.m. PDT and learn more about Lumosity’s experience with Oracle ERP Cloud.

Hide and Seek

Scott Spendolini - Tue, 2015-11-03 14:30

In migrating SERT from 4.2 to 5.0, there are a number of challenges that I'm facing. This has to do with the fact that I am also migrating a custom theme to the Universal Theme, as almost 100% of the application would have just worked had I chosen to leave it alone. I didn't. More on that journey in a longer post later.

In any case, some of the IR filters that I have on by default can get a bit... ugly. Even in the Universal Theme:


In APEX 4.2, you could click on the little arrow, and it would collapse the region entirely, leaving only a small trace that there's a filter. That's no longer the case:


So what to do... Enter CSS & the Universal Theme.

Simply edit the page and add the following to the Inline CSS region (or add the CSS to the Theme Roller if you want this change to impact all IRs):

.a-IRR-reportSummary-item { display: none; }

This will cause most of the region to simply not display at all - until you click on the small triangle icon, which will expand the entire set of filters for the IR. Clicking it again makes it go away. Problem solved with literally three words (and some punctuation).
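
If you would rather hide the filter summary for just one report instead of every IR on the page, the same rule can be scoped to the region's static ID (the #my-report ID below is hypothetical; assign your own static ID to the region):

/* hypothetical static ID assigned to the Interactive Report region */
#my-report .a-IRR-reportSummary-item { display: none; }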

OakTable video of myself and others

Bobby Durrett's DBA Blog - Tue, 2015-11-03 10:53

You can find the full length video of my Delphix talk that I did at OakTable World on Tuesday here: url

Also, the OakTable folks have updated the OakTable World agenda page with video of all the talks. There is lots of good material there, all for free. Scroll down to the bottom of the page to find the links to the videos.


Categories: DBA Blogs

Partner Webcast – Oracle Mobile Security Suite (OMSS): Unified Security for Mobility

Since the latest release, Oracle Mobile Security Suite (OMSS) is a fully featured Identity-Centric Enterprise Mobility Management (EMM) Platform that can address a mix of both BYOD and corporate...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Twitter : Is it a valuable community contribution? (Follow Up)

Tim Hall - Tue, 2015-11-03 08:01

There was some pretty interesting feedback on yesterday’s post, so I thought I would mention it in a follow up post, so it doesn’t get lost in the wasteland of blog comments. :)

Remember, I wasn’t saying certain types of tweets were necessarily good or bad. I was talking about how *I* rate them as far as content production and how they *might* be rated by an evangelism program…

  • Social Tweets : A few people including Martin, Oyvind, Stew and Hermant, mentioned how social tweets are good for binding the community and helping to meet other like-minded people. I agree and I personally like the more random stuff that people post. The issue was, does this constitute good content that should be considered for your inclusion in an evangelism program? I would say no.
  • Timeline : Baback, Matthew, Noons and Hermant all mentioned things about the timeline issue associated with Twitter. Twitter is a stream of consciousness, so if you tune out for a while (to go to bed) or you live in a different time zone to other people, it is easy for stuff to get lost. You don’t often come across an old tweet, but you will always stumble upon old blog posts and articles, thanks to the wonders of Google. :) The quick “disappearance” of information is one of the reasons I don’t rate Twitter as a good community contribution.
  • Notifications : There was much love for notification posts. These days I quite often find things via Twitter before I notice them sitting in my RSS reader. I always post notifications and like the fact others do too, but as I said yesterday, it is the thing you are pointing too that is adding the most value, not the notification tweet. The tweet is useful to direct people to the content, but it in itself does not seem like valuable community participation to me, just a byproduct of being on Twitter.
  • Content Aggregation : Stew said an important point where content aggregation is concerned. If you tweet a link to someone else’s content, you are effectively endorsing that content. You need to be selective.
  • Audience : Noons mentioned the audience issue. Twitter is a public stream, but being realistic, the only people who will ever notice your tweets are those that follow you, those you tag in the tweet or robots mindlessly retweeting hashtags. Considering the effective lifespan of a tweet, it’s a rather inefficient mechanism unless you have a lot of followers, or some very influential followers.

So I’m still of a mind that Twitter is useful, but shouldn’t be the basis of your community contribution if you are hoping to join an evangelism program. :)



Update: I’ve tried to emphasize it a number of times, but I think it’s still getting lost in the mix. This is not about Twitter=good/bad. It’s about the value you as an individual are adding by tweeting other people’s content, as opposed to creating good content yourself. All community participation is good, but just tweeting other people’s content is less worthy of attention *in my opinion*, than producing original content.

If someone asked the question, “What do I need to do to become an Oracle ACE?”, would you advise them to tweet like crazy, or produce some original content? I think that is the crux of the argument. :)

Of course, it’s just my opinion. I could be wrong. :)
