
Feed aggregator

Trace Files -- 8a : Using SQL Trace for Performance Evaluations

Hemant K Chitale - Sun, 2015-11-15 09:43
All previous examples of tracing are only to demonstrate enabling tracing and viewing the results.

Now, I present a case where we can use tracing to evaluate the performance of queries.  There may be follow-up posts to expand on this or add examples.

I start with a table with 7 million rows and more than 750MB in size (larger than the database buffer cache).  This is a multiplied copy of DBA_OBJECTS.  Statistics have not been gathered on the Table.  The Table has no Indexes.

[oracle@ora11204 ~]$ sqlplus hemant/hemant

SQL*Plus: Release Production on Sun Nov 15 22:51:15 2015

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> exec dbms_session.session_trace_enable(waits=>TRUE);

PL/SQL procedure successfully completed.

SQL> select count(*) from all_objects_many_list where owner = 'HEMANT';


SQL> exec dbms_session.session_trace_disable;

PL/SQL procedure successfully completed.

SQL> select value from v$diag_info where name = 'Default Trace File';


SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@ora11204 ~]$ tkprof /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_8944.trc MY_ALL_OBJ_M_L_QRY.PRF sys=NO

TKPROF: Release - Development on Sun Nov 15 22:53:12 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

[oracle@ora11204 ~]$

We expect a Full Table Scan against the Table.  But the tkprof output shows something else as well.

SQL ID: 9s5xc0z5khvq7 Plan Hash: 3546461632

NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
SAMPLE BLOCK (:"SYS_B_6" , :"SYS_B_7") SEED (:"SYS_B_8")

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.88 123 65 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.01 0.88 123 65 0 1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=65 pr=123 pw=0 time=887883 us)
3289 3289 3289 TABLE ACCESS SAMPLE ALL_OBJECTS_MANY_LIST (cr=65 pr=123 pw=0 time=20429 us cost=19 size=149234 card=5146)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 35 0.08 0.70
db file scattered read 11 0.07 0.17

SQL ID: bpgst4ajh1wb2 Plan Hash: 2662061148

select count(*)
from all_objects_many_list where owner = 'HEMANT'

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.03 2 2 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.68 12.88 97792 97798 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.68 12.91 97794 97800 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=97798 pr=97792 pw=0 time=12886862 us)
256 256 256 TABLE ACCESS FULL ALL_OBJECTS_MANY_LIST (cr=97798 pr=97792 pw=0 time=37649164 us cost=26772 size=26520 card=1560)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
reliable message 1 0.03 0.03
enq: KO - fast object checkpoint 1 0.00 0.00
direct path read 1542 0.18 12.54
SQL*Net message from client 2 18.06 18.06

Note the additional SQL that was executed before my query. That is the Dynamic Sampling query, identified by the OPT_DYN_SAMP comment.  Although this query is recursive (indicated by "dep=1" in the raw trace file), it is not excluded by sys=NO, because it is actually executed by the user "HEMANT" (indicated by "uid=87" and "lid=87", where 87 is the userid of database user 'HEMANT').
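Those dep= and uid= fields can be pulled out of the raw trace file mechanically. Here is a rough Python sketch (the key=value line format is taken from the trace extract in this post; this is an illustrative parser, not a complete 10046 trace parser):

```python
import re

# Parse the key=value fields of a "PARSING IN CURSOR" line from a raw
# SQL trace file.  Field names (dep, uid, lid, sqlid) match the sample
# trace lines quoted in this post.
CURSOR_RE = re.compile(r"PARSING IN CURSOR #\d+ (?P<fields>.*)")

def parse_cursor_line(line):
    m = CURSOR_RE.search(line)
    if not m:
        return None
    fields = {}
    for pair in m.group("fields").split():
        if "=" in pair:
            k, v = pair.split("=", 1)
            fields[k] = v.strip("'")
    return fields

line = ("PARSING IN CURSOR #140417579320304 len=616 dep=1 uid=87 oct=3 "
        "lid=87 tim=1447599117299160 hv=3408424647 ad='7f8bb8f0' "
        "sqlid='9s5xc0z5khvq7'")
f = parse_cursor_line(line)
# dep=1 marks the statement as recursive, but uid/lid=87 (HEMANT) is why
# tkprof's sys=no does not exclude it.
print(f["dep"], f["uid"], f["sqlid"])
```

A filter like this is how you would separate truly SYS-owned recursive SQL (different uid) from user-owned recursive SQL such as the Dynamic Sampling query.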

Let's see a portion of the raw trace file relating to this Dynamic Sampling query.

PARSING IN CURSOR #140417579320304 len=616 dep=1 uid=87 oct=3 lid=87 tim=1447599117299160 hv=3408424647 ad='7f8bb8f0' sqlid='9s5xc0z5khvq7'
PARSE #140417579320304:c=1000,e=670,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=0,tim=1447599117299155
EXEC #140417579918768:c=0,e=36,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=3,plh=2239883476,tim=1447599117300059
FETCH #140417579918768:c=1000,e=20,p=0,cr=2,cu=0,mis=0,r=0,dep=2,og=3,plh=2239883476,tim=1447599117300113
CLOSE #140417579918768:c=0,e=7,dep=2,type=3,tim=1447599117300148
EXEC #140417579320304:c=1000,e=1319,p=0,cr=2,cu=0,mis=1,r=0,dep=1,og=1,plh=3546461632,tim=1447599117300542
WAIT #140417579320304: nam='db file sequential read' ela= 17 file#=4 block#=330 blocks=1 obj#=35014 tim=1447599117300633
WAIT #140417579320304: nam='db file sequential read' ela= 12384 file#=4 block#=1044 blocks=1 obj#=35014 tim=1447599117313168
WAIT #140417579320304: nam='db file sequential read' ela= 30217 file#=4 block#=1839 blocks=1 obj#=35014 tim=1447599117343572
WAIT #140417579320304: nam='db file sequential read' ela= 14367 file#=4 block#=4273 blocks=1 obj#=35014 tim=1447599117358103
WAIT #140417579320304: nam='db file sequential read' ela= 26739 file#=4 block#=4666 blocks=1 obj#=35014 tim=1447599117384964
WAIT #140417579320304: nam='db file sequential read' ela= 15278 file#=4 block#=8352 blocks=1 obj#=35014 tim=1447599117400429
WAIT #140417579320304: nam='db file scattered read' ela= 55038 file#=4 block#=14030 blocks=8 obj#=35014 tim=1447599117455816

*** 2015-11-15 22:51:57.527
WAIT #140417579320304: nam='db file sequential read' ela= 71363 file#=4 block#=14417 blocks=1 obj#=35014 tim=1447599117527415
WAIT #140417579320304: nam='db file sequential read' ela= 81866 file#=4 block#=18668 blocks=1 obj#=35014 tim=1447599117609519
WAIT #140417579320304: nam='db file sequential read' ela= 65981 file#=4 block#=23052 blocks=1 obj#=35014 tim=1447599117675718
WAIT #140417579320304: nam='db file sequential read' ela= 14640 file#=4 block#=25282 blocks=1 obj#=35014 tim=1447599117690503
WAIT #140417579320304: nam='db file sequential read' ela= 65302 file#=4 block#=26230 blocks=1 obj#=35014 tim=1447599117755921
WAIT #140417579320304: nam='db file scattered read' ela= 51919 file#=4 block#=27340 blocks=8 obj#=35014 tim=1447599117808090
WAIT #140417579320304: nam='db file scattered read' ela= 70379 file#=4 block#=27479 blocks=8 obj#=35014 tim=1447599117878651

Note the mix of single-block 'db file sequential read' waits and multi-block 'db file scattered read' waits.  Also, note from the block numbers (indicated by "block#") that the reads are not really sequential blocks from disk.  The first block read is block 330 of file 4.  That is the header block of the Table (i.e. Segment); we could query DBA_SEGMENTS to confirm this.  After that, Oracle has randomly identified blocks to scan.  The multi-block reads can be identified by "blocks=8" (in this extract we see only 3 multi-block reads, as I haven't presented the entire raw trace file here).  The Dynamic Sampling had 35 'db file sequential read' waits and 11 'db file scattered read' waits -- which we see in the tkprof output.  The tkprof output shows that although there were physical read requests for 123 blocks, only 65 were actually sampled.  Dynamic Sampling took 0.88 seconds (i.e. added 0.88 seconds to the time to execute my query).

Dynamic Sampling is done using Buffered reads.  However, the Full Table Scan of the large table is executed using Direct Path reads.  That is why the wait event for the count(*) query is different -- it is a 'direct path read' wait that occurs many times.

PARSING IN CURSOR #140417579185064 len=65 dep=0 uid=87 oct=3 lid=87 tim=1447599118188941 hv=2734748002 ad='7f956058' sqlid='bpgst4ajh1wb2'
select count(*) from all_objects_many_list where owner = 'HEMANT'
PARSE #140417579185064:c=24996,e=1098676,p=127,cr=184,cu=0,mis=1,r=0,dep=0,og=1,plh=2662061148,tim=1447599118188936
EXEC #140417579185064:c=0,e=36,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2662061148,tim=1447599118189025
WAIT #140417579185064: nam='SQL*Net message to client' ela= 5 driver id=1650815232 #bytes=1 p3=0 obj#=35014 tim=1447599118189093
WAIT #140417579185064: nam='reliable message' ela= 33598 channel context=2123970008 channel handle=2123913464 broadcast message=2125033832 obj#=35014 tim=1447599118223060
WAIT #140417579185064: nam='enq: KO - fast object checkpoint' ela= 3905 name|mode=1263468550 2=65560 0=1 obj#=35014 tim=1447599118227031
WAIT #140417579185064: nam='direct path read' ela= 32287 file number=4 first dba=331 block cnt=13 obj#=35014 tim=1447599118259487
WAIT #140417579185064: nam='direct path read' ela= 97 file number=4 first dba=345 block cnt=15 obj#=35014 tim=1447599118259681
WAIT #140417579185064: nam='direct path read' ela= 9530 file number=4 first dba=361 block cnt=15 obj#=35014 tim=1447599118269284
WAIT #140417579185064: nam='direct path read' ela= 81 file number=4 first dba=377 block cnt=15 obj#=35014 tim=1447599118269448
WAIT #140417579185064: nam='direct path read' ela= 4879 file number=4 first dba=393 block cnt=15 obj#=35014 tim=1447599118274418
WAIT #140417579185064: nam='direct path read' ela= 18847 file number=4 first dba=409 block cnt=15 obj#=35014 tim=1447599118293388
WAIT #140417579185064: nam='direct path read' ela= 3692 file number=4 first dba=425 block cnt=15 obj#=35014 tim=1447599118297175
WAIT #140417579185064: nam='direct path read' ela= 2612 file number=4 first dba=441 block cnt=15 obj#=35014 tim=1447599118299862
WAIT #140417579185064: nam='direct path read' ela= 42872 file number=4 first dba=514 block cnt=62 obj#=35014 tim=1447599118342814
WAIT #140417579185064: nam='direct path read' ela= 12735 file number=4 first dba=576 block cnt=64 obj#=35014 tim=1447599118355614
WAIT #140417579185064: nam='direct path read' ela= 41997 file number=4 first dba=642 block cnt=62 obj#=35014 tim=1447599118397975
WAIT #140417579185064: nam='direct path read' ela= 11584 file number=4 first dba=704 block cnt=64 obj#=35014 tim=1447599118409765
WAIT #140417579185064: nam='direct path read' ela= 10909 file number=4 first dba=770 block cnt=62 obj#=35014 tim=1447599118420916
WAIT #140417579185064: nam='direct path read' ela= 10969 file number=4 first dba=832 block cnt=64 obj#=35014 tim=1447599118432086

Before we look at the 'direct path read' waits, please see the 'reliable message' and 'enq: KO - fast object checkpoint' waits.  Oracle will always cause a segment checkpoint before it starts doing Direct Path reads.  This is to ensure that any dirty buffers for that segment (the table "all_objects_many_list") are flushed to disk before Oracle starts using Direct Path reads to bypass the Buffer Cache. 

Next, if we look at the 'direct path read' waits, we see that they are reported differently from the 'db file sequential || scattered read' waits.  Instead of reporting "block#" and "blocks", this wait event reports "dba" ("data block address") and "block cnt".  Note that the first read is of 13 blocks starting at block number 331 (after the Segment Header).  The next read starts at block number 345 (which is 331+13+1).  This is a 15-block read.  The third read starts at block 361 (which is 345+15+1).  Thus, we can see that the Table is laid out in contiguous extents in the same datafile (file=4).
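The arithmetic above (each read starting at the previous dba + block cnt + 1) can be checked mechanically. A quick Python sketch over the (first dba, block cnt) pairs taken from the wait lines in the extract:

```python
# (first dba, block cnt) pairs taken from the 'direct path read' waits above
reads = [(331, 13), (345, 15), (361, 15), (377, 15),
         (393, 15), (409, 15), (425, 15), (441, 15)]

# For contiguously laid out extents, each read begins just past the end
# of the previous one: next_dba == prev_dba + prev_cnt + 1, i.e. a
# constant gap of 1 block between consecutive reads.
gaps = [nxt[0] - (cur[0] + cur[1]) for cur, nxt in zip(reads, reads[1:])]
print(gaps)  # [1, 1, 1, 1, 1, 1, 1]
```

A constant gap confirms the contiguous layout; a jump in the gap (as at dba=514 later in the trace) marks the start of a new, larger extent.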

The "ela=" time is the time in microseconds for each read.  The obj# is the Object ID of the table.
During the Full Table Scan, there were 1,542 occurrences of the 'direct path read' wait event.  The average read time for 'direct path read' waits can be computed by dividing the 12.54 seconds by the 1,542 occurrences.  But we must note that some of these reads were 16 or fewer blocks and others were 64 or so blocks.  The first read took 32,287 microseconds, the next 97 microseconds.  It is possible that either the OS or the Storage Cache did some pre-fetching of blocks.  But note the fluctuations in read wait times.  Averages can be misleading.  It is sometimes a good idea to walk through the trace file (preferably using a tool) to determine a histogram of these waits.

If we look at the small extract of the trace file that I've pasted here, we can see that the reads were growing larger.  There were 64 blocks read in the last wait event that I've pasted.  Reviewing the entire trace file, I see that "64" is the highest reported "block cnt" figure.  However, the distribution of extent sizes goes up to 8MB (1,024 blocks); the largest read was 64 blocks because of how Oracle has auto-configured 'db_file_multiblock_read_count'.  Remember, tkprof does NOT show you the sizes of the multi-block reads, either as 'db file scattered read' or 'direct path read'.  You have to read or parse the raw trace file to identify this.
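Since tkprof does not report multi-block read sizes, one way to get them is to parse the raw trace yourself. A minimal Python sketch of building such a histogram of 'direct path read' sizes (the regex is an assumption matching the WAIT line format shown in this post, not an official parser):

```python
import re
from collections import Counter

WAIT_RE = re.compile(r"nam='direct path read' ela= (\d+) .*?block cnt=(\d+)")

def read_size_histogram(trace_lines):
    """Histogram of 'direct path read' sizes (in blocks) from raw trace lines."""
    sizes = Counter()
    for line in trace_lines:
        m = WAIT_RE.search(line)
        if m:
            sizes[int(m.group(2))] += 1
    return sizes

# Sample WAIT lines, taken from the trace extract above
sample = [
    "WAIT #1: nam='direct path read' ela= 32287 file number=4 first dba=331 block cnt=13 obj#=35014 tim=1",
    "WAIT #1: nam='direct path read' ela= 97 file number=4 first dba=345 block cnt=15 obj#=35014 tim=2",
    "WAIT #1: nam='direct path read' ela= 12735 file number=4 first dba=576 block cnt=64 obj#=35014 tim=3",
]
print(read_size_histogram(sample))
```

Run over the full trace file, a histogram like this shows at a glance how many of the 1,542 reads were small (13-15 blocks) versus the 62-64 block reads, which an average would hide.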

SQL> select blocks, count(*) 
2 from dba_extents
3 where owner = 'HEMANT'
4 and segment_name = 'ALL_OBJECTS_MANY_LIST'
5 group by blocks
6 order by 1;

    BLOCKS   COUNT(*)
---------- ----------
         8         16
       128         63
      1024         88

SQL> show parameter db_file_multi

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_multiblock_read_count        integer     106

The DBA here had not specifically configured db_file_multiblock_read_count to 128 (I would certainly have done so, and would have got better performance on the Full Table Scan).
Categories: DBA Blogs

Oracle Cloud Platform Partner Webcast: Oracle IoT Cloud Service Driving value in the connected world

Partner Webcast: Oracle IoT Cloud Service Driving value in the connected world ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle SSO Failure - Unable to process request Either the requested URL was not specified in terms of a fully-qualified host name or OHS single sign-on is incorrectly configured

Vikram Das - Sat, 2015-11-14 14:57
Today, during a cutover when we were moving one of our ERP instances from Cisco UCS VMware VMs to Exalogic and Exadata, I got a call from Bimal.  The extranet iSupplier URL had been configured, but whenever any user logged in, they saw the following error instead of the iSupplier OAF Home page:

Oracle SSO Failure - Unable to process request Either the requested URL was not specified in terms of a fully-qualified host name or OHS single sign-on is incorrectly configured

A search on showed many hits.  I went through a few of them and ruled out the solutions given. This article sounded promising: Oracle SSO Failure - Unable to process request Either the requested URL was not specified in terms of a fully-qualified host name or OHS single sign-on is incorrectly configured (Doc ID 1474474.1).

The solution suggested:

There is a hardware load-balancer for a multi-tier environment in place, as well as an SSL accelerator.

     For R12, there is a context variable, s_enable_sslterminator, that was set to "#".

     This should be null for e-Business R12 using the specific hardware mentioned before.

1. Set  context variable, s_enable_sslterminator to null,

2. Re-ran autoconfig,

3. Re-test: Single sign-on via IE and Firefox now works as expected.

I asked the DBAs to check the value of s_enable_sslterminator:

grep s_enable_sslterminator

and sure enough the value was #

As per article Enabling SSL or TLS in Oracle E-Business Suite Release 12 (Doc ID 376700.1), the value of s_enable_sslterminator should be made null if you are using an SSL accelerator.  In our case we use SSL certificate on the Load Balancer and never on Web servers.

The DBAs removed the #
Ran autoconfig
Deregistered SSO
Registered SSO

The user was able to login after that.

Categories: APPS Blogs

Interview with Josh Coates, CEO of Instructure, on today’s IPO

Michael Feldstein - Fri, 2015-11-13 17:01

By Phil HillMore Posts (379)

Instructure, maker of the Canvas (higher ed and K-12 markets) LMS and Bridge (corporate learning market) LMS, held their Initial Public Offering today. Prior to the IPO, Wall Street analysts focused on the company’s growth, its large losses, and the challenges of the education market. The company was priced on the lower end of its range ($16.00), and closed up 12.5% at $18.00.

This IPO and its lead up have been highly watched, particularly given the rapid growth in ed tech investments and questions on whether there are real businesses to emerge based on the investments. I had the opportunity to interview CEO Josh Coates today. What follows is an edited version of the interview, focusing mostly on how Instructure’s IPO will impact education markets and existing customers. I tried to capture as much of the Q&A verbatim as was feasible, but treat the answers below as a paraphrase.

Q. What are your impressions on how the IPO has gone so far?

A. The market in general has been a blood bath [ed. Dow down 585, or 3.3%, for the week], but we’re doing well so far. Given market conditions right now, we’re pleased as punch. We priced in range [ed. $16 – $18], and the market is responding well. We’re really focused as a company 6-12 months down the road, but it is nice to get this IPO feedback.

Q. The use of funds in your S-1 filing indicates more of the same but with additional war chest. Do I read this correctly to say that you do not plan to change your limited mergers and acquisition (M&A) approach? If it’s just more of same, what is the biggest impact existing customers should expect (besides Jared Stein showing up in black Chevy Suburban with an entourage)?

A. We have a bias against M&A other than very limited cases. You are right that we plan no change of strategy with our usage of the funds [ed. they raised $70 million with the IPO]. Honestly, customers should expect no real change other than that they can now dig into our financials.

Q. Some of your competitors have been suggesting that the consistent losses listed in your S-1 means that you will have to raise prices. How do you respond? Will you be able to make additional revenue from existing clients?

A. Our prices are at market levels and we intend to keep them at market. We have fundamentally strong business that works, and it’s a healthy business, so we won’t have to do anything unnatural. [ed. I pushed back that there must be pressure to make additional revenue and upsell to existing clients]. Our upsell approach right now includes getting customers to add premium support. But we are a software company. Customers should expect us to create new independent software products every 12 – 18 months. Some existing customers will use, some won’t. That’s the strategy – create new business by building great new software.

Q. What is the relative importance of growth in K-12 vs Higher Ed vs Corporate Learning for your model over the next two years?

A. [ed. Josh did not directly answer this one but offered the following observations.] We have four markets that we serve – domestic higher ed, domestic K-12, domestic corporate learning, and international markets. Right now our fastest growth is in corporate learning, but that product, Bridge, was just released in February. Just behind that market in terms of growth is domestic K-12, which is largely a green-field market; we’ve just gotten started. It’s interesting, but by customer count, domestic K-12 is our largest market. We have to do well and grow in all four markets.

Q. Do you have any plans you can share on how long you’ll be at the company?

A. I will stick around as long as board will keep me. I love the job, have a lot more work to do, and have no active plans to leave.

Q. How will your IPO affect the broader ed tech market?

A. Everything is connected. The effect will be generally positive, providing an example of a little VC-backed company that launched in 2011 and has become a healthy independent company. This is a good proof point that education and training markets can support this type of company and investment.

Q. When will you aggressively target non-English-speaking international markets?

A. Both Canvas & Bridge are in 15 languages, and Instructure has clients in 25 countries. We provide phone support English & Spanish and soon Portuguese. We’re adding offices in multiple international cities including in Brazil. But we’re doing this carefully. [ed. I pushed back that in my visit to Latin America, very few people in ed tech had any real knowledge of Instructure.] You’re right – we’re just at point of setting up legal entities in Latin America and have done no marketing. We’re in the early days.

Given the nature of an IPO and SEC rules, some of these answers are not very specific and are in good faith. We’ll keep this interview in mind here at e-Literate, however, to see if the answers are backed up by future action.

The post Interview with Josh Coates, CEO of Instructure, on today’s IPO appeared first on e-Literate.

Comparing Schemas Between Hive Clusters

Pythian Group - Fri, 2015-11-13 13:36


When running several different hive instances, we found the process of maintaining/confirming synchronization to be quite time consuming. While several tools made available by Cloudera and other frameworks do provide a visual interface for an individual instance of hive, there was no available tool for comparing across clusters. We came up with a useful way to compare hive schemas between clusters.

Our process contains 3 main steps:

  1. Fetch the schema from all hive environments, store the schema as a dictionary in a file.
  2. Create a class to perform comparisons leveraging python list, dict and pandas dataframe structures.
  3. Display the results/visualize the changes using a web interface (this approach uses python Flask).


Step 1- Fetch the Schema From all Environments
To fetch the schema we use the hive metastore api via the package.

First, we create a socket to connect to the hive thrift server.

# Imports from the thrift libraries and the thrift-generated hive metastore bindings
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from hive_metastore import ThriftHiveMetastore

transport = TSocket.TSocket(host, port)
transport = TTransport.TBufferedTransport(transport)
protocol = TBinaryProtocol.TBinaryProtocol(transport)

global metastore_client
metastore_client = ThriftHiveMetastore.Client(protocol)
transport.open()  # open the connection before making any metastore calls

Once that is done, it’s straightforward to get all of the tables in a db (there are lots of other great methods  that have been documented).
db = 'myDatabase'
tables = metastore_client.get_all_tables(db)

Once we get the tables, we can call the get_fields() method:
for table in tables:
    for field in metastore_client.get_fields(db, table):
        # field is a named tuple FieldSchema(comment, type, name)
        print field

Using the methods above, we retrieve all of the values and store them in a dictionary, locally in a text file, one file per environment.
{'myDatabase': {
    FieldSchema(comment=None, type='int', name='user_id'), FieldSchema(comment=None, type='string', name='first_name'), FieldSchema(comment=None, type='string', name='last_name'),
    FieldSchema(comment=None, type='int', name='order_id'), FieldSchema(comment=None, type='string', name='item_id'), FieldSchema(comment=None, type='string', name='price')
}}


Note that thrift gives us a tuple for the ddl, so for ease, we declare this named tuple locally.
FieldSchema = collections.namedtuple('FieldSchema', ['comment', 'type', 'name'])


Step 2 – Create Class to Compare Dictionaries

Now that we have a separate schema file for each hive instance, we can load these files into dictionaries and begin to compare them.

f_stage_dict = open('schema_files/stage_schema_thrift.out')
f_prod_dict = open('schema_files/prod_schema_thrift.out')
d_stage = eval(f_stage_dict.read())
d_prod = eval(f_prod_dict.read())

Create a class to compare the dictionaries. Comparisons are done leveraging a variety of nice python tools (lists, set comparison, pandas dataframes, etc..). More detail in the repo.

class DictCompare:
    def get_fields_by_table(self, db, table):
        # returns list of fields for tableA and tableB
        ...
    def get_databases(self):
        # union of all dbs in all schemas A,B
        ...
    def compare_fields(self, db, table):
        # returns tablediff tuple showing all field differences
        ...
    def compare_dbs(self, db):
        # returns list of differences between dbs
        tables_a = [table for table in self.dict_a[db]]
        tables_b = [table for table in self.dict_b[db]]
        db_diffs_a = set(tables_a).difference(set(tables_b))
        db_diffs_b = set(tables_b).difference(set(tables_a))
        ...
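The set-difference logic in compare_dbs can be illustrated with a tiny, self-contained example (the schema dictionaries and table names below are made up for demonstration; the real class loads them from the schema files):

```python
# Two hypothetical schema dictionaries, shaped {db: {table: [field names]}}
dict_a = {"my_db": {"users": ["user_id", "first_name", "last_name"],
                    "orders": ["order_id", "item_id", "price"]}}
dict_b = {"my_db": {"users": ["user_id", "first_name"],
                    "invoices": ["invoice_id"]}}

def compare_dbs(dict_a, dict_b, db):
    """Tables present in one environment but not the other."""
    tables_a = set(dict_a.get(db, {}))   # iterating a dict yields its keys
    tables_b = set(dict_b.get(db, {}))
    return tables_a - tables_b, tables_b - tables_a

only_a, only_b = compare_dbs(dict_a, dict_b, "my_db")
print(only_a, only_b)  # {'orders'} {'invoices'}
```

Field-level differences (like the missing last_name column on users above) are handled the same way, one level down.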

Instantiate the class by passing the path of the schema files, and the friendly names used in the config.json file.
dc = sc.DictCompare(schema_files_path, "dev", "stage")
dbs = dc.get_databases()

# show all of the databases..
print "DBS=",dbs

# get all diffs for a given database
compare_dict = dc.compare_dbs("my_db")


Step 3 Display the Results / Visualize the Changes

I used a quick and simple python flask website to display the differences. Note there is a form on the page that allows users to select the hive instances to compare.

def homepage():
    env_a = ...  # 'stage', for example (selected via the form on the page)
    env_b = ...  # 'prod', for example
    dc = sc.DictCompare(schema_files_path, env_a, env_b)
    dbs = dc.get_databases()
    return render_template("home.html", dbs=dbs, a_name=dc.a_name, b_name=dc.b_name, form=form)

Source code (open sourced under Apache2 license):

I look forward to any comments or feedback.


Discover more about our expertise in Big Data and Infrastructure technologies. 

Categories: DBA Blogs

Log Buffer #449: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-11-13 13:19


This Log Buffer Edition covers some of the niftiest blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • OBIEE 11g and Essbase – Faking Federation Using the GoURL.
  • You can use one of these to link an exception name with an Oracle error number. Once you have done this, you can use the exception name in the exception block which follows the declaration.
  • This is a short post to help out any “googlers” looking for an answer to why their EM Cloud Control install is failing in the make phase with
  • A short video that Jonathan Lewis did at the OTN lounge at RMOUG a couple of years ago has just been posted on YouTube. It’s about the improvements that appear in histograms in 12c.
  • Changing the name of the server or load-balancing server handling your BI Publisher workload for OEM can be done with a single EM CLI command.

SQL Server:

  • Manoj Pandey was going through some sample Scripts provided by Microsoft SQL Server team on their site, and was checking the JSON Sample Queries procedures views and indexes.sql script file.
  • When you open Excel 2016, go to the Power Pivot tab, click the Mange button to bring up Power Pivot window In Power Pivot, click Get External Data, choose From Other Sources Choose Microsoft Analysis Services.
  • After the introduction of the StringStoresCompatibilityLevel property in SSAS dimension after SQL Server 2012, SSAS database designer may try to create a MOLAP dimension with more unique values than what is allowed in previous SQL Server versions.
  • After SQL Server 2012 is released, SSISDB provides stored procedures to create SSIS package executions. There is one problem though.
  • This is the fourth installment in a blog series. The previous entry is located here Based on the previous blogs in this series, you should have gotten your database hosted in WASD by now & secured access to your server.


MySQL:

  •  finally runs MySQL 5.7 which was released on October 21.
  • What Does The Universal Scalability Law Reveal About MySQL?
  • There are many dimensions by which a DBMS can be better for small data workloads: performance, efficiency, manageability, usability and availability.
  • The new Mydumper 0.9.1 version, which includes many new features and bug fixes, is now available.
  • Nginx is well-known for its ability to act as a reverse-proxy with small memory footprint. It usually sits in the front-end web tier to redirect connections to available backend services, provided these passed some health checks.


Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

US-EU Safe Harbor Framework Invalidated – It’s time to Love Your Data!

Pythian Group - Fri, 2015-11-13 12:43

Pythian has been closely following the developments in the EU relating to the invalidation of the US-EU Safe Harbor Framework.

On Friday, November 6, 2015, the European Commission issued guidance on the cross-border data transfer issues. This guidance spoke more to “alternative transfer tools” available to legitimize data flows to jurisdictions deemed “not adequate,” like the United States.

If you do transfer data and rely on Safe Harbor principles, we recommend you:

Keep Calm
Don’t rush to other transfer mechanisms that may turn out to be less than ideal.

Love your data!
What personal data are you transferring outside the EU?
Where is it going?
What arrangements have you made to ensure that it is adequately protected?

Carry on and call Pythian
We can help you mitigate your risk of incidex* and prepare you for the pending EU regulation, the General Data Protection Regulation (GDPR), slated to become law by the end of 2015, which will mandate that all companies with EU customers follow the new rules, wherever they’re based, by 2018.

*Incidex: Today’s formula for total cost of ownership isn’t the traditional capex + opex + long-term expenses. Instead, there’s another more important cost component in today’s application landscape that Pythian refers to as “incidex”, or the cost of not investing in technologies that will secure and protect your revenue generating systems. If you redefine TCO as capex plus opex plus incidex, it’s a more holistic approach. And you can use the holistic TCO analysis to justify an investment in security technology.

Categories: DBA Blogs

Wanted: RDBMS superpower summary for app developers

Catherine Devlin - Fri, 2015-11-13 12:05

At last night's WWCode Cincinnati panel, I recommended that developers talk to their DBA about what advanced capabilities their RDBMS can offer, so that they don't end up reimplementing functionality in the app that is already available (better and more efficiently) in the database itself. Devs can waste a lot of effort by thinking of databases as dumb, inert data boxes.

I was asked an excellent question: "Where can a dev quickly familiarize herself with what those capabilities are?" My answer was, "Um."

Do not say they should read the docs. That is a "let them eat cake" answer. The PostgreSQL docs are over 2900 pages. That's not what they need.

Suggestions, folks? Python developers have built great summary sites, like the Hitchhiker's Guide to Python. What are the equivalents in the database world? Do they exist? Do we need to write them?

Update hinted for wrong index

Bobby Durrett's DBA Blog - Fri, 2015-11-13 10:52

I worked with our support team to improve the performance of a PeopleSoft Financials update statement yesterday. The update statement had an index hint already in it but the index was not the best one of the available indexes.

Here is the original update statement:

 BANK_CD = :2,

I listed out the columns for the indexes on the table using the “querytuning” part of my standard sql tuning scripts.

Here are the columns for the hinted index:


The where clause includes only the first two columns.

But another similar index, PSEPYMNT_VCHR_XREF, exists with these columns:


The where clause has all three of these columns. So, why was the original query hinted this way? Does the E index not work better than the C index? I ran this query to see how selective the condition PYMNT_SELCT_STATUS = ‘N’ is.

>select PYMNT_SELCT_STATUS,count(*)
 4 AND B.REMIT_VENDOR = '12345678'

- ----------
C          5
N        979
P     177343
X       5485
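The same selectivity check can be sketched against synthetic data that reproduces the counts above (an illustration only; the table and column layout here are simplified stand-ins for the real PeopleSoft objects, and the single REMIT_VENDOR predicate stands in for the full set of shared conditions):

```python
import sqlite3

# Synthetic stand-in for the selectivity check: load the same row counts the
# query above returned, keep the predicate the candidate indexes share
# (REMIT_VENDOR), and group by the extra column the E index adds.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pymnt_vchr_xref (remit_vendor TEXT, pymnt_selct_status TEXT)")
rows = ([("12345678", "C")] * 5 + [("12345678", "N")] * 979 +
        [("12345678", "P")] * 177343 + [("12345678", "X")] * 5485)
con.executemany("INSERT INTO pymnt_vchr_xref VALUES (?, ?)", rows)

counts = dict(con.execute(
    "SELECT pymnt_selct_status, COUNT(*) FROM pymnt_vchr_xref "
    "WHERE remit_vendor = '12345678' GROUP BY pymnt_selct_status"))
print(f"'N' rows: {counts['N']} of {sum(counts.values())}")  # → 'N' rows: 979 of 183812
```

An index whose columns cover PYMNT_SELCT_STATUS = 'N' lets the database visit only those 979 entries instead of filtering all ~183,000 rows after the range scan, which is exactly why the E index wins below.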

I included the conditions on the first two columns that both indexes share, but removed the other conditions from the original update. A count on the number of rows that meet the conditions of only these two columns shows how many rows the original index will have to use to check the remaining where clause conditions.

I grouped by PYMNT_SELCT_STATUS to see how many rows met the condition PYMNT_SELCT_STATUS = ‘N’ and how many did not; grouping this way shows how many rows the new index would use to check the remaining conditions in the where clause. I ran this query to see if the second index would use fewer rows than the first.

This query showed that only 979 of the over 180,000 rows met the condition. This made me think that the E index, which includes PYMNT_SELCT_STATUS, had a good chance of speeding up the original update. I ran a count with a hint forcing the C index and then again forcing the E index:

>set timing on
>select /*+ INDEX(B PSCPYMNT_VCHR_XREF) */ count(*)
 4 AND B.REMIT_VENDOR = '12345678'
 6 AND B.PYMNT_ID = ' '


Elapsed: 00:13:52.53
>select /*+ INDEX(B PSEPYMNT_VCHR_XREF) */ count(*)
 4 AND B.REMIT_VENDOR = '12345678'
 6 AND B.PYMNT_ID = ' '


Elapsed: 00:00:01.28

The original hint caused the select count(*) query to run in almost 14 minutes, while the new hint caused it to run in about 1 second. Clearly the new E index makes the query run faster!

The developer I was working with found the problem update statement in some PeopleCode and was able to edit the hint, forcing it to use the better index. We migrated the modified code to production, and the user was able to run the update statement without the web site timing out. Prior to the change, the user was not able to complete the update because the SQL statement took so long that it exceeded our application server timeout.



Categories: DBA Blogs

Questions Asked, Answered from Webcast: 3 Ways to Power Collaboration and Mobility

WebCenter Team - Fri, 2015-11-13 06:00

Thanks to everyone who joined us earlier this week on our live webcast “Three Ways to Power Digital Workplace Collaboration and Mobility”. For those who missed it or are interested in catching the replay, the on-demand recording is now available. You will also find a copy of the presentation for download there.

We also thank you for your questions and interest. Due to limited time, we were able to provide real-time responses to most, but not all, questions. So, for good measure, I have captured relevant questions and answers from the live webcast here. Please do keep the communication going. Feel free to post your comments and questions here, start your free trial at: and visit us at for more information on the solutions and potential next steps for you. We are committed to making your cloud journey easier and more meaningful.

Can Documents Cloud be used with

Yes. Oracle Documents Cloud comes with REST APIs that allow you to easily integrate with or any other CRM, ERP, on premises or SaaS applications and other systems.

Do we need to buy Oracle Mobile Cloud Service to enable mobile use of Process Cloud? Same question for Documents Cloud?

Both Oracle Documents Cloud and Oracle Process Cloud come with native mobile applications that can be easily installed from your mobile store. The great advantage of these Oracle Platform as a Service (PaaS) solutions is that they easily plug and play together. So if, for example, you are looking to build a custom mobile app for sales reps or to serve a specific function, you can easily build the mobile app with Oracle Mobile Cloud Service, drive content with Documents Cloud, and drive processes using Oracle Process Cloud.

How do you maintain data segregation for security and compliance?

Security is a key tenet of Oracle PaaS solutions. Oracle Documents Cloud supports data and user segregation. For example, you can have a public instance and a separate private instance to support your security and corporate policy compliance. You can even segregate by user type. Plus, you have granular privilege controls for folder sharing, link sharing, link expiry and such to enforce data and information security.

What is the benefit of using Process Cloud over Oracle BPM?

Oracle Process Cloud offers you a subscription-based business process management service in the cloud. Designed for business users, it allows for rapid process design and automation from any device without reliance on IT. One of the major benefits of using Oracle Process Cloud is that Oracle handles the installation and uptime. Other benefits include easy integration with other Oracle Cloud services (including Oracle Documents Cloud, Oracle Mobile Cloud, etc.) as well as other Oracle and third-party on-premises or cloud systems.

Are Documents Cloud and Process Cloud integrated together?

Many of our customers, including Mythics, are indeed using these two solutions together to enhance their process automation and document collaboration activities. This means that processes can be kicked-off during content/document collaboration and in turn, you can attach a document to a process. In addition, both services have great REST APIs which allow users to drive their own integration with existing solutions and applications.

Is Documents Cloud public or private/on premise?

Oracle Documents Cloud is a service available in Oracle public cloud. You can see the related services on Given Oracle’s pedigree in on-premises content management with Oracle WebCenter Content, we are one of the leading providers to support hybrid, public and on-premises content management and sharing.

What branch of Oracle does the Documents Cloud Service fall under- for example, Customer Experience, Business Intelligence, or Enterprise Resource Planning?

Oracle Documents Cloud Service is part of Oracle Cloud Platform for content and process. Oracle has the distinct advantage of leveraging resources, learnings and R&D from our on-premises Oracle WebCenter and Oracle BPM solutions as foundational elements to provide you the most comprehensive, holistic and proven cloud solutions. For more information, please visit

Mythics – are you doing document sharing as part of the process with external parties too (legal, for example)?

Currently, our process is internal only. However, we often communicate with external parties, and increasingly we are recommending the Documents Cloud Service to do this. This is a shift in thinking, though: many people are used to sharing files via email, and it will take a while before this changes.

This is for Mythics – who is doing process modeling in the company? Are you involving IT or business analysts?

Mythics is an IT solutions company, so our business analyst functions overlap with IT. To compare with other industries, we are involving both sides of the company in process modeling.

Does Oracle Documents Cloud Services come with a document viewer?

Yes. Oracle Documents Cloud Service does have a document viewer that allows proper rendering of all document types on any device. In addition, the Oracle Documents Presenter app allows you to present curated content directly off your tablet. Sales reps find the Presenter app particularly useful for impactful, visually appealing presentations right off their iPads and other tablets.

Does the document viewer/Documents Presenter support annotations capabilities for collaborations among users?

Yes. You can make and view annotations from Oracle Documents Cloud web application.



Will Sites Cloud be integrated with Documents and Process Cloud Services?

Quite like Oracle’s other PaaS services, including Oracle Documents Cloud Service and Oracle Process Cloud Service, Oracle Sites Cloud will support integration with other Oracle PaaS services. In this webcast, for example, you saw how community sites can benefit from pulling content, conversations and processes all in one place, all in context. You will continue to see more of this from Oracle in the coming weeks and months. Take a look at Mythics’ blog post on early impressions of Oracle Sites Cloud Service, coming out of OOW:

Mythics – how do you see public institutions using Process and Documents cloud given the compliance requirements?

I recently attended Oracle OpenWorld and noted several public sector organizations’ interest in and adoption of Platform as a Service cloud tools. Process Cloud and Documents Cloud are industry-agnostic tools and can be used in different business areas. Specific to compliance requirements, not all public sector institutions have the same requirements - some will require strict FedRAMP controls, where others have specific data security requirements. Oracle has made tremendous investments in data security across all solutions, and these Cloud Platform services are no different. I also understand that Oracle is currently working to make all PaaS tools available in their government cloud data centers.

Can we try Oracle Documents Cloud Service today?

Yes. You can start your free trial of Oracle Documents Cloud Service today. Simply go to to register for the free trial.


Confused about Oracle Identity Management : See me #Sangam15 Hyderabad on 21/22 Nov 2015

Online Apps DBA - Fri, 2015-11-13 05:36

Are you confused by the many Oracle Identity & Access Management products (OAM, OIM, OID, OVD, OAG and the list goes on…)?

Do you understand Identity Management jargon like Authentication, Authorization (fine-grained vs coarse-grained), Provisioning, Reconciliation, and SoD?

Wish you could automatically create users and centrally assign Roles/Responsibilities in Oracle E-Business Suite?

Interested in enhancing your skills and see my Journey from DBA/Apps DBA to Author of book on Oracle Identity & Access Management ?




I am presenting two papers on Oracle Identity & Access Management at #Sangam15 on 22nd Nov 2015 in Hyderabad, India.



The first presentation, at 10 AM on 22nd Nov (Sunday), covers the various Oracle Identity & Access Management products and the basic functionality of each.


The second presentation, at 2:40 PM on 22nd Nov (Sunday), covers Oracle Identity Management and its integration with Oracle E-Business Suite. This session also covers basic concepts like Provisioning, Reconciliation, and User, Role, and Responsibility Management in Oracle E-Business Suite Apps R12/11i using OIM.


Seats are limited for my session at #Sangam15 so register for these two sessions early to avoid disappointment.

1. Overview of Oracle IAM, covering terminology and all the products that are part of Oracle IAM, like OAM, OIM, OID, OVD, OIF, OAG, OWSM and more.



2. Integrate Oracle E-Business Suite with OIM for User, Role and Responsibility Management




Special bonus: those who attend the presentations will get my “30 Day Apps DBA Challenge” absolutely FREE (more details during the session).


Did you subscribe to our private Facebook group and YouTube channel?

The post Confused about Oracle Identity Management : See me #Sangam15 Hyderabad on 21/22 Nov 2015 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle JET and ADF Faces Integration in ADF

Andrejus Baranovski - Thu, 2015-11-12 22:49
Oracle JET provides a set of UI components based on a combination of HTML and JavaScript. ADF Faces is an Ajax-enabled rich JavaServer Faces component framework that uses JavaScript to render client-side components, implement rich component functionality, validate user input and convert user data input. Although it is not officially documented, Oracle JET components can obviously be integrated into JSF pages implemented with ADF Faces. I think such integration provides many benefits to ADF Faces. Use cases where performance is critically important can be implemented with Oracle JET client components, and rich functionality can be implemented with ADF Faces, all in the same ADF application. I probably would not create dependencies between Oracle JET and ADF Faces components rendered on the same page: Oracle JET should render data coming from REST services, while ADF Faces should work with data obtained from the ADF Bindings layer.

I have managed to implement an Oracle JET component and an ADF Faces UI in the same physical ADF page. The Oracle JET group renders an input text field; the user types text and this text appears in the output below (the use case is described here - Getting Started Quickly without the JET QuickStart). The ADF Faces group renders a regular ADF data entry form with navigation and save/cancel buttons. It all works together on the same page:

Download the sample application, implemented with JDeveloper 12.2.1. I did not delete the Oracle JET libraries from it, so you can download and run it directly. The download size is a drawback: Oracle JET is part of the application and occupies around 11 MB.

Some text is entered into the Oracle JET field; on Enter it is copied into the output below:

ADF also works as expected; for example, I can change the Salary value and commit changes with the Save button:

ADF validation also works fine: I remove the value for a mandatory field and try to save, and the required-field validation message pops up:

Take a look at the ADF application structure with the Oracle JET elements inside. Basically I just copied the Oracle JET files into the ADF ViewController public_html folder:

There is an Oracle JET JavaScript file where a function is implemented to handle the text entry and output update - a pretty simple one:

Here comes the most important part: the ADF page. Instead of using an HTML page to host Oracle JET, I'm using a regular ADF page based on JSF and ADF Faces. The Oracle JET CSS and the RequireJS script for Oracle JET are defined at the top of the page, just above the form; this ensures correct initialisation of Oracle JET:

The Oracle JET UI (input text together with output text) is defined inside an ADF Faces Panel Header with an HTML div. This is how Oracle JET is rendered inside the ADF page, using HTML:

Stay tuned and I will be posting more use cases on Oracle JET integration topic.

Oracle OpenWorld 2015 Recap – The Forecast Includes Clouds and a Better WebCenter User Experience

Oracle OpenWorld 2015 wrapped up two weeks ago. For those of you unable to attend, it was sunny all week with temperatures in the 70s, yet clouds were everywhere. My apologies for the pun, but you could have easily missed the blue skies and bright sun, as The Cloud was really everywhere. Attendees even had the chance to immerse themselves in the cloud with an interactive display consisting of a few thousand plastic balls you could jump into and roll around in.

Sadly I wasn’t able to partake in this fun, but I was able to attend most of the keynotes, so I got pretty well immersed myself. The Cloud was the theme of the keynotes, with Oracle making it very clear that the future of their applications, middleware, and database technologies is in the cloud - as part of their software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) offerings. In fact, the future is really now, as Oracle announced they have over 1,300 customers already using Oracle ERP Cloud.

All of this is very exciting from an IT and technology perspective, as the cloud makes it easier to innovate and deploy home-grown or 3rd-party applications across business functions. But what about Oracle WebCenter? Can that innovation and deployment flexibility be extended to Oracle WebCenter in the cloud? Not yet, or not entirely. WebCenter is currently in a hybrid mode when it comes to the cloud: customers will continue to manage their content on premises, but use Oracle Documents Cloud Service for file syncing and sharing in the cloud. Despite this, Oracle did have some exciting announcements regarding Oracle WebCenter 12c, which was released on October 26th. Here are some highlights:

  • User Experience (UX) is a major priority for Oracle WebCenter. UX has been a problem area for WebCenter for many years, but a lot of enhancements have been made that customers should be excited about. I saw a lot of these demonstrated during the WebCenter Portal Strategy and Vision session, and I came away impressed with the improvements they have made with contribution, publishing, and mobility.

    Specific examples include a new inline editor (WYSIWYG) entirely written in Oracle’s Application Development Framework (ADF), which will make it easier for contributors to add content to the portal in-context of the page. Overall, the page composer is just more intuitive, and the click stream to add, edit, and publish content on the portal was easy to follow during the demonstration. Lastly, the page templates are now mobile responsive meaning they will adjust for the form factor of the device (desktops, tablets, and smartphones) being used to view the portal.

  • WebCenter Content also received some user interface improvements. This includes a new image viewer featuring annotations so users can mark up images. And speaking of images, the imaging server has been merged with the content server.
  • And for the front-end developers out there, Oracle was heavily promoting their new JavaScript framework called JET. If you like to use frameworks like bootstrap you should check out JET’s component library. JET will be Oracle’s framework of choice for the cloud and its applications going forward.

Fishbowl Solutions will be sharing more about what we learned regarding Oracle WebCenter 12c, Oracle’s cloud offerings, and Oracle’s front-end design and development technologies during a webinar in early December. Check back to this blog or the Fishbowl website for more details soon.

More information on Oracle WebCenter 12c can be found on the Oracle WebCenter blog and within this press release.

For more information on Oracle’s cloud offerings, see this Forbes article.

The post Oracle OpenWorld 2015 Recap – The Forecast Includes Clouds and a Better WebCenter User Experience appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Data To Back Up Concerns Of Textbook Expenditures By First-Generation Students

Michael Feldstein - Thu, 2015-11-12 15:49

By Phil Hill

David Wiley has added to the conversation[1] over the use of data on college textbook pricing and student spending patterns with “The Practical Cost of Textbooks”. His key argument is to go beyond prices and spending and look at the most direct measure: asking students themselves how textbook costs have affected them. He then looks at the Florida Virtual Campus surveys (also included in my post), concluding:

What impact does the cost of textbooks have on students? Textbook costs cause students to occasionally or frequently take fewer courses (35% of students), to drop or withdraw from courses (24%), and to earn either poor or failing grades (26%). Regardless of whether you have historically preferred the College Board number or the student survey number, a third fact that is beyond dispute is that surveys of students indicate that the cost of textbooks negatively impacts their learning (grades) and negatively impacts their time to graduation (drops, withdraws, and credits).

And yes, we need to do something about it.

Amen. Surveying over 18,000 students, the FVC surveys are quite important and should be on everyone’s radar.

More Out Of Data

I strongly feel that this type of discussion (as well as media quotes, policy, and legislation) should use the best data available, describe that data accurately, and ask for more data where there are holes. And it often takes multiple views into multiple data sources to get a complete picture. For the College Board / IPEDS data, it is indisputable what it says, but descriptions of the source should note that it comes from financial aid offices and not from student spending or budgets. For the student spending data (Student Monitor, NACS, Cal State), the data is solid and accurately described (that I have seen), but as reported they are missing segmentation across key student demographics.

In Mike Caulfield’s post he argues that it’s all well and good that students are creatively reducing their textbook expenses by renting, borrowing, etc, but that this data can mask other important factors.

You could use skills like this to cobble together that bag of books and say *that’s* the true cost. Look what you can get your books for if you game the system right!

But to say that, you’d have to have learned nothing in the past decade about why students fail. Requiring a non-traditional student to cobble together a bag of half-priced textbooks the way a second-generation student might is setting them up for failure.

Since I have argued that the student spending data is critical and much more relevant than the College Board data, let’s explore that data further – asking for more. Richard Hershman and NACS were kind enough to agree to requests for cross-tabs on two of their questions against sector and first-generation status.

For Fall 2014, students were asked how much they spent on required course materials.

Spending_and_Cost_Concerns 1

It turns out that Mike is right. First-generation students spend 10% more, acquire 6% fewer textbooks, and end up paying 17% more per textbook than non-first-generation students. This data could be used as a starting point for policy that addresses this problem - explaining the problem and providing a measurement for improvement.
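As a quick sanity check (a sketch that treats the survey figures as ratios relative to non-first-generation students), the three percentages are mutually consistent:

```python
# Treating the survey percentages as ratios vs. non-first-generation students:
spend_ratio = 1.10   # spend 10% more in total
count_ratio = 0.94   # acquire 6% fewer textbooks
per_book_ratio = spend_ratio / count_ratio  # implied cost per textbook acquired
print(round((per_book_ratio - 1) * 100, 1))  # → 17.0
```

That is, 10% more total spending spread over 6% fewer books works out to roughly 17% more per book, matching the cross-tab.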

For Spring 2015, students were asked to list their top 3 cost concerns for college.

Spending_and_Cost_Concerns 3

First-generation students are somewhat more likely to worry about course materials (41% to 37%) than non-first-generation students, but the view across sectors is more telling. 2-year college students are much more likely to worry about course materials (50% to 37%) than 4-year college students. Tuition is lower at 2-year schools, and fewer students live on campus or away from home, so it makes sense that course material concerns would increase in relative terms (% listing them in top 3 concerns). It also makes sense that car payments / insurance / gas would be more important.

These are real issues of equity, and accurate understanding of multiple data sources is more likely to lead to effective policy decisions than using single sources. And you get that data by exploring it further and seeing where it takes you.

  1. My initial post, Mike Caulfield’s response, Bracken Mosbacker, my response to Mike, Mike’s follow-up

The post Data To Back Up Concerns Of Textbook Expenditures By First-Generation Students appeared first on e-Literate.

Oracle Priority Support Infogram for 12-NOV-2015

Oracle Infogram - Thu, 2015-11-12 15:22

Another OpenWorld is in the books, and now it’s time to absorb and pursue the material covered. Here’s a good place to get started: Continue Learning Post-Oracle OpenWorld
Switch off "_rowsets_enabled" in Oracle Database 12c, from Upgrade your Database - NOW!
The complete guide to tuning the appearance of NetBeans, from WebLogic Partner Community EMEA.
From the same source:  Showing HashMap values in af:table using af:iterator
Security Alert CVE-2015-4852 was released on November 10th, 2015.

This vulnerability, which involves the Apache Commons libraries and Oracle WebLogic Server, has received a CVSS Base Score of 7.5.

Due to the severity of CVE-2015-4852, Oracle strongly recommends applying mitigation steps and patches as soon as available.

The Security Alert Advisory for CVE-2015-4852 is the starting point for relevant information. This Security Alert provides mitigation recommendations to be implemented while awaiting the release of Oracle WebLogic Server patches. It includes links to other important documents that provide a list of affected products and the patch availability information. It is essential to review the Security Alert supporting documentation referenced in the Advisory before applying patches or mitigation instructions.

The Security Alert Advisory is available at the following location:

Mitigation instructions are available at:

WebLogic Server Patch Availability information will be updated at:

All Oracle Critical Patch Updates and Security Alerts are available on the Oracle Technology
Network at:

SPARC and Solaris
Virtual HBA in Oracle VM Server for SPARC, from Virtually All The Time.
Updating an Oracle Solaris 8 system, from Ops Center.
SPARC M7 Software In Silicon - Useful Webinar, from Notes from the Main Sequence.
Last login tracking in pam_unix_session, from The Observatory.
Fusion Middleware 12c – Selective tracing, from SOA & BPM Partner Community Blog.
When is the next Java update?, from Java Platform Group, Product Management blog.
NetBeans IDE 8.1 Plugin: Entity Expander, from Geertjan’s Blog.
Oracle Retail
Announcing the New Oracle Retail Reference Library Repository, from Oracle Retail Documentation.
From the Oracle E-Business Suite Support blog:
Webcast: EAM 12.2.5 Enhancements, Nov 19, 2015 9:00 AM MT
Webcast: Item Orderability Functionality in OM, Nov 17, 2015 9:00 AM MT
Webcast: Actual Costing and Create Accounting In OPM, Nov 16, 2015 9:00 AM MT
Do you Know My Oracle Support Communities?
1099 Tax Reporting Patches for 2015 Are Now Available
From the Oracle E-Business Suite Technology blog:

Quarterly EBS Upgrade Recommendations: November 2015 Edition

OBIEE 11g and Essbase – Faking Federation Using the GoURL

Rittman Mead Consulting - Thu, 2015-11-12 14:56

This blog is going to address what happens when we can’t take advantage of the Admin tool’s powerful vertical federation capabilities when integrating relational stars and Essbase cubes. In the Admin tool, synonymously referred to as the RPD, vertical federation is the process of integrating an aggregate data source, in this case Essbase, with a detail-level source from a data mart. This technique not only has the ability to increase query efficiency and decrease query time, it also has the added benefit of bringing together two powerful and dynamic reporting tools. But like most things, there is a pretty big caveat to this approach. Before I jump into what that is, some housework.

First, let’s make sure things don’t get lost in translation when going back and forth between Essbase and OBIEE jargon. In Essbase speak, dimensions can be thought of as tables in a relational structure, generations as the columns in each table, and members as the values in each column.

Housework done; now the caveat. Often, dimensions in Essbase cubes are built in such a way as to not neatly support federation; that is, they are arranged so as to have an uneven number of generations relative to their corresponding relational dimension. It should be noted that while federation is possible with a ragged hierarchical structure, it can get messy, essentially ending up in a final product that doesn’t really look like something an Essbase-centric user community would readily and eagerly adopt.

So what, then, can we do when federation is out of the question? Let’s frame the solution in the form of a not-atypical client scenario. Say we’ve got a requirement from a large financial institution client to bring together the Essbase cubes they’ve used thus far for their standardized reporting, i.e. balance sheets, income statements and the like, with their relational source, in order to drill to the account detail behind the numbers they’re seeing on said reports. They’ve got a pretty large user base that’s fairly entrenched and happy with Smart View and Excel for getting what they want from their cubes. And why shouldn’t they be? OBIEE simply can’t support this level of functionality when reporting on an Essbase source, in most cases. So, in addition to these pretty big user-adoption barriers to an OBIEE solution, we’ve got technology limitations to contend with. What are our options, then, when faced with this dilemma? How can we wow these skeptical users with near-seamless functionality between sources? The secret lies with URL Action Links! And while this solution is great for going from summary-level data in Essbase to its relational counterpart, it is also a great way to simply pass values from one subject area to another. There are definitely some tricks to setting this up, but more on those later. Read on.

The Scenario

In order to best demonstrate this solution, let’s set up a dashboard with two pages, one for each report, and a corresponding dashboard prompt. The primary source report, out of Essbase, will be something that could easily resemble a typical financial report, if not at least in structure. From this high-level chart, or similar summary-level analysis, we’ll be able to drill to a detail report, out of a relational source, to identify the drivers behind any figures present on the analysis. In this example, we’re going to use the Sample App Sample Essbase subject area to go to the equivalent relational area, Sample Sales. Yes, you could federate these two, as they’ve done in Sample App; however, they’ll serve well to demonstrate how the following concept could work for financial reporting against ragged or parent-child structures. Values for Product Type, in the following instance, could just as well be the descendants or children of a specific account. Also, there is no equivalent relational subject area to use for the Sample App Essbase GL subject area. In the example below, we have a summary, month-level pivot table giving us a monthly sales trend. The user in the following example can prompt on Year and Customer Segment through a dashboard prompt, but as you’ll see, this could easily be any number of prompts for your given scenario.

Monthly Trend Summary:

Solution 1:

In the sales trend example above, we are going to enable our user to click on a value for a revenue figure and then navigate to a detail report that shows products sold for the month by date. Again, this all must be done while passing any chosen parameters from both the dashboard prompt and analysis along to the detail analysis.

Proof of Concept

First, let’s start with the guts of the report example above. As you can see, there is quite a bit more under the hood than meets the eye. Let’s go over the approach piece by piece to help build a more thorough understanding of the method.

Step 1: Include the Columns!

The idea here is that we want to pass all dimensional information associated with the revenue figure we pick to a detail-level report that will be filtered on the set of parameters at the chosen intersection. We can hide these columns later, so your report won't be a mess. I'll add here that you might want to set any prompted values equal to the presentation variable on its respective dashboard prompt, with a default value set, as seen below. This will help make the report digestible on the compound layout. The following picture shows the prompted values that drive our summary report on Year and Customer Segment. You can do this in the filters pane on the criteria tab with the following syntax:


                            All column values we want to pass need to be represented on the report:


                           Values that will be passed to detail report (in this case, BizTech, Communication, Active Singles, 2012, and 2012 / 11):

Step 2: More Columns!

In addition to the columns that comprise the report, we need to add a second iteration of every column already on the report. In the pictures above, these are the columns titled with the 'URL' prefix. In the column editor, concatenate quotes onto the column values by attaching the following string (a single quote, followed by a double quote, followed by another single quote, with NO spaces between them):

' " ' || "Table"."Column Name" || ' " '

While this step may seem superfluous, you'll see a bit later that it is necessary to successfully pass our column values through our URL Action Links; a Year value of 2012, for example, becomes "2012", quoted as the URL will require. After you've created the custom columns, just group each one with its counterpart in the report, as in the pictures above.

Step 3: An Approach to Handling Hierarchies

In the previous pictures, you can see the Products hierarchy that comprises the rows of the report. To pass any value from the hierarchy, along with its members, we also have to include its respective generations in the rows. For our example, we're going to use Brand, LOB, and Product Type. This way, a user can select any sales value and have all three of these values passed as filter parameters to the detail analysis through a URL. You'll notice that we haven't given these columns a quoted counterpart as we did previously. This is quite on purpose, as we'll see later; these columns will provide another example of how to pass values without having to implement a second column just to wrap the value in quotes.


When first placing the hierarchy on your analysis and expanding it to where you'd like it for the sake of the report, you can simply select all the column values, right-click, and select 'Keep Only'. This establishes a selection step under the Products Hierarchy that ensures the report always opens to the specified structure from now on. So, that's good for now; let's get to the magic of this approach.


Step 4. Set up the Action Link

In this case, we're going to 'drill' off of the Sales column in our table, but we could really 'drill' off of anything, as you'll see. So, pop open the Interaction tab for the column and select Action Links as the primary interaction. Edit it as follows (see the URL procedure below). It used to be that we could do this via the 'P' parameters; however, that method seems to be mostly deprecated in favor of the col/val method we'll utilize below.

URL Procedure – Server URL*
Portal&Path=@{1} – path to dashboard
&Page=@{2} – dashboard page
&Action=@{3} – action to perform, in this case navigate (there are others)
&col1=@{4} – column from target analysis we wish to manipulate (our sales detail analysis)
&val1=@{5} – column from source analysis with which we are going to pass a filter parameter to target
&val4="@{11}" – will discuss these quoted parameters later on

*Note that this value can be made into a variable in order to be moved to different environments (DEV/TEST, etc…) while maintaining link integrity

The picture above details how to set up the URL link as described above. The col1 value is the column from the target analysis we want to filter using the value (val1) from our source. Be sure to qualify this column from the subject area from which it originates, in this case “A – Sample Sales”.

Ex: “A – Sample Sales”.”Time”.”T05 Per Name Year”

Val1 (these parameters exist in 'sets') is the column from our source analysis we want to use to filter the target analysis. This is where our custom, quoted columns come into play. Instead of using the original column from our analysis, we're going to use its quoted counterpart. This ensures that any values passed through the URL are enclosed in quotes, as is required by the URL. Note that we're not using a value parameter in this case, but a column instead (the dropdown to the left of the text box).

Ex: ‘ ” ‘ || “Time”.”T05 Per Name Year” || ‘ ” ‘

You can proceed this way to pass as many values as you'd like to your detail analysis with this coln/valn method. Again, just be sure that your columns are included in the source analysis, or the values won't get carried over. Once you've got all your columns and values set up, enter them into the URL field in the Edit Action dialogue box, as above. Make sure you reference your variables using the proper syntax (similar to a presentation variable, with an @ sign):

Ex: col1=@{4} – ‘4’ being the variable name (note that these can be named most anything)
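Pieced together, a finished GoURL might look something like the following sketch. The host, port, PortalPath, and page name here are hypothetical placeholders; substitute your own environment's values:

```
http://obiee-host:9704/analytics/saw.dll?Dashboard
    &PortalPath=/shared/Sample/_portal/Sales Summary
    &Page=Sales Detail
    &Action=Navigate
    &col1="A - Sample Sales"."Time"."T05 Per Name Year"
    &val1="2012"
```

The URL is broken across lines here for readability only; in the Edit Action dialogue it is a single line, and any spaces in values must be URL-encoded (a space becomes %20).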

Quoting Parameters

As an alternative to including an extra iteration of each column just to pass quoted column values, we can instead put quotes around the parameter in the URL itself, as in the example above. The limitation of this method, however, is that you can only pass a single value (a Year, for example). In later posts, we'll address how to handle passing multiple values, as you might through a dashboard prompt.

Step 5. Set Up the Detail Analysis

For our detail analysis, we're going to set things up in much the same way as our summary: we need to include on the target report the columns we want to filter on. Unfortunately, our target report won't pick up the passed values as filters, the way a filter on the filters pane would, unless the columns are included on the actual analysis. Again, any columns we don't want visible to a user can be hidden. Below, we simply want to see Calendar Date, Product, and Revenue, but filtered by all of our source analysis columns.

In the criteria tab for our target, detail analysis, we also need to set any filtered columns to 'is prompted'. This ensures that our target analysis listens for any filter parameters passed through the URL from our source, summary analysis. As a last step, we must again fully qualify our filters, as in the picture below.

This picture shows our Year ‘is prompted’ filter on our target, detail analysis. Note that this column is also a column, albeit hidden, on this report as well. This will act as a filter on the analysis. It is being ‘prompted’ not by a dashboard prompt, in this instance, but by our source, summary analysis.

Step 6. Testing it All Out

Now that we've got all the pieces of the puzzle together, let's see if it works! To QA this thing, let's put a filter object on the target, detail analysis to make sure the report is picking up the values passed. If we click on a sales value, we should be taken to the target analysis and see that all the parameters we set up were passed. The picture below confirms this!


Hopefully this can be one more trick to keep in the tool belt when faced with a similar scenario. If you have any hiccups in your implementation of this solution or other questions, please feel free to respond to this post. Stay tuned for additional articles related to this topic that go much more in depth. How do you handle passing multiple column values? How do I keep my report query time low with all those extra columns? How do I pass values using the presentation variable syntax? Can I use the Evaluate function to extract the descendants of a filtered column?



The post OBIEE 11g and Essbase – Faking Federation Using the GoURL appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Little Things Doth Crabby Make – Part XVIV: Enterprise Manager 12c Cloud Control Install Problem.

Kevin Closson - Thu, 2015-11-12 14:42

This is a short post to help out any possible “googlers” looking for an answer to why their EM Cloud Control install is failing in the make phase with a MakefileException error.

Note, this EM install was taking place on an Oracle Linux 7.1 host.

The following snippet shows the text that was displayed in the dialogue box when the error was hit:

INFO: 11/12/15 12:10:37 PM PST: ----------------------------------
INFO: 11/12/15 12:10:37 PM PST: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'install' of makefile '/home/oracle/app/oracle/oms12cr5/Oracle_WT/webcache/lib/'. See '/home/oracle/oraInventory/logs/cloneActions2015-11-12_12-10-18-PM.log' for details.
Exception Severity: 1
INFO: 11/12/15 12:10:37 PM PST: POPUP WARNING:Error in invoking target 'install' of makefile '/home/oracle/app/oracle/oms12cr5/Oracle_WT/webcache/lib/'. See '/home/oracle/oraInventory/logs/cloneActions2015-11-12_12-10-18-PM.log' for details.

Click "Retry" to try again.
Click "Ignore" to ignore this error and go on.
Click "Cancel" to stop this installation.
INFO: 11/12/15 12:20:14 PM PST: The output of this make operation is also available at: '/home/oracle/app/oracle/oms12cr5/Oracle_WT/install/make.log'

The following shows the simple fix:

$ diff ./app/oracle/oms12cr5/Oracle_WT/lib/sysliblist.orig ./app/oracle/oms12cr5/Oracle_WT/lib/sysliblist
< -ldl -lm -lpthread -lnsl -lirc -lipgo
---
> -ldl -lm -lpthread -lnsl -lirc -lipgo -ldms2
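For anyone who prefers to script the same change, here is a minimal sketch. It assumes the Oracle_WT path shown in the diff above; the two file-creation lines only simulate the shipped file for demonstration and should be removed when running against a real install:

```shell
# Path taken from the diff above -- adjust the Oracle_WT home for your install.
SYSLIBLIST=./app/oracle/oms12cr5/Oracle_WT/lib/sysliblist

# Demonstration only: recreate the file as it ships.
# Remove these two lines when running against a real install.
mkdir -p "$(dirname "$SYSLIBLIST")"
printf '%s\n' '-ldl -lm -lpthread -lnsl -lirc -lipgo' > "$SYSLIBLIST"

# Keep a backup, then append -ldms2 to the link-library list.
cp "$SYSLIBLIST" "$SYSLIBLIST.orig"
sed 's/-lipgo$/-lipgo -ldms2/' "$SYSLIBLIST.orig" > "$SYSLIBLIST"

cat "$SYSLIBLIST"
```

After the edit, clicking "Retry" in the installer dialogue should let the make step complete.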

So if this error hath made at least one googler less crabby I’ll consider this installment in the Little Things Doth Crabby Make series all worth it.

Filed under: oracle


Jonathan Lewis - Thu, 2015-11-12 14:01

A short video that I did at the OTN lounge at RMOUG a couple of years ago has just been posted on YouTube. It’s about the improvements that appear in histograms in 12c. I’ll move this link into a more suitable posting in the near future.