
Feed aggregator

Reverse Key

Jonathan Lewis - Wed, 2015-06-17 06:11

A question came up on the OTN database forum recently asking if you could have a partitioned index on a non-partitioned table.

(Aside: I’m not sure whether it would be quicker to read the manuals or try the experiment – either would probably be quicker than posing the question to the forum. As so often happens in these RTFM questions the OP didn’t bother to acknowledge any of the responses)

The answer to the question is yes – you can create a globally partitioned index, though if it uses range partitioning you have to specify a MAXVALUE partition. The interesting thing about the question, though, is that several people tried to guess why it had been asked and then made suggestions based on the most likely guess (and wouldn’t it have been nice to see some response from the OP). The common guess was that there was a performance problem with the high-value block of a sequence-based (or time-based) index – a frequent source of “buffer busy wait” events and other nasty side effects.
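For illustration only – the table and index names below are hypothetical, not taken from the OP’s question – a globally partitioned index on a non-partitioned table looks like this. Note the mandatory MAXVALUE partition in the range-partitioned case, and the hash-partitioned alternative discussed below:

create table t(id number not null, padding varchar2(100));

-- global range-partitioned index: the top partition must be bounded by maxvalue
create index t_range_idx on t(id) global
partition by range(id) (
	partition p1   values less than (1000000),
	partition p2   values less than (2000000),
	partition pmax values less than (maxvalue)
);

-- global hash-partitioned index (2^N partitions for some N)
create index t_hash_idx on t(padding) global
partition by hash(padding) partitions 8;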

Unfortunately too many people suggested reverse key indexes as a solution to this “right-hand” problem. If you’re licensed for partitioning it’s almost certain that a better option would simply be to use global hash partitioning (with 2^N partitions for some N). Using reverse keys can result in a bigger performance problem than the one you’re trying to avoid – you may end up turning a little time spent on buffer busy waits into a large amount of time spent on db file sequential reads. To demonstrate the issue I’ve created a sample script – and adjusted my buffer cache down to the appropriate scale:

create table t1(
	id	not null
)
nologging
as
with generator as (
	select	--+ materialize
		rownum id 
	from dual 
	connect by 
		rownum <= 1e4
)
select
	1e7 + rownum	id
from
	generator	v1,
	generator	v2
where
	rownum <= 1e7 
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1'
	);
end;
/

alter table t1 add constraint t1_pk primary key(id) 
using index 
	reverse 
	nologging 
;

alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 8';

begin
	for i in 20000001..20010000 loop
		insert into t1 values(i);
	end loop;
end;
/

I’ve created a table with 10,000,000 rows using a sequential value as the primary key, then inserted “the next” 10,000 rows into the table in order. The index occupied about 22,000 blocks, so to make my demonstration show you the type of effect you could get from a busy production system with more tables and many indexes, I ran my test with the buffer cache limited to 6,000 blocks – a fair fraction of the total index size. Here’s a small section of the trace file from the test running 10.2.0.3 on an elderly machine:


WAIT #43: nam='db file sequential read' ela= 13238 file#=6 block#=12653 blocks=1 obj#=63623 tim=3271125590
WAIT #43: nam='db file sequential read' ela=  7360 file#=6 block#=12749 blocks=1 obj#=63623 tim=3271133150
WAIT #43: nam='db file sequential read' ela=  5793 file#=6 block#=12844 blocks=1 obj#=63623 tim=3271139110
WAIT #43: nam='db file sequential read' ela=  5672 file#=6 block#=12940 blocks=1 obj#=63623 tim=3271145028
WAIT #43: nam='db file sequential read' ela= 15748 file#=5 block#=13037 blocks=1 obj#=63623 tim=3271160998
WAIT #43: nam='db file sequential read' ela=  8080 file#=5 block#=13133 blocks=1 obj#=63623 tim=3271169314
WAIT #43: nam='db file sequential read' ela=  8706 file#=5 block#=13228 blocks=1 obj#=63623 tim=3271178240
WAIT #43: nam='db file sequential read' ela=  7919 file#=5 block#=13325 blocks=1 obj#=63623 tim=3271186372
WAIT #43: nam='db file sequential read' ela= 15553 file#=6 block#=13549 blocks=1 obj#=63623 tim=3271202115
WAIT #43: nam='db file sequential read' ela=  7044 file#=6 block#=13644 blocks=1 obj#=63623 tim=3271209420
WAIT #43: nam='db file sequential read' ela=  6062 file#=6 block#=13741 blocks=1 obj#=63623 tim=3271215648
WAIT #43: nam='db file sequential read' ela=  6067 file#=6 block#=13837 blocks=1 obj#=63623 tim=3271221887
WAIT #43: nam='db file sequential read' ela= 11516 file#=5 block#=13932 blocks=1 obj#=63623 tim=3271234852
WAIT #43: nam='db file sequential read' ela=  9295 file#=5 block#=14028 blocks=1 obj#=63623 tim=3271244368
WAIT #43: nam='db file sequential read' ela=  9466 file#=5 block#=14125 blocks=1 obj#=63623 tim=3271254002
WAIT #43: nam='db file sequential read' ela=  7704 file#=5 block#=14221 blocks=1 obj#=63623 tim=3271261991
WAIT #43: nam='db file sequential read' ela= 16319 file#=6 block#=14444 blocks=1 obj#=63623 tim=3271278492
WAIT #43: nam='db file sequential read' ela=  7416 file#=6 block#=14541 blocks=1 obj#=63623 tim=3271286129
WAIT #43: nam='db file sequential read' ela=  5748 file#=6 block#=14637 blocks=1 obj#=63623 tim=3271292163
WAIT #43: nam='db file sequential read' ela=  7131 file#=6 block#=14732 blocks=1 obj#=63623 tim=3271299489
WAIT #43: nam='db file sequential read' ela= 16126 file#=5 block#=14829 blocks=1 obj#=63623 tim=3271315883
WAIT #43: nam='db file sequential read' ela=  7746 file#=5 block#=14925 blocks=1 obj#=63623 tim=3271323845
WAIT #43: nam='db file sequential read' ela=  9208 file#=5 block#=15020 blocks=1 obj#=63623 tim=3271333239
WAIT #43: nam='db file sequential read' ela=  7708 file#=5 block#=15116 blocks=1 obj#=63623 tim=3271341141
WAIT #43: nam='db file sequential read' ela= 15484 file#=6 block#=15341 blocks=1 obj#=63623 tim=3271356807
WAIT #43: nam='db file sequential read' ela=  5488 file#=6 block#=15437 blocks=1 obj#=63623 tim=3271362623
WAIT #43: nam='db file sequential read' ela= 10447 file#=6 block#=15532 blocks=1 obj#=63623 tim=3271373342
WAIT #43: nam='db file sequential read' ela= 12565 file#=6 block#=15629 blocks=1 obj#=63623 tim=3271386741
WAIT #43: nam='db file sequential read' ela= 17168 file#=5 block#=15725 blocks=1 obj#=63623 tim=3271404135
WAIT #43: nam='db file sequential read' ela=  7542 file#=5 block#=15820 blocks=1 obj#=63623 tim=3271411882
WAIT #43: nam='db file sequential read' ela=  9400 file#=5 block#=15917 blocks=1 obj#=63623 tim=3271421514
WAIT #43: nam='db file sequential read' ela=  7804 file#=5 block#=16013 blocks=1 obj#=63623 tim=3271429519
WAIT #43: nam='db file sequential read' ela= 14470 file#=6 block#=16237 blocks=1 obj#=63623 tim=3271444168
WAIT #43: nam='db file sequential read' ela=  5788 file#=6 block#=16333 blocks=1 obj#=63623 tim=3271450154
WAIT #43: nam='db file sequential read' ela=  9630 file#=6 block#=16429 blocks=1 obj#=63623 tim=3271460008
WAIT #43: nam='db file sequential read' ela= 10910 file#=6 block#=16525 blocks=1 obj#=63623 tim=3271471174
WAIT #43: nam='db file sequential read' ela= 15683 file#=5 block#=16620 blocks=1 obj#=63623 tim=3271487065
WAIT #43: nam='db file sequential read' ela=  8094 file#=5 block#=16717 blocks=1 obj#=63623 tim=3271495454
WAIT #43: nam='db file sequential read' ela=  6670 file#=5 block#=16813 blocks=1 obj#=63623 tim=3271502293
WAIT #43: nam='db file sequential read' ela=  7852 file#=5 block#=16908 blocks=1 obj#=63623 tim=3271510360
WAIT #43: nam='db file sequential read' ela= 10500 file#=6 block#=17133 blocks=1 obj#=63623 tim=3271521039
WAIT #43: nam='db file sequential read' ela= 11038 file#=6 block#=17229 blocks=1 obj#=63623 tim=3271532275
WAIT #43: nam='db file sequential read' ela= 12432 file#=6 block#=17325 blocks=1 obj#=63623 tim=3271544974
WAIT #43: nam='db file sequential read' ela=  7784 file#=6 block#=17421 blocks=1 obj#=63623 tim=3271553331
WAIT #43: nam='db file sequential read' ela=  7774 file#=5 block#=17517 blocks=1 obj#=63623 tim=3271561346
WAIT #43: nam='db file sequential read' ela=  6583 file#=5 block#=17613 blocks=1 obj#=63623 tim=3271568146
WAIT #43: nam='db file sequential read' ela=  7901 file#=5 block#=17708 blocks=1 obj#=63623 tim=3271576231
WAIT #43: nam='db file sequential read' ela=  6667 file#=5 block#=17805 blocks=1 obj#=63623 tim=3271583259
WAIT #43: nam='db file sequential read' ela=  9427 file#=6 block#=18029 blocks=1 obj#=63623 tim=3271592988
WAIT #43: nam='db file sequential read' ela= 52334 file#=6 block#=18125 blocks=1 obj#=63623 tim=3271646055
WAIT #43: nam='db file sequential read' ela= 50512 file#=6 block#=18221 blocks=1 obj#=63623 tim=3271697284
WAIT #43: nam='db file sequential read' ela= 10095 file#=6 block#=18317 blocks=1 obj#=63623 tim=3271708095

Check the block numbers for this list of single block reads – we’re jumping through the index about 100 blocks at a time to read the next block where an index entry has to go. The jumps are the expected (and designed) effect of reverse key indexes: the fact that the jumps turn into physical disc reads is the (possibly unexpected) side effect. Reversing an index makes adjacent values look very different (by reversing the bytes) and go to different index leaf blocks: the purpose of the exercise is to scatter concurrent similar inserts across multiple blocks, but if you scatter the index entries you need to buffer a lot more of the index to keep the most recently used values in memory. Reversing the index may eliminate buffer busy waits, but it may dramatically increase the time lost on db file sequential reads.
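To see why adjacent values end up so far apart, you can look at the stored bytes directly. The following query is only an illustration (it is not part of the original test; utl_raw is simply used here to mimic the byte reversal the index performs):

select
	id,
	utl_raw.cast_from_number(id)                  stored_bytes,
	utl_raw.reverse(utl_raw.cast_from_number(id)) reversed_bytes
from	(
	select 1e7 + rownum id from dual connect by rownum <= 5
	)
;

Consecutive values share their leading bytes, so they sort next to each other in a normal index; once the bytes are reversed they differ immediately and end up in widely separated leaf blocks.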

Here’s a short list of interesting statistics from this test – this time running on 11.2.0.4 on a machine with SSDs – comparing the effects of reversing the index with those of not reversing the index, normal index first:


Normal index
------------
CPU used by this session               83
DB time                                97
db block gets                      40,732
physical reads                         51
db block changes                   40,657
redo entries                       20,174
redo size                       5,091,436
undo change vector size         1,649,648

Repeat with reverse key index
-----------------------------
CPU used by this session              115
DB time                               121
db block gets                      40,504
physical reads                     10,006
db block changes                   40,295
redo entries                       19,973
redo size                       4,974,820
undo change vector size         1,639,232

Because of the SSDs there’s little difference in timing between the two sets of data and, in fact, all the other measures of work done are very similar except for the physical reads; the increase in reads is probably the cause of the extra CPU time, thanks to both the LRU manipulation and the interaction with the operating system.

If you want to check the effect of index reversal you can take advantage of the sys_op_lbid() function to sample a little of your data – in my case I’ve queried the last 10,000 rows (values) in the table:


select 
	/*+ 
		cursor_sharing_exact 
		dynamic_sampling(0) 
		no_monitoring 
		no_expand 
		index_ffs(t1,t1_i1) 
		noparallel_index(t1,t1_i1) 
	*/ 
	count (distinct sys_op_lbid( &m_ind_id ,'L',t1.rowid)) as leaf_blocks
from 
	t1
where 
	id between 2e7 + 1 and 2e7 + 1e4
;

The &m_ind_id substitution variable is the object_id of the index t1_i1.

In my case, with an index of 22,300 leaf blocks, my 10,000 consecutive values were scattered over 9,923 leaf blocks. If I want access to “recent data” to be as efficient as possible I need to keep that many blocks of the index cached, compared to an (absolute) worst case for my data of about 100 leaf blocks. When you reverse key an index you have to think about how much bigger you have to make your buffer cache to keep the performance constant.


HOWTO: Check if an XMLType View or Table is Hierarchy Enabled

Marco Gralike - Wed, 2015-06-17 03:53
The following simple, code snippet, demonstrates how you can check if an XMLType view or…

Introducing Formspider 1.9, the Web Application Development Framework for PL/SQL Developers.

Gerger Consulting - Wed, 2015-06-17 02:50
The new version of Formspider is coming out this summer. Join our webinar and find out its new features and how your organization can benefit from them. 

The following topics will be covered during the webinar: 
- New features in Formspider version 1.9 
- Formspider architecture & benefits 
- Introduction to development with Formspider 

You can sign up for the webinar at this link.
Categories: Development

ODA workshop at Arrow ECS

Yann Neuhaus - Wed, 2015-06-17 02:30
On the 16th and 17th of June David Hueber, Georges Grey and myself had the chance to attend the ODA hands on workshop at Arrow ECS. Lead Trainer Ruggero Citton (Oracle ODA Product Development) did the first day with plenty of theory and examples. On the second day we had the opportunity to play on a brand new ODA X5-2:

SQL Server 2014: Analysis, Migrate and Report (AMR) - a powerful In-Memory migration tool

Yann Neuhaus - Wed, 2015-06-17 02:28

An important new functionality of Microsoft SQL Server 2014 is the In-Memory OLTP engine, which enables you to load Tables and also Stored Procedures In-Memory for a very fast response time.
The goal is not to load the whole database In-Memory, but just Tables with critical performance and Stored Procedures with complex logical calculations.

To identify which Tables or Stored Procedures will give you the best performance gain after migration, Microsoft SQL Server 2014 has introduced a new tool: Analysis, Migrate and Report (AMR).

This tool will collect statistics about Tables and Stored Procedures in order to analyze the current workload. It will give you advice on the migration benefits of the different Tables or Stored Procedures. It will also give you an overview of the time/work needed to push Tables or Stored Procedures In-Memory.

In the following article I will show you how to setup and use this Tool.

Configuration of the Management Data Warehouse

The AMR Tool is built into SQL Server Management Studio.
It consists of:

  • Reports which come from a Management Data Warehouse and give recommendations about tables and Stored procedures which could be migrated to In-Memory OLTP
  • Memory Optimization Advisor which will help you during the migration process of a disk table to a Memory Optimized table
  • Native Compilation Advisor which will help you migrate a Stored Procedure to a Natively Compiled Stored Procedure

The AMR Tool leverages the Management Data Warehouse and the Data Collector, with its new Transaction Performance Collection Sets, for gathering information about workloads.

AMR will analyze the collected data and provide recommendations via reports.
First, we have to configure the Management Data Warehouse.

To start the configuration, open Management Studio, go to Object Explorer, then Management folder, and right-click on Data Collection. Then select Tasks and click on Configure Management Data Warehouse as shown below:

AMR_picture1.png

On the Configure Management Data Warehouse Storage screen, enter the server name and the database name where your Management Data Warehouse will be hosted. The AMR tool will collect, via its collection sets, data from three Dynamic Management Views every fifteen minutes and will save that data in the MDW database. Uploading data will have minimal performance impact.

If you already have a database, enter its name. If not, click the New button to create a new one.
On the Map logins and Users page, if needed, you can map a user to administer, read, or write the Data Management Warehouse database.
Verify the Management Data Warehouse configuration and proceed to the configuration.
When the configuration of the Management Data Warehouse has been successfully finalized, you should see the following screen:

 AMR_picture2.png

The Management Data Warehouse setup is finished.

Configuration of the Data collection

Take care: the SQL Server Agent has to be started on the instance that will collect the data.
To collect data, we will enable the new Transaction Performance Collection set which is composed of two new collection sets:

  • Stored Procedure Usage Analysis: used to capture statistics about Stored Procedures which could be migrated to Natively Compiled Stored Procedures
  • Table Usage Analysis: captures information about disk-based tables for a future migration to Memory Optimized tables.

 In order to configure the Data Collection, go to Object Explorer, then Management folder, right-click on Data Collection, select Tasks, and click on Configure Data Collection, as shown below:

AMR_picture3.png

After having skipped the Welcome page, you have to select a server and a database name that will host the Management Data Warehouse.

Now, you need to select the data collector sets. In the wizard, check “Transaction Performance Collection Set” in the list of collection sets. This will collect statistics for transaction performance issues.

If the Management Data Warehouse is located on a different SQL Server instance from the data collector, and the SQL Server Agent is not running under a domain account which has dc_admin permissions on the remote instance, you have to use a SQL Server Agent proxy.

AMR_picture4.png

After the Data Collection configuration has completed successfully, you will have an enabled Data Collection which will collect information about all user databases.
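If you prefer to check this from T-SQL rather than the GUI, the data collector catalog views in msdb give a quick overview (a hedged example – these are the standard msdb views, nothing specific to AMR):

-- List the collection sets and show whether they are currently running
SELECT name, is_running
FROM msdb.dbo.syscollector_collection_sets;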

Via the SQL Server Agent Jobs folder, you are now able to see new collection jobs used to collect data from your workloads, with names like collection_set_N_collection, and jobs used to populate our new Management Data Warehouse database, with names like collection_set_N_upload.

It is also good to know that the upload jobs run every thirty minutes for Stored Procedure Usage Analysis (job: collection_set_5_upload) and every fifteen minutes for Table Usage Analysis (job: collection_set_6_upload). So if you want to speed up your upload, you can execute these jobs manually.
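As a hedged example, the two upload jobs can be started by hand like this (the job names are the defaults quoted above and may differ on your instance):

-- Run the upload jobs immediately instead of waiting for their schedules
EXEC msdb.dbo.sp_start_job @job_name = N'collection_set_5_upload'; -- Stored Procedure Usage Analysis
EXEC msdb.dbo.sp_start_job @job_name = N'collection_set_6_upload'; -- Table Usage Analysis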
 

Reports

To access recommendations based on collected information about all user databases on the workload server, you have to right-click on your Management Data Warehouse database, select Reports, then Management Data Warehouse, and finally Transaction Performance Analysis.

In the Transaction Performance Analysis Overview report, you can choose among three reports, depending on what you want or need:

  • Usage analysis for tables
  • Contention analysis for tables
  • Usage analysis for Stored procedures

AMR_picture5.png

Usage analysis for tables

This report, based on table usage, shows you the best candidate tables to be pushed In-Memory.
On the left side of the report you have the possibility to select the database and the number of tables you would like to see.
The central part is a chart with two axes:

  • The horizontal axis represents the work needed to migrate a table to In-Memory OLTP, from significant to minimal
  • The vertical axis represents the increasing gain you will get after having moved the table to In-Memory OLTP

The best part of this graph is the top right corner, which shows tables that could be migrated In-Memory easily and will give you the best performance gain.

AMR_picture6.png

When you click a table point on the graph, you will access a more detailed statistics report.

This report shows access characteristics (lookup statistics, range scan statistics, etc.) and also contention statistics (latch statistics, lock statistics, etc.) of the table concerned, over the period during which your instance’s workload was monitored with the Transaction Performance Collection Set.

AMR_picture7.png

Contention analysis for tables

This report is based on table contention instead of usage. It shows you the best candidate tables to be migrated In-Memory.
As before, on the left side of the report you have the possibility to select the database and the number of tables you would like to see.
The central part is a chart with two axes:

  • The horizontal axis represents the work needed to migrate a table to In-Memory OLTP, from significant to minimal
  • The vertical axis represents the increasing gain you will get after having moved the table to In-Memory OLTP

The best part of this graph is the top right corner, showing tables that can be migrated In-Memory easily and will give you the best performance gain.

AMR_picture8.png

As for the usage analysis report, you can also click a table name on the graph to see the statistics details of the table.

Usage analysis for Stored Procedures

This report contains the top candidate stored procedures for an In-Memory OLTP migration with regard to their usage. This report is based on the Total CPU Time.

You also have the possibility to select the database and the number of stored procedures you would like to see.

AMR_picture9.png

If you want to see the usage statistics for a specific stored procedure, you can click on the blue bar. You will then have a more detailed report.

AMR_picture10.png

Now, you know which Tables and Stored Procedures will give you the best performance gain after migration to In-Memory OLTP.
AMR provides two Advisors which will help you manage the transformation of your disk tables to Memory Optimized Tables as well as your Stored Procedures to Natively Compiled Stored Procedures. To know more about those advisors, please have a look at my blog.

WWDC 2015: Apple Push goes HTTP2 for APNs

Matthias Wessendorf - Wed, 2015-06-17 02:19

Last week was WWDC 2015 and one session got my attention: What’s New in Notifications!

The session is a two-part session, focusing on iOS notifications (local/remote) and new features, like text-apply, but the most interesting part for me was the second half, which announced some coming APNs changes!

The big news is that Apple will have an HTTP/2 API to send notification requests to APNs.

YAY!

Here is a little summary of more details:

  • request/response (aka stream) for every notification sent, which is more reliable (e.g. a JSON reason for a ‘bad request’, or 410 if the token is invalid)
    • allows ‘instant’ feedback (no separate feedback service!), since details are on the HTTP/2 response
  • multiplexing: multiple requests (to APNs) on a single connection
  • binary
  • simpler certificate handling: Just a single cert! (no separate for dev/prod, VOIP etc)
  • 4KB size of payload (for all versions of iOS/OSX), but just on the new HTTP/2 API

The new HTTP/2 API for APNs will be available in “Summer 2015” for the development environment and will be made available for production “later this year”. No exact dates were given.

 

I really like this move, and it means that for our AeroGear UnifiedPush Server we will be busy implementing this new Apple API!


DOAG Database 2015

Yann Neuhaus - Wed, 2015-06-17 00:17

It was my first time at DOAG Datenbank in Dusseldorf. 

Integration Is Hard

Floyd Teter - Tue, 2015-06-16 20:58
If you know me at all, you know I love services-based integration.  The whole idea of interfacing, moving and exchanging data, guided by industry standards...I'm an enthusiastic supporter.  The appeal of this idea made me an ardent supporter of Oracle's Fusion Applications.  And I still believe it's an important part of the potential for today's SaaS offerings.

So I'll share a secret with you...I really hate services-based integration.  It's hard.  Packaged integrations rarely work out of the box.  SaaS integrations are tough to implement.  Integration platforms are still in their infancy.  Data errors are frequent problems.  Documentation is either inaccurate or non-existent.  Building your own - oy!  Even simple integrations require large investments of blood, sweat, and tears.  And orchestrating service integrations into a business process...agony on a stick.  I personally believe that the toughest aspect of enterprise software is services integration.  SaaS, hybrid, on-premise, packaged applications, middleware...it does not matter, services integration is hard regardless of context.

I see SaaS integration as "hero ground":  there is nowhere to go but up, and even simple wins will create heroes.  Service integrations that really work, simple and easily understood documentation, design patterns, data templates and useable tools... I think we have a ton of work to do.  Because, even though it shouldn't be, integration is hard.

Mid-June Roundup

Oracle AppsLab - Tue, 2015-06-16 16:07

A busy June is half over now, but we still have miles to go before July.

We’ve been busy, which you know if you read here. Raymond went to Boston. Tony, Thao (@thaobnguyen), Ben and I were in Las Vegas at OHUG 15. John and Thao were in Minneapolis the week before that. Oh, and Anthony was at Google I/O.

The globetrotting continues this week, as John and Anthony (@anthonyslai) are in the UK giving a workshop on Visualizations at the OUAB meeting. Plus, Thao and Ben are attending the QS15 conference in San Francisco.

And next week, Noel (@noelportugal), Raymond, Mark (@mvilrokx) and I head to Hollywood, FL for Kscope15 (#kscope15).

Did you hear we’re collaborating with the awesome organizers (@odtug) to put on a super fun and cool Scavenger Hunt? If you’re going to Kscope15, you should register.

You can do it now, I’ll wait.

Back? Good. Check out the sweet infographic Tony C. on our team created for the big Hunt:

posterLayout

Coincidentally, one of the tasks is to attend our OAUX session on Tuesday at 2pm, “Smart Things All Around.” Jeremy Ashley (@jrwashley), our GVP, and Noel will talk about the Scavenger Hunt, IoT, new experiences, design philosophies, all that good stuff.

Speaking of philosophies, VoX has a post on glance-scan-commit, the design philosophy that informs our research and development, and more importantly, how glance-scan-commit trickles into product. You should read it.

And finally, Ultan (@ultan) and Mark collaborated on a post about partners, APIs, PaaS and IoT that you should also read, if only so you can drop a PaaS4SaaS into your next conversation.

If you’re attending any of these upcoming events, say hi to us, and look for updates here.

Preview Release 10 Oracle Applications Cloud Readiness Content!

Linda Fishman Hoyle - Tue, 2015-06-16 15:57

A Guest Post by Katrine Haugerud (pictured left), Senior Director, Oracle Product Management

To help you prepare for upcoming Release 10, we are pleased to offer a preview of its new, modern business-empowering features.

On the Release Readiness page, we have added content for HCM, Sales, ERP, and SCM, as well as Common Technologies for each.

Specifically, we have just introduced:

Spotlights: Delivered by senior development staff, these webcast-delivered presentations highlight top level messages and product themes, and are reinforced with a product demo.

Release Content Documents (RCDs): The content includes a summary level description of each new feature and product.

Next month we will add (and announce) more Release 10 readiness content including:

  • What's New: Learn about what's new in the upcoming release by reviewing expanded discussions of each new feature and product, including capability overviews, business benefits, setup considerations, usage tips, and more.
  • Release Training: Created by product management, these self-paced, interactive training sessions are deep dives into key new enhancements and products. Also referred to as Transfers of Information (TOIs).
  • Product Documentation: Oracle’s online documentation includes detailed product guides and training tutorials to ensure your successful implementation and use of the Oracle Applications Cloud.

Access is Simple

From the cloud.oracle.com: Click on Menu > Discover > What's New




The Pendulum Swings Back

Linda Fishman Hoyle - Tue, 2015-06-16 15:56

A Guest Post by Andy Campbell (pictured left), Oracle HCM Cloud Sales Evangelist

I am currently working on a white paper specifically on the topic of ‘Living with the HR Cloud’ with a number of fascinating case studies. Therefore, I was delighted to come across the latest piece of research from Harvard Business Review entitled Cloud Computing Comes of Age.

This report assesses the maturity (and thereby the experience) of customers who have deployed cloud applications. The results are quite significant. Those organizations classified as ‘cloud leaders’ also achieved higher levels of business success. It reports a correlation between an organization’s cloud maturity and the health of its growth initiatives such as business expansion.

The benefits they realized included improved business agility, enhanced organisational flexibility, and faster speed of deployment. They also reported improved decision making through an increased ability to analyze and act upon data and information. For HR leaders, the natural consequence of this is the ability to offer a more proactive value-added service to the business, something that I think we all aspire to.

Anyway, perhaps of most interest to me was the fact that the cloud leaders took a more managed and enterprise-wide approach to their cloud applications, something that embodied a range of good practices. For example, cloud leaders are more likely to define the business value that they expect to get from their cloud initiatives, 69 percent in fact, compared to only 40 percent of novices. Similarly, only 53 percent of the survey had established policies for cloud security, a figure that rises to 79 percent amongst cloud leaders. Also, cloud leaders are more likely to have a strong partnership between IT and other parts of the business. Cloud technologies including social, mobile, etc. have had a democratizing impact on IT, and enhanced collaboration with business users is, quite rightly, becoming the norm.

However, to me, one thing stands out. Evidently cloud leaders are more than twice as likely to have a CIO who leads the transformation agenda!! Sure IT and business must work together, but somebody needs to be in charge, and that is the CIO.

Now if you had said such a thing a few years ago, you would probably have been strung up by a lynch mob to chants of ‘the business user is king’! The perceived wisdom at the time was that ultimate power was vested with the business and the user community.

However, things have changed and the pendulum has swung back again. As the adoption of cloud technology has become more mainstream, the experience of users is that to be truly successful both parties, IT and the business, need to truly work well with each other.

Overall I/O Query

Bobby Durrett's DBA Blog - Tue, 2015-06-16 14:57

I hacked together a query today that shows the overall I/O performance that a database is experiencing.

The output looks like this:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:59        359254               20              711636
2015-06-15 16:00:59        805884               16              793033
2015-06-15 17:00:13        516576               13              472478
2015-06-15 18:00:27        471098                6              123565
2015-06-15 19:00:41        201820                9              294858
2015-06-15 20:00:55        117887                5              158778
2015-06-15 21:00:09         85629                1               79129
2015-06-15 22:00:23        226617                2               10744
2015-06-15 23:00:40        399745               10              185236
2015-06-16 00:00:54       1522650                0               43099
2015-06-16 01:00:08       2142484                0               19729
2015-06-16 02:00:21        931349                0                9270

I’ve combined reads and writes and focused on three metrics – number of IOs, average IO time in milliseconds, and average IO size in bytes.  I think it is a helpful way to compare the way two systems perform.  Here is another, better, system’s output:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:25        331931                1              223025
2015-06-15 16:00:40        657571                2               36152
2015-06-15 17:00:56       1066818                1               24599
2015-06-15 18:00:11        107364                1              125390
2015-06-15 19:00:26         38565                1               11023
2015-06-15 20:00:41         42204                2              100026
2015-06-15 21:00:56         42084                1               64439
2015-06-15 22:00:15       3247633                3              334956
2015-06-15 23:00:32       3267219                0               49896
2015-06-16 00:00:50       4723396                0               32004
2015-06-16 01:00:06       2367526                1               18472
2015-06-16 02:00:21       1988211                0                8818

Here is the query:

-- Overall I/O per AWR snapshot interval, built from deltas between consecutive
-- rows in DBA_HIST_FILESTATXS. READTIM/WRITETIM are recorded in centiseconds, so
-- the factor of 10 converts the average to milliseconds; the "1+" in each
-- denominator avoids divide-by-zero on intervals with no I/O.
select 
to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS') "End snapshot time",
sum(after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS) "number of IOs",
trunc(10*sum(after.READTIM+after.WRITETIM-before.WRITETIM-before.READTIM)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO time (ms)",
trunc((select value from v$parameter where name='db_block_size')*
sum(after.PHYBLKRD+after.PHYBLKWRT-before.PHYBLKRD-before.PHYBLKWRT)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO size (bytes)"
from DBA_HIST_FILESTATXS before, DBA_HIST_FILESTATXS after,DBA_HIST_SNAPSHOT sn
where 
after.file#=before.file# and
after.snap_id=before.snap_id+1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number
group by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS')
order by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS');
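If you only want a recent window rather than the whole AWR history, one simple tweak – my assumption, not part of the original query – is to add a date predicate on the snapshot end time to the WHERE clause, for example:

and sn.END_INTERVAL_TIME > sysdate - 7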

I hope this is helpful.

– Bobby

Categories: DBA Blogs

Oracle and Adaptive Case Management: Part 2

Jan Kettenis - Tue, 2015-06-16 14:24

This posting is the second of a series about Oracle Adaptive Case Management. The first one can be found here. I discuss the different options to define an activity, and the settings you can use to configure when and how activities are started.

There are two ways to implement an activity in ACM. The first one is by creating a Human Task and then "promoting" it (as it is called) to an activity. The other way is to create a business process and promote that as an activity. As far as I know there are also plans to allow a BPEL process to implement an activity, but that option is not there yet.

When using a Human Task the limitations (obviously) are those of a human task, meaning that the means to do some pre- or post-processing for the activity are very limited. There are only a few hooks for Java call outs and XPath expressions, but as processing of that happens on the Human Workflow Engine this won't show up in Enterprise Manager, and error handling will be hard if not impossible. So, when you for example need to call a service before or after a human task (like sending a notification email) you'd better use a process.


So unless you are sure that such pre- or post-processing will not be necessary, the safest option is to use a process with a human task instead. That will give you all the freedom you have with a BPMN process. The disadvantage is that you will not be able to expose the UI of the task on the Case tab in workspace. However, as for any case management application of a reasonable size you probably will have one or more human activities in a process anyway, and as from a user experience perspective it probably is confusing to have tasks on the Task tab, and some of them also on the Case tab, I don't expect this to be a practical issue in most cases. Meaning that in practice you probably handle all tasks from the Task tab only, and on the Case tab show only some overview screen.

In ACM activities can be Manually Activated or Automatically Activated. Furthermore you can specify if an activity is Required, Repeated, and/or Conditionally available.


The difference between manually and automatically activated is that in the first case the user explicitly starts an activity by choosing it from a list of available activities. Automatically activated activities are for example used for some case pre- and post-processing, and for activities that always have to start at some point, (optionally) given some specific conditions (like some milestone being reached or some other activity being completed). An example is that once a claim has been entered, it has to be reviewed before anything else can happen.

Required activities should be completed before a stage is completed. Be careful though, as nothing is preventing you from closing the stage even though a required activity has not yet finished. If the user has the proper rights, he/she can complete an activity even when no actual work has been done. There is no option to prevent that. However, in case of an automatically activated activity you can use business rules to reschedule it. For example, if the Review Complaint activity is required, and by that the complaint must have been given a specific status by the Complaints Manager, you can use a rule to reactivate the activity if the user tries to close it without having set the status.

Repeatable activities can be started by the user more than once. There is no point in marking automatically activated activities as repeatable. An example of a repeatable activity can be one where the Complaints Manager invites some Expert to provide input for a complaint, and he/she may need to be able to involve any number of experts.

Conditionally available activities are triggered by some rule. Both manually as well as automatically activated activities can be conditional. If automatically activated, the activity will start as soon as the rule conditions are satisfied. In case of manually activated activities the rule conditions will determine whether or not the user can choose it from the list of available activities.

SQLcl, yet again

Kris Rice - Tue, 2015-06-16 13:54
By the Numbers

There's a new SQLcl out for download. In case there are too many to keep track of, the build numbers make it quite easy to tell if you have the latest. The build posted today is sqlcl-4.2.0.15.167.0827-no-jre.zip. Here's what we are doing:

4.2.0 <-- doesn't matter at all
15    <-- year
167   <-- day in julian
0827  <-- time the build was done

So yes, this build was done today at 8am.

A Database Wordfile…

Marco Gralike - Tue, 2015-06-16 09:08
It is not often that something like the following happens on Google while searching for…

The Byte-Anniversary

Darwin IT - Tue, 2015-06-16 05:15
I was looking into my blog entries, and found that my previous post was number 254. So by entering this nonsense blog entry, I reach 255, which makes it my 8-bit, or byte, anniversary:


And apparently my articles have been read as well... Nice thing is that I've reached this amount in nearly 8 years. So up to the next byte. Unfortunately in this case 2 bytes do not make up a Word…

Can Better Visual Design Impact User Engagement?

Rittman Mead Consulting - Tue, 2015-06-16 04:44
Background

For every dashboard succinctly displaying key business metrics, there’s another that is a set of unconnected graphs which don’t provide any insight to its viewers.

In order for your users to get value from your business intelligence and analytics systems they need to be engaging, they need to tell a story.

As part of its User Engagement initiative Rittman Mead has created a User Engagement Service. A key part of this is a Visual Redesign process. Through this process, we review an organisation’s existing dashboards and reports and transform them into something meaningful and engaging.

This service focuses on the user interface and user experience; here we will use our expertise in data visualisation to deliver high value OBIEE dashboards.

The process starts by prioritising your dashboards and then, taking one at a time, rebuilds them. There are 3 key concepts that lie behind this process.

Create a guided structure of information

The layout of information on a dashboard should tell the user a story. This makes it much easier for data to be consumed because users can identify related data and instantly see what is relevant to them. If a user can consume the data they need easily, they’re more likely to come back for more. ‘The founder of modern management’, Peter Drucker said “If you can’t measure it, you can’t manage it.” When users are in touch with their data, they will be more engaged with the business.

visual-redesign-sm

To achieve this, first we must consider the audience. We need to know who will be consuming the dashboards and how they will be used. Then we can begin to create a design that will satisfy the users’ needs. Secondly, we need to think about what we exclude as much as what we include on the dashboard. There is limited space, so it needs to be used effectively. Only information that adds value should be displayed on a dashboard and everything should be there for a reason. Finally, we need to consider what questions the users want to answer and what decisions will be based on this. This will enable us to guide the user to the information that will help them take action.

Choose the right visuals

One common design mistake is overcrowding of dashboards. Dashboards often develop over time to become a jumbled array of graphs and tables with no consideration of the visual design.

The choice of graphs will determine how readable the information being displayed on a dashboard is. We constantly ask ourselves “What is the best graph for the data?” Understanding how different types of graphs answer different questions, allows us to make the best visual choices. This is a vital tool for communicating messages to the users and providing them with the ability to identify patterns and relationships in the data more efficiently.

Thoughtful use of colour

The use of colour is an effective way to draw attention to something, connect related objects and evoke users’ emotions. Thoughtful use of colour can have a big impact on user engagement. To be sure we choose the best colours throughout dashboard design, the key question we need to ask ourselves is, “how will these colours make the user feel?”

Like the charts themselves, every different colour used on a dashboard should be there for a reason. Intentional use of colour could determine how a user will feel whilst consuming the information being displayed to them. Bright, unnatural colours will alarm users and attract their attention. Cool colours will give a restful, calming feel to the user and are most effective for displaying sustained trends. Through taking into consideration the most effective way to use colour, we can work towards creating an attractive visual design, which is engaging and enjoyable to use.

Applying these 3 concepts through Rittman Mead’s visual redesign process, has proven to result in engaging OBIEE dashboards. Users are equipped to make the most out of their data, allowing them to make informative business decisions.

Rittman Mead’s Visual Redesign process is a key part of Rittman Mead’s User Engagement Service; for more info see http://www.rittmanmead.com/user-engagement-service/.

If you are interested in hearing more about User Engagement please sign up to our mailing list below.



Categories: BI & Warehousing

Index variables in Replace/Insert/Delete: Bug or not a bug?

Darwin IT - Tue, 2015-06-16 04:01
At my current customer I'm to process Attachments in Oracle Service Bus (11g).
I get a soap message, in which several documents are registered to be processed in a Content Server.
For each document the content is delivered as an soap/mime-attachment.

Because of some requirements I need to store the message, complete with the attachments Base64 encoded, in the database. So I have to pick each attachment, base64 encode it and then insert the content into the corresponding document in the soap message. So I need to do an insert or replace of a specific element of the body variable, based on an index variable.

It turns out that you can perfectly do an assign with an expression like:
$body/stag:StageDocumentsRequestMessage/Payload/stag:documents/stag:document[contentId/text()=$contentId]
to a variable, for instance called document.

I can do an insert of the base64-encoded content into that document variable. But that does not get into the body variable, since, apparently, document is a copy of, and not a reference to, the particular node.

So let's do a replace with the XPath expression:
$body/stag:StageDocumentsRequestMessage/Payload/stag:documents/stag:document[contentId/text()=$contentId]
in the variable body. But this gives the error:
[PL_MyPipeLine, Request Pipeline, HandleAttachments, Delete action] XPath expression validation failed: An error was reported compiling the XPath expression: XQuery exception: line 34, column 91: {err}XP0008 [{bea-err}XP0008a]: Variable "$contentId" used but not declared for expression: declare namespace stag = 'http://www.darwin-it.nl/CS/StageDoc';
declare namespace jca = 'http://www.bea.com/wli/sb/transports/jca';
declare namespace wsp = 'http://schemas.xmlsoap.org/ws/2004/09/policy';
...


Same counts for Insert and delete: I thought of inserting a new version of the node in the list and delete the old one, but that would not work either.

I've googled around, and found several occurrences of basically the same problem, but no satisfying solution.

At support.oracle.com I found the following bug:
"Bug 17940786 : CANNOT USE INDEX VARIABLE IN THE REPLACE ACTION WITHIN FOR-EACH LOOP" with the following description:

The customer uses 2 for-each loops with index variables ($i, $j).
In the Replace action, in the Xpath expression buider, they want to use
"./entity1[$i]/entity2[$j]". This is not permitted by the editor. The problem
also occurs with only 1 variable like "./entity1[$i]/entity2[1]".

However, for no apparent reason, this "bug" has the status "92 - Closed, Not a Bug". So apparently, Oracle regards it as "functioning as designed". But why can't I modify or delete a particular node indexed by a variable?
Apparently I'm now stuck with building the document list document-by-document and doing a replace of the complete document-list...

Feedback from the Oracle documentation team

Tim Hall - Tue, 2015-06-16 03:36

I got some feedback from the Oracle documentation team, based on my recent post.

GUIDs

One of the concerns I raised was about how the GUIDs would be used in different releases of the documentation. Although I don’t like the look of the GUIDs, I can understand why they might be more convenient than trying to think of a neat, descriptive, human readable slug. My concern was that the GUID might be unique for every incarnation of the same page. That is, a new GUID for the same page for each patchset, DB version and/or minor text correction. That would make it really hard to flick between versions, as you couldn’t predict what the page was called in each variant.

It seems my worries were unfounded. The intention is the GUID of a specific page will stay the same, regardless of patchset, DB version or document correction. That’s great news!

Broken Links

The team are trying to put some stuff in place to correct the broken links. I think I might know who is developing this solution. :)

The quick fix will be to direct previously broken links to the table of contents page of the appropriate manual. Later, they will attempt to provide topic-to-topic links. No promises here, but it sounds promising.

Conclusion

I’m going to continue to fix the broken links on my site as I want to maintain the direct topic links in the short term, but this sounds like really good news going forward.

It also sounds like the documentation team are feeling our pain and putting stuff in place to prevent this happening in future, which is fantastic news! :)

Note to self: It’s much better to engage with the right people and discuss the issue, rather than just bitch about stuff.

Cheers

Tim…


Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) – Just Born

Tim Hall - Tue, 2015-06-16 00:49

Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) was announced a few days ago. I woke up today and checked the interwebs and it’s actually available for download.

I must admit I’m a little nervous about the upgrade. I had a few bad times with upgrades in the early days of Grid Control and Cloud Control and that has left me with a little bit of voodoo lurking in the back of my mind. The last couple of upgrades have been really easy, so I’m sure it will be fine, but that voodoo…

I’ll download it now and do a clean install. Then do a couple of practice upgrades. If all that goes well, I’ll schedule a date to sacrifice a chicken, raise a zombie from the dead to do my bidding, then do the real upgrade.

Cheers

Tim…

Update. Looking at the certification matrix, the repository is now certified on 12.1.0.2, as well as 11.2.0.4 and 11.2.0.3.

Update 2. Pete mentioned in the comments that 12.1.0.2 has been certified for the Cloud Control repository since March, with some restrictions. So it’s not new to this release. See the comments for details.

Update 3. Remember to download from edelivery.oracle.com (in a couple of days) for your production installations. Apparently there is a difference to the license agreement.
