
Feed aggregator

How to restrict data coming back from a SOAP Call

Angelo Santagata - Thu, 2014-04-10 09:51

In Sales Cloud, a big positive of the SOAP interface is that a lot of related data is returned by a single query, including master-detail data (e.g. multiple email addresses in contacts). However, these payloads can be very large: in my system, querying a single person returns 305 lines, whereas I only want firstName, LastName and partyId, which is 3 lines per record.


Within each findCriteria element you can add multiple <findAttribute> elements indicating which attributes you want returned. By default, if you provide <findAttribute> entries then only those attributes are returned; this behavior can be reversed by setting <excludeAttributes> to true, so that everything except the listed attributes is returned.
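As a rough illustration of how such a findCriteria fragment is assembled (a sketch only: element names are taken from the examples in this post, the namespace prefixes are omitted, and this is not an official Sales Cloud client API):

```python
# Sketch: build a findCriteria fragment with findAttribute entries.
# Element names follow the examples in this post; real payloads need the
# typ/typ1 namespace URIs from the service WSDL.
import xml.etree.ElementTree as ET

def build_find_criteria(attributes, exclude=False):
    """Return a findCriteria element restricting (or excluding) attributes."""
    criteria = ET.Element("findCriteria")
    for name in attributes:
        attr = ET.SubElement(criteria, "findAttribute")
        attr.text = name
    # When excludeAttributes is true, the listed attributes are the ones
    # removed from the response instead of the ones kept.
    ET.SubElement(criteria, "excludeAttributes").text = (
        "true" if exclude else "false")
    return criteria

xml_text = ET.tostring(
    build_find_criteria(["PersonLastName", "PersonFirstName", "PartyId"]),
    encoding="unicode")
print(xml_text)
```

With only three findAttribute entries, the 305-line response shrinks to the three fields the caller actually needs.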

Example 1: only retrieving PersonLastName, PersonFirstName and PartyId

<soapenv:Envelope xmlns:soapenv="" xmlns:typ="" xmlns:typ1="">
   <soapenv:Body>
      <typ:findPerson>
         <typ:findCriteria xsi:type="typ1:FindCriteria" xmlns:xsi="">
            <typ1:findAttribute>PersonLastName</typ1:findAttribute>
            <typ1:findAttribute>PersonFirstName</typ1:findAttribute>
            <typ1:findAttribute>PartyId</typ1:findAttribute>
         </typ:findCriteria>
      </typ:findPerson>
   </soapenv:Body>
</soapenv:Envelope>

(The namespace URIs are blank in the original post; substitute the values from your service WSDL. The operation element, shown here as findPerson, depends on the service being invoked.)
findAttributes work on the level-1 attributes of that findCriteria; the value can be an attribute or an element.

If you want to restrict sub-elements, you can use a childFindCriteria for that sub-element and then add findAttributes within it.

Example 2: only retrieving PartyId, and within the Email element only EmailAddress

<soapenv:Envelope xmlns:soapenv="" xmlns:typ="" xmlns:typ1="">
   <soapenv:Body>
      <typ:findPerson>
         <typ:findCriteria xsi:type="typ1:FindCriteria" xmlns:xsi="">
            <typ1:findAttribute>PartyId</typ1:findAttribute>
            <typ1:findAttribute>Email</typ1:findAttribute>
            <typ1:childFindCriteria>
               <typ1:findAttribute>EmailAddress</typ1:findAttribute>
               <typ1:childAttrName>Email</typ1:childAttrName>
            </typ1:childFindCriteria>
         </typ:findCriteria>
      </typ:findPerson>
   </soapenv:Body>
</soapenv:Envelope>

(As in Example 1, the namespace URIs are blank in the original post and the operation element depends on the service; the childAttrName element name is indicative.)
For a childFindCriteria to work you must also query the child in the parent, which is why "Email" is referenced in a findAttribute at the parent level.

What Happens in Vegas, Doesn’t Stay in Vegas – Collaborate 14

Pythian Group - Thu, 2014-04-10 08:04

IOUG’s Collaborate 14 is star-studded this year, with the Pythian team illuminating various tracks in the presentation rooms and acting like a magnet for data lovers in the expo halls of The Venetian. It’s a kind of rendezvous for those who love their data. So if you want your data to be loved, feel free to drop by Pythian booth 1535.

Leading from the front is Paul Vallée with an eye-catching title, with real world gems. Then there is Michael Abbey’s rich experience, Marc Fielding’s in-depth technology coverage and Vasu’s forays into Apps Database Administration. There is my humble attempt at Exadata IORM, and Rene’s great helpful tips, and Alex Gorbachev’s mammoth coverage of mammoth data – it’s all there with much more to learn, share and know.

Vegas Strip is buzzing with the commotion of Oracle. Even the big rollers are turning their necks to see what the fuss is about. Poker faces have broken into amazed grins, and even the weird, kerbside card distribution has stopped. Everybody is focused on the pleasures of Oracle technologies.

Courtesy of social media, all of this fun isn’t confined to Vegas. You can follow @Pythian on Twitter to know it all, live, and in real time.

Come Enjoy!

Categories: DBA Blogs

Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5

Senthil Rajendran - Thu, 2014-04-10 04:23
Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
Big Data Oracle NoSQL in No Time - Getting Started Part 1
Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4

With the current 3x1 setup the NoSQL store is write efficient. To make it read efficient, the replication factor has to be increased, which internally creates more copies of the data to improve read performance.

In the scenario below we are going to increase the replication factor of the existing topology from 1 to 3 to make it read friendly.

export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> show topology
store=mystore  numPartitions=30 sequence=60
  dc=[dc1] name=datacenter1 repFactor=1

  sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
    [rg2-rn1] RUNNING
          No performance info available
  sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING
    [rg3-rn1] RUNNING
          No performance info available

  shard=[rg1] num partitions=10
    [rg1-rn1] sn=sn1
  shard=[rg2] num partitions=10
    [rg2-rn1] sn=sn2
  shard=[rg3] num partitions=10
    [rg3-rn1] sn=sn3

kv-> plan change-parameters -service sn1 -wait -params capacity=3
Executed plan 8, waiting for completion...
Plan 8 ended successfully
kv-> plan change-parameters -service sn2 -wait -params capacity=3
Executed plan 9, waiting for completion...
Plan 9 ended successfully
kv-> plan change-parameters -service sn3 -wait -params capacity=3
Executed plan 10, waiting for completion...
Plan 10 ended successfully
kv-> topology clone -current -name 3x3
Created 3x3
kv-> topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1
Changed replication factor in 3x3
kv-> topology preview -name 3x3
Topology transformation from current deployed topology to 3x3:
Create 6 RNs

shard rg1
  2 new RNs : rg1-rn2 rg1-rn3
shard rg2
  2 new RNs : rg2-rn2 rg2-rn3
shard rg3
  2 new RNs : rg3-rn2 rg3-rn3

kv-> plan deploy-topology -name 3x3 -wait
Executed plan 11, waiting for completion...
Plan 11 ended successfully
kv-> show topology
store=mystore  numPartitions=30 sequence=67
  dc=[dc1] name=datacenter1 repFactor=3

  sn=[sn1]  dc=dc1 server1:5000 capacity=3 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
    [rg2-rn2] RUNNING
          No performance info available
    [rg3-rn2] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=3 RUNNING
    [rg1-rn2] RUNNING
          No performance info available
    [rg2-rn1] RUNNING
          No performance info available
    [rg3-rn3] RUNNING
          No performance info available
  sn=[sn3]  dc=dc1 server3:5200 capacity=3 RUNNING
    [rg1-rn3] RUNNING
          No performance info available
    [rg2-rn3] RUNNING
          No performance info available
    [rg3-rn1] RUNNING
          No performance info available

  shard=[rg1] num partitions=10
    [rg1-rn1] sn=sn1
    [rg1-rn2] sn=sn2
    [rg1-rn3] sn=sn3
  shard=[rg2] num partitions=10
    [rg2-rn1] sn=sn2
    [rg2-rn2] sn=sn1
    [rg2-rn3] sn=sn3
  shard=[rg3] num partitions=10
    [rg3-rn1] sn=sn3
    [rg3-rn2] sn=sn1
    [rg3-rn3] sn=sn2

So what have we done?

plan change-parameters -service sn1 -wait -params capacity=3
plan change-parameters -service sn2 -wait -params capacity=3
plan change-parameters -service sn3 -wait -params capacity=3
We are increasing the capacity of each storage node from 1 to 3 with the change-parameters command.

topology clone -current -name 3x3
We are cloning the current topology under the new name 3x3.

topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1
We are using the change-repfactor command to modify the replication factor to 3. The replication factor cannot be changed for this topology after this command has been executed.
You can use the show topology command to verify that the storage nodes are up and running. Alternatively, use the web interface to check the 3x3 distribution across the storage nodes.
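The capacity change above follows from simple arithmetic: 3 shards at replication factor 3 means 9 replication nodes, which must fit on 3 storage nodes. A small sketch (a hypothetical helper, not part of any Oracle NoSQL API):

```python
# Hypothetical helper illustrating the 3x3 sizing; not an Oracle NoSQL API.
# shards * replication_factor replication nodes must fit within the
# combined capacity of the storage nodes.
def required_capacity_per_sn(shards, replication_factor, storage_nodes):
    total_rns = shards * replication_factor   # 3 * 3 = 9 RNs
    # Each storage node must host its even share of the RNs.
    return -(-total_rns // storage_nodes)     # ceiling division

print(required_capacity_per_sn(3, 3, 3))  # each SN needs capacity 3
```

This is why each of the three change-parameters plans raises capacity to exactly 3 before the 3x3 topology is deployed.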

Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4

Senthil Rajendran - Thu, 2014-04-10 02:39
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
Big Data Oracle NoSQL in No Time - Getting Started Part 1
Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3

Previously we set up a 1x1 topology, and now we are going to move to a 3x1 topology.
By doing so we increase how widely the data is distributed in the NoSQL store. The main advantage is increased write throughput, and this is achieved using the redistribute command. During redistribution, partitions are spread across the new shards; the end result is more replication nodes to service your write operations.

In the scenario below we are going to add two replication nodes to the existing topology to make it write friendly.

$ export KVBASE=/oraclenosql/lab
$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> plan deploy-sn -dc dc1 -port 5100 -wait -host server2
Executed plan 5, waiting for completion...
Plan 5 ended successfully
kv-> plan deploy-sn -dc dc1 -port 5200 -wait -host server3
Executed plan 6, waiting for completion...
Plan 6 ended successfully
kv-> show topology
store=mystore  numPartitions=30 sequence=36
  dc=[dc1] name=datacenter1 repFactor=1

  sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
  sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING

  shard=[rg1] num partitions=30
    [rg1-rn1] sn=sn1

kv-> topology clone -current -name 3x1
Created 3x1
kv-> topology redistribute -name 3x1 -pool AllStorageNodes
Redistributed: 3x1
kv-> topology preview -name 3x1
Topology transformation from current deployed topology to 3x1:
Create 2 shards
Create 2 RNs
Migrate 20 partitions

shard rg2
  1 new RN : rg2-rn1
  10 partition migrations
shard rg3
  1 new RN : rg3-rn1
  10 partition migrations

kv-> plan deploy-topology -name 3x1 -wait
Executed plan 7, waiting for completion...
Plan 7 ended successfully
kv-> show topology
store=mystore  numPartitions=30 sequence=60
  dc=[dc1] name=datacenter1 repFactor=1

  sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
    [rg2-rn1] RUNNING
          No performance info available
  sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING
    [rg3-rn1] RUNNING
          No performance info available

  shard=[rg1] num partitions=10
    [rg1-rn1] sn=sn1
  shard=[rg2] num partitions=10
    [rg2-rn1] sn=sn2
  shard=[rg3] num partitions=10
    [rg3-rn1] sn=sn3


So what have we done?

plan deploy-sn -dc dc1 -port 5100 -wait -host server2
We are adding the second storage node into the datacenter dc1 which already has one storage node.

plan deploy-sn -dc dc1 -port 5200 -wait -host server3
We are adding one more storage node into the datacenter dc1 making it three storage nodes.

topology clone -current -name 3x1
We are cloning the existing 1x1 topology to a new candidate topology, 3x1. This candidate will be used for the change operations we plan to perform.

topology redistribute -name 3x1 -pool AllStorageNodes
We are redistributing the partitions onto the 3x1 topology.

topology preview -name 3x1
We can preview the topology before deploying it to the store.

plan deploy-topology -name 3x1 -wait
We are deploying the approved 3x1 topology. The deployment will take time to complete, as it depends on the store size.

You can use the show topology command to verify that the storage nodes are up and running. Alternatively, use the web interface to check the 3x1 distribution across the storage nodes.
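The "Migrate 20 partitions" line in the preview follows directly from the partition arithmetic: 30 partitions over 1 shard become 10 per shard over 3 shards. A small sketch with a hypothetical helper (not an Oracle NoSQL API):

```python
# Hypothetical sketch of the redistribute arithmetic; not an Oracle NoSQL
# API. 30 partitions on 1 shard become 30 partitions on 3 shards, so 20
# partitions must migrate to the new shards.
def migrations_needed(num_partitions, old_shards, new_shards):
    per_new_shard = num_partitions // new_shards   # 30 // 3 = 10
    per_old_shard = num_partitions // old_shards   # 30 // 1 = 30
    # Each pre-existing shard keeps at most its new quota; the excess
    # partitions migrate out to the new shards.
    keep = min(per_old_shard, per_new_shard) * old_shards
    return num_partitions - keep

print(migrations_needed(30, 1, 3))  # -> 20
```

The same arithmetic explains why each new shard (rg2, rg3) receives exactly 10 partition migrations in the preview output.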

JDeveloper XSL Mapper tip

Darwin IT - Thu, 2014-04-10 01:48
Of course you know already that in JDeveloper you can create XSL maps just by drawing lines between source and target elements. In many cases you need functions or complex expressions in between. Those are "drag-and-droppable" as well. I found that you can even drop a function on a line and the function will be added to the expression. So, with a little thought about the sequence of drag-and-drops of functions, you can assemble pretty complex expressions just by using the mouse. Although I'm not afraid to hack directly in the source code of the XSL for quickness, I found that this allowed me to spare a few switches between the Design and the Source tab. That is convenient, since hacking the source and switching back to the Design tab causes the Designer to initialize again, forcing you to expand all the nodes you were working on once more. Understandable, but inconvenient with large XSDs.

What I did not know until recently is how to set a variable to an element. So what I did before was to hack in the source a piece of code like:
<xsl:value-of select="$landCodeNL" />

It turns out that you can do that by "drag-and-drop" as well. In the component-palette you need to select the "Advanced" functions:
At the bottom you find an xpath-expression element. Drag-and-drop it into the design area and connect it to the target element.

When you edit it you can just type in your expression, for instance just a variable. When you start with a dollar sign, it even gives you a drop-down list of available variables. Just pick the right one and you're done.

I admit, no high-standard tip, but convenient enough though, for me at least.

_direct_read_decision_statistics_driven, _small_table_threshold and direct path reads on partitioned tables (Part 2)

Mihajlo Tekic - Thu, 2014-04-10 00:30
This is a continuation of my last post regarding direct path reads on partitioned tables in Oracle.

To recap, the behavior I observed is that direct path reads will be performed if the total number of blocks across all partitions that will be accessed exceeds the _small_table_threshold value. That is, if a table consists of 10 partitions of 100 blocks each and a query goes after two of the partitions, direct path reads will be performed if _small_table_threshold is lower than 200.

Also, regardless of how much data has been cached (in the buffer cache) for each of the partitions, if direct path reads are to be performed, all partition segments will be directly scanned. So it is an all-or-nothing situation.
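That observed behavior (with _direct_read_decision_statistics_driven left at TRUE) can be sketched as a single rule over the combined partition sizes; this is an illustration of the observation, not Oracle's actual code:

```python
# Illustration of the observed behavior with
# _direct_read_decision_statistics_driven = TRUE; not Oracle's actual code.
def direct_path_read_true_case(partition_blocks, small_table_threshold):
    """partition_blocks: block counts of the partitions the query accesses.

    The decision is made on the combined size: either every accessed
    partition is read via direct path, or none is (all or nothing)."""
    return sum(partition_blocks) > small_table_threshold

# Two 65-block partitions against a threshold of 117:
print(direct_path_read_true_case([65, 65], 117))  # -> True
```

One 65-block partition stays under the threshold (buffer cache reads); two of them together exceed it, and then both are scanned via direct path.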

I also indicated that _direct_read_decision_statistics_driven parameter was set to TRUE (default) for the tests done in my earlier post.

What is _direct_read_decision_statistics_driven anyway? According to the parameter description, it enables the direct path read decision to be based on optimizer statistics. If the parameter is set to FALSE, Oracle will use segment headers to determine how many blocks the segment has (read Tanel Poder’s blog post for more information).

Let’s see how queries that access table partitions (full scan) behave if the _direct_read_decision_statistics_driven parameter is set to FALSE. My expectation was that the behavior should be the same as with TRUE: I thought that once Oracle got the number of blocks in each of the partitions, it would use the same calculation as when the parameter is set to TRUE. Let’s see.

But before moving forward, a small disclaimer: do not perform these tests in production or any other important environment. Changing undocumented parameters should be done under the guidance of Oracle Support. The information presented here is for demonstration purposes only.

I will use the same table, TEST_PART, that I used in my earlier post.

I started with flushing the buffer cache (to make sure none of the partitions has blocks in the cache).

I set the _direct_read_decision_statistics_driven parameter to FALSE and ran a query that selects data from the PART_1 partition only. Each of the partitions contains 4000 rows stored in 65 blocks, plus one segment header block.

_small_table_threshold in my sandbox environment was set to 117.

SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

Session altered.

SQL> SELECT count(1) FROM test_part WHERE col1 in (1);


As expected, no direct path reads were performed (I used my sese.sql script, which scans v$sesstat for statistics that match a given keyword).

SQL> @sese direct

no rows selected

Now let’s see what happens with a query that accesses the first two partitions. Remember, with _direct_read_decision_statistics_driven set to TRUE, this query performed direct path reads because the number of blocks in both partitions, 130 (2x65), exceeds the
_small_table_threshold value (117).

SQL> select count(1) from test_part where col1 in (1,2);


SQL> @sese direct

no rows selected

No direct reads. Definitely different compared to when _direct_read_decision_statistics_driven was set to TRUE.

How about for a query that accesses three partitions:

SQL> select count(1) from test_part where col1 in (1,2,3);


SQL> @sese direct

no rows selected

Still no direct path reads.

How about if we access all 7 partitions:

SQL> select count(1) from test_part where col1 in (1,2,3,4,5,6,7);


SQL> @sese direct

no rows selected

No direct path reads.

So what is going on? It seems that when _direct_read_decision_statistics_driven is set to FALSE, Oracle makes the decision on a partition-by-partition basis: if the number of blocks in the partition is less than or equal to _small_table_threshold, the buffer cache is used; otherwise direct path reads are performed.

What if some of the partitions were already cached in the buffer cache?

In the next test I’ll:
  • Flush the buffer cache again
  • Set _direct_read_decision_statistics_driven to FALSE
  • Run a query that accesses the first two partitions
  • Decrease the value of _small_table_threshold to 60
  • Run a query that accesses the first three partitions
  • Check whether direct path reads were performed, and how many
With this test I’d like to see whether Oracle will still use the buffer cache when the segment data is cached but the number of blocks in the partition is greater than _small_table_threshold.

SQL> alter system flush buffer_cache;

System altered.

SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

Session altered.

SQL> select count(1) from test_part where col1 in (1,2);


SQL> @sese direct

no rows selected

At this point, PART_1 and PART_2 partitions should be entirely in the buffer cache. If you want, you could query X$KCBOQH to confirm this (from a different session logged in as SYS).

SQL> conn /as sysdba
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME                       OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1                             146024             66
PART_2                             146025             66

As expected, both partitions are in the buffer cache.

Now let’s decrease _small_table_threshold to 60 and run a query that scans the first three partitions:

SQL> alter session set "_small_table_threshold"=60;

Session altered.

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select count(1) from test_part where col1 in (1,2,3);


SQL> alter session set events '10046 trace name context off';

SQL> @sese direct

---------- ---------- -------------------------------------------------- ----------
9 76 STAT.consistent gets direct 65
9 81 STAT.physical reads direct 65
9 380 STAT.table scans (direct read) 1

Here they are: 65 direct path reads and one table scan (direct read), which means one of the partitions was scanned using direct path reads. Which one? Yes, you are right: the one that is not in the buffer cache (PART_3 in this example).

If you query X$KCBOQH again you can see that only one block of PART_3 is in the cache. That is the segment header block.

SQL> conn /as sysdba
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME                       OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1                             146024             66
PART_2                             146025             66
PART_3                             146026              1 <===

This means that when _direct_read_decision_statistics_driven is set to FALSE, Oracle uses a totally different calculation compared to the one used when the parameter is set to TRUE (see my earlier post).

Moreover, it seems Oracle examines each of the partitions separately (which I initially expected to be the case even when _direct_read_decision_statistics_driven is set to TRUE) and applies the rules described in Alex Fatkulin’s blog post. That is, if any of the following is true, Oracle will scan the data in the buffer cache; otherwise direct path reads will be performed:
  •  the number of blocks in the segment is lower than or equal to _small_table_threshold
  •  at least 50% of the segment's data blocks are in the buffer cache
  •  at least 25% of the data blocks are dirty
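Put together, the per-partition decision observed here (parameter set to FALSE) can be sketched as follows; this is a summary of the rules above, not Oracle's actual implementation:

```python
# Sketch of the observed per-partition decision with
# _direct_read_decision_statistics_driven = FALSE; not Oracle's actual code.
def uses_buffer_cache(blocks, cached_blocks, dirty_blocks,
                      small_table_threshold):
    """True if the partition segment is scanned via the buffer cache."""
    return (blocks <= small_table_threshold         # small segment
            or cached_blocks >= 0.50 * blocks       # mostly cached already
            or dirty_blocks >= 0.25 * blocks)       # many dirty blocks

# With the threshold lowered to 60: PART_1 is fully cached, PART_3 is not.
print(uses_buffer_cache(65, cached_blocks=65, dirty_blocks=0,
                        small_table_threshold=60))  # -> True
print(uses_buffer_cache(65, cached_blocks=0, dirty_blocks=0,
                        small_table_threshold=60))  # -> False
```

This matches the trace above: the two cached partitions were read from the buffer cache, while PART_3 alone was scanned via direct path.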
The conclusion so far is that you may observe different behavior for queries that access table partitions via full scans if you decide to change the _direct_read_decision_statistics_driven parameter.

I will stop here. I ran the same tests against other releases and noticed some differences in behavior compared to the one I just wrote about. I will post those results in the next few days.

Stay tuned...

Paul Vallée’s Interview with Oracle Profit Magazine

Pythian Group - Wed, 2014-04-09 23:00

Aaron Lazenby, editor at Oracle’s Profit Magazine, interviewed Pythian founder Paul Vallée this week to discuss the growing risk of internal threats to IT.

“What we need to create is complete accountability for everything that happens around a data center, and that’s where our industry is not up to snuff right now. We tend to think that if you secure access to the perimeter of the data center, then what happens in the meeting inside can be unsupervised. But that’s not good enough,” says Paul.

The interview, Inside Job, is a preview of Paul’s Collaborate ’14 session taking place later today in Las Vegas. If you’re at Collaborate, make sure you don’t miss Paul’s presentation Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden. The presentation begins at 4:15 PM Pacific at the Venetian, Level 3 – Murano 3306.

What are your thoughts? How else can organizations mitigate the risk of internal threats? Comment below.

Categories: DBA Blogs

SQL Developer’s Interface for GIT: Interacting with a GitHub Repository Part 1

Galo Balda's Blog - Wed, 2014-04-09 22:45

In my previous post, I showed how to clone a GitHub repository using SQL Developer. In this post I’m going to show how to synchronize the remote and local repositories after the remote gets modified.

Here I use GitHub to commit a file called sp_test_git.pls.  You can create files by clicking on the icon the red arrow is pointing to.


The content of the file is a PL/SQL procedure that prints a message.


At this point, the remote repository and the local repository are out of sync. The first thing that you may want to do before modifying any repository is to make sure that you have the most current version of it, so that it includes the changes made by other developers. Let’s synchronize remote and local.

Make sure you open the Versions window. Go to the main menu click on Team -> Versions.


Open the Local branch and click on master, then go to the main menu and click Team -> Git -> Fetch to open the “Fetch from Git” wizard. Fetching copies changes from the remote repository to your local system without modifying any of your current branches. Once you have fetched the changes, you can merge them into your branches or simply view them. We can see the changes in the Branch Compare window by going to the main menu and clicking Team -> Git -> Branch Compare.


Branch Compare shows that sp_test_git.pls has been fetched from the remote master branch. We can right-click on this entry and select Compare to see the differences.


The window on the left displays the content of the fetched file and the window on the right displays the content of the same file in the local repository. In this case the right window is empty because this is a brand new file that doesn’t exist locally. Let’s accept the changes and merge them into the local repository: go to the Branch Compare window, right-click on the entry, select Merge and click the “Ok” button.


Now the changes should have been applied to the local repository.


We can go to the path where the local repository is located and confirm that sp_test_git.pls is there.



Filed under: GIT, SQL Developer, Version Control
Categories: DBA Blogs

New Web Services Capabilities available

Anthony Shorten - Wed, 2014-04-09 17:59

As part of Oracle Utilities Application Framework V4., a new set of Web Services capabilities is now available to completely replace the Multi-Purpose Listener (MPL) and the XAI Servlet with more exciting capabilities.

Here is a summary of the facilities:

  • There is a new Inbound Web Services (IWS) capability to replace the XAI Inbound Services and XAI Servlet (which will be deprecated in a future release). This capability combines the meta data within the Oracle Utilities Application Framework with the power of the native Web Services capability within the J2EE Web Application Server to give the following advantages:
    • It is possible to define individual Web Services to be deployed on the J2EE Web Application Server. Web based and command line utilities have been provided to allow developers to design, deploy and manage individual Inbound Web Services.
    • It is now possible to define multiple operations per Web Service. XAI was restricted to a single operation with multiple transaction types. IWS supports multiple operations separated by transaction type. Operations can even extend to different objects within the same Web Service. This will aid in rationalizing Web Services.
IWS makes it possible to monitor and manage individual Web Services from the J2EE Web Application Server console (or Oracle Enterprise Manager). These metrics are also available in Oracle Enterprise Manager to provide SLA and trend tracking capabilities, and can be fine-grained down to the operation level within a Web Service.
    • IWS allows greater flexibility in security. Individual Services can now support standards such as WS-Policy, WS-ReliableMessaging etc as dictated by the capabilities of the J2EE Web Application Server. This includes message and transport based security, such as SAML, X.509 etc and data encryption.
    • For customers lucky enough to be on Oracle WebLogic and/or Oracle SOA Suite, IWS now allows full support for Oracle Web Services Manager (OWSM) on individual Web Services. This also allows the Web Services to enjoy additional WS-Policy support, as well as, for the first time, Web Service access rules. These access rules allow you to control when and who can run the individual service using simple or complex criteria ranging from system settings (such as dates and times), security (the user and roles) or individual data elements in the payload.
    • Customers migrating from XAI to IWS will be able to reuse a vast majority of their existing definitions. The only change is that each IWS service has to be registered and redeployed to the server, using the provided tools, and the URL for invoking the service will be altered. XAI can be used in parallel to allow for flexibility in migration.
  • The IWS capability and the migration path for customers using XAI Inbound Services is available in a new whitepaper Migrating from XAI to IWS (Doc Id: 1644914.1) available from My Oracle Support.

Over the next few weeks I will be publishing articles highlighting capabilities for both IWS and the OSB to help implementations upgrade to these new capabilities.

Database experts contend with clients using Windows XP

Chris Foot - Wed, 2014-04-09 14:23

Despite the fact that fair warning was given to Windows XP users several months before Microsoft announced that it would terminate support services for the outdated operating system, a large number of businesses continue to use it. Citing security concerns, database administration services have urged these professionals to make the transition to Windows 8.1.

Why it's a concern
The last four patches were delivered to XP users on April 7. Michael Endler, a contributor to InformationWeek, stated that the 12-year-old OS still has twice as many users as Windows 8 and 8.1 combined. It's believed that the general reluctance to switch to the new systems is rooted in how comfortable XP users have become with the solution. The problem is, IT professionals expect hackers to launch full-scale assaults on the machines hosting these programs in an attempt to harvest information belonging to individuals, as well as to the companies they work for.

To the dismay of consumers, a fair number of banks and other organizations handling a large flow of sensitive customer data are still using XP. However, many of these institutions have hired the expertise of database support services to provide protection and surveillance for their IT infrastructures. Endler noted that select XP subscribers will still receive backing from Microsoft, though they'll be shelling out millions of dollars for the company to do so. 

Making a push for the new OS
In an effort to convince others to switch to the new Windows 8.1 update, Microsoft took a couple of strategic initiatives. First, the corporation offered $100 to users still operating the 12-year-old system, to help cover the cost of obtaining up-to-date machines. In addition, CIO reported that Windows 8.1 users won't receive patches or other future updates for the OS unless they install the new update. In other words, if businesses don't favor the changes the company has been making to 8.1, there's no way they can receive security fixes, leaving many to rely on database administration to mitigate the issue.

In contrast, Windows 7 and 8 users will still continue to receive the same assortment of patches they've been accepting. Though Microsoft has garnered generally positive attention for its integration of cloud and mobile applications into its brand, the company's business techniques have been met with criticism. It's likely that the software giant is simply employing these strategies to assert itself as a forward-acting corporation. 

Oracle Application Express 4.2.5 now available

Joel Kallman - Wed, 2014-04-09 13:50
Oracle Application Express 4.2.5 is now released and available for download.  If you wish to download the full release of Oracle Application Express 4.2.5, you can get it from the Downloads page on OTN.  If you have Oracle Application Express 4.2, 4.2.1, 4.2.2, 4.2.3 or 4.2.4 already installed, then you need to download the APEX 4.2.5 patch set from My Oracle Support.  Look up patch number 17966818.

As is stated in the patch set note that accompanies the Oracle Application Express 4.2.5 patch set:
  • If you have Oracle Application Express release 4.2, 4.2.1, 4.2.2, 4.2.3 or 4.2.4 installed, download the Oracle Application Express 4.2.5 patch set from My Oracle Support and apply it.  Remember - patch number 17966818.
  • If you have Oracle Application Express release 4.1.1 or earlier installed (including Oracle HTML DB release 1.5), download and install the entire Oracle Application Express 4.2.5 release from the Oracle Technology Network (OTN).
  • If you do not have Oracle Application Express installed, download and install the entire Oracle Application Express 4.2.5 release from the Oracle Technology Network (OTN).
As usual, there are a large number of issues corrected in the Application Express 4.2.5 patch set.  You can see the full list in the patch set note.

Some changes in the Oracle Application Express 4.2.5 patch set:
  1. A number of bug fixes and functionality additions to many of the Packaged Applications.
  2. One new packaged application - Live Poll.  This was the creation of Mike Hichwa.  Live Poll is intended for real-time, very brief polling (in contrast to a formal survey, which can be created and administered via Survey Builder).
  3. One new sample application - the Sample Geolocation Showcase, created by Oracle's Carsten Czarski, who did a masterful job in demonstrating how Oracle's spatial capabilities (via Oracle Locator) can be easily exploited in an Oracle Application Express application.  Try it for yourself today on!
  4. A handful of bug fixes in the underlying Application Express engine and APIs.

APEX 4.2.5 should be the end of the line for Oracle Application Express 4.2.x.

On Error Messages

Chen Shapira - Wed, 2014-04-09 13:01

Here’s a pet peeve of mine: customers who don’t read error messages. The usual symptom is a belief that there is just one error, “Doesn’t work”, and that all forms of “doesn’t work” are the same. So if you tried something, got an error, then changed something and are still getting an error, nothing has changed.

I hope everyone who reads this blog understands why this behavior makes any troubleshooting nearly impossible. So I won’t bother to explain why I find this so annoying and so self-defeating. Instead, I’ll explain what we, as developers, can do to improve the situation a bit. (OMG, did I just refer to myself as a developer? I do write code that is then used by customers, so I may as well take responsibility for it.)

Here’s what I see as main reasons people don’t read error messages:

  1. The error message is so long that users don’t know where to start reading. Errors with multiple Java stack dumps are especially fun. Stack traces are useful only to people who look at the code, so while it’s important to capture them (for support), in most cases your users don’t need to see all that very specific information.
  2. Many different errors lead to the same message. The error message simply doesn’t indicate what the error may be, because it can be one of many different things. I think Kerberos is the worst offender here; so many failures look identical. If this happens very often, you tune out the error message.
  3. The error is so technical and cryptic that it gives you no clue on where to start troubleshooting.  “Table not Found” is clear. “Call to localhost failed on local exception” is not.

I spend a lot of time explaining to my customers “When <app X> says <this> it means that <misconfiguration> happened and you should <solution>”.

To get users to read error messages, I think error messages should be:

  1. Short. Single line or less.
  2. Clear. As much as possible, explain what went wrong in terms your users should understand.
  3. Actionable. There should be one or two actions that the user should take to either resolve the issue or gather enough information to deduce what happened.
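As a sketch of what “short, clear, actionable” can look like in code (the error ID scheme, names, and wording below are invented for illustration):

```python
class ActionableError(Exception):
    """An error that is short (one line), clear (plain description),
    and actionable (carries a suggested next step)."""
    def __init__(self, error_id, description, action):
        self.error_id = error_id
        self.description = description
        self.action = action
        super().__init__(f"{error_id}: {description}. Try: {action}")

# A single line that tells the user what happened and what to do next
err = ActionableError("APP-00042",
                      "Table 'orders' not found in schema 'shop'",
                      "check the schema name in the app configuration")
print(err)
```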

I think Oracle is doing a pretty good job of it. Every one of their errors has an ID number, a short description, an explanation, and a proposed solution. See here for an example:

If we don’t make our errors short, clear and actionable – we shouldn’t be surprised when our users simply ignore them and then complain that our app is impossible to use (or worse – don’t complain, but also don’t use our app).




Categories: DBA Blogs

Efficacy, Adaptive Learning, and the Flipped Classroom, Part II

Michael Feldstein - Wed, 2014-04-09 10:45

In my last post, I described positive but mixed results of an effort by MSU’s psychology department to flip and blend their classroom:

  • On the 30-item comprehensive exam, students in the redesigned sections performed significantly better (84% improvement) compared to the traditional comparison group (54% improvement).
  • Students in the redesigned course demonstrated significantly more improvement from pre to post on the 50-item comprehensive exam (62% improvement) compared to the traditional sections (37% improvement).
  • Attendance improved substantially in the redesigned section. (Fall 2011 traditional mean percent attendance = 75% versus fall 2012 redesign mean percent attendance = 83%)
  • They did not get a statistically significant improvement in the number of failures and withdrawals, which was one of the main goals of the redesign, although they note that “it does appear that the distribution of A’s, B’s, and C’s shifted such that in the redesign, there were more A’s and B’s and fewer C’s compared to the traditional course.”
  • In terms of cost reduction, while they fell short of their 17.8% goal, they did achieve a 10% drop in the cost of the course….

It’s also worth noting that MSU expected to increase enrollment by 72 students annually but actually saw a decline of enrollment by 126 students, which impacted their ability to deliver decreased costs to the institution.

Those numbers were based on the NCAT report that was written up after the first semester of the redesigned course. But that wasn’t the whole story. It turns out that, after several semesters of offering the course, MSU was able to improve their DFW numbers after all:

[Chart: MSU DFW rates]

That’s a fairly substantial reduction. In addition, their enrollment numbers have returned to roughly what they were pre-redesign (although they haven’t yet achieved the enrollment increases they originally hoped for).

When I asked Danae Hudson, one of the leads on the project, why she thought it took time to see these results, here’s what she had to say:

I do think there is a period of time (about a full year) where students (and other faculty) are getting used to a redesigned course. In that first year, there are a few things going on 1) students/and other faculty are hearing about “a fancy new course” – this makes some people skeptical, especially if that message is coming from administration; 2) students realize that there are now a much higher set of expectations and requirements, and have all of their friends saying “I didn’t have to do any of that!” — this makes them bitter; 3) during that first year, you are still working out some technological glitches and fine tuning the course. We have always been very open with our students about the process of redesign and letting them know we value their feedback. There is a risk to that approach though, in that it gives students a license to really complain, with the assumption that the faculty team “doesn’t know what they are doing”. So, we dealt with that, and I would probably do it again, because I do really value the input from students.

I feel that we have now reached a point (2 years in) where most students at MSU don’t remember the course taught any other way and now the conversations are more about “what a cool course it is etc”.

Finally, one other thought regarding the slight drop in enrollment we had. While I certainly think a “new blended course” may have scared some students away that first year, the other thing that happened was there were some scheduling issues that I didn’t initially think about. For example, in the Fall of 2012 we had 5 sections and in an attempt to make them very consistent and minimize missed classes due to holidays, we scheduled all sections on either a Tuesday or a Wednesday. I didn’t think about how that lack of flexibility could impact enrollment (which I think it did). So now, we are careful to offer sections (Monday through Thursday) and in morning and afternoon.

To sum up, she thinks there were three main factors: (1) it took time to get the design right and the technology working optimally; (2) there was a shift in cultural expectations on campus that took several semesters; and (3) there was some noise in the data due to scheduling glitches.

There are a number of lessons one could draw from this story, but from the perspective of educational efficacy, I think it underlines how little the headlines (or advertisements) we get really tell us, particularly about components of a larger educational intervention. We could have read, “Pearson’s MyPsychLabs Course Substantially Increased Students’ Knowledge, Study Shows.” That would have been true, but we would have little idea how much improvement there would have been had the course not been fairly radically redesigned at the same time. We also could have read, “Pearson’s MyPsychLabs Course Did Not Improve Pass and Completion Rates, Study Shows.” That would have been true, but it would have told us nothing about the substantial gains over the semesters following the study. We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it’s closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we’re really, really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

The post Efficacy, Adaptive Learning, and the Flipped Classroom, Part II appeared first on e-Literate.

Isn’t all Information Digital Already? Well…Not Quite.

WebCenter Team - Wed, 2014-04-09 08:04

Author: Ryan Sullivan, Sr. Solution Specialist with Aurionpro Sena

In today’s create-and-share mindset, the production of digital content is nothing short of second nature for most of us. Snapping a photo and sharing it through Instagram. Sending a personal update through Twitter. Publishing a restaurant review on Yelp. On the flip side, a large portion of the business world doesn’t work that way yet. “Old school” processes and industry regulations often dictate that much of our business information still needs to exist (at least for part of its lifecycle) in a non-digital format. Material safety data sheets are often printed and placed on manufacturing floors. Invoices are still faxed to clients. Purchase receipts are commonly attached to paper-based expense reports.

Over the last decade, many enterprises have been trying to handle the management and processing of these non-digital pieces of content more effectively. Many have done a great job doing so, but surprisingly, many are not even close yet. So how would a company approach such a challenge? The first phase of this type of initiative focuses on planning out the process for digitizing non-digital content. For many of the world’s largest corporations, these projects start with Oracle’s WebCenter Enterprise Capture 11g.

After implementing WebCenter Enterprise Capture dozens of times, we find ourselves explaining the tool’s most basic components time and time again. Here’s how we typically break it down:

Oracle’s WebCenter Enterprise Capture is a single, thin-client entity with a separation between the admin “console” and the user “client”. The “console” provides access to all of the security, metadata, classification, capture, processing, and commit configurations while the “client” handles the actual scanning and indexing.

Configuring Capture is quite straightforward, once you understand the constructs. Workspaces are the components used to define commit configurations, metadata, security, users, etc. Multiple workspaces can be defined, and each can be cloned and migrated through dev/test/production environments, which help greatly during the implementation process. Within workspaces live different categories of “processors”. The “import processor” usually starts the process by completing an import from email, folders, and list files. Next, the “document conversion processor” exposes the Outside In Technology (OIT), which actually performs the digital transformation, followed by the “recognition processor”, which provides automated bar code recognition, document separation, and indexing for image documents in a Capture workspace. Finally, the “commit processor” completes the processing and converts the output into TIFF, PDF Image-only, or Searchable PDF formats.

Many of the processing configurations have a “post-processing” option available for configuration of the next processor, and three levels of customization are provided to enable the development of feature extensions. These customizations are written in JavaScript and can be implemented in the “client”, the “import processor”, and the “recognition processor”.
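The chain of processors described above can be pictured as a simple pipeline. The sketch below is an illustrative Python model only; the function names and batch fields are invented for the sketch and are not Capture's actual API:

```python
from functools import reduce

def import_processor(batch):
    batch["pages"] = ["scan1.tif", "scan2.tif"]   # e.g. picked up from a watched folder
    return batch

def document_conversion_processor(batch):
    # In the real product, OIT performs the transformation; here we just rename
    batch["converted"] = [p.replace(".tif", ".pdf") for p in batch["pages"]]
    return batch

def recognition_processor(batch):
    batch["index"] = {"barcode": "INV-1001"}      # bar-code driven indexing
    return batch

def commit_processor(batch):
    batch["committed"] = True                     # commit to the target format
    return batch

pipeline = [import_processor, document_conversion_processor,
            recognition_processor, commit_processor]
result = reduce(lambda batch, step: step(batch), pipeline, {})
print(result["converted"], result["committed"])
```

The point of the model: each processor takes the output of the previous one, which is why the "post-processing" option in each configuration simply names the next processor in the chain.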

With a little bit of training, and some guidance from Imaging experts, the first step of transforming manual-intensive paper-based activities to efficient and automated processes can be easily accomplished with Oracle WebCenter Enterprise Capture. For more information on WebCenter Enterprise Capture, or any aspect of a WebCenter project, feel free to contact

About the Author: Ryan Sullivan is a Sr. Solution Specialist with Aurionpro Sena, and an expert across the Oracle WebCenter suite of products. Ryan has focused his entire career on architecting, developing, customizing, and supporting content-centric applications, during which time he has led more than a dozen WebCenter implementations. You can follow Ryan on Sena's WebCenter blog at and can find him presenting session #261 on the topic of Oracle WebCenter Capture at the upcoming COLLABORATE14 event in Las Vegas on Fri, April 11th at 11am.

Pivotal Greenplum GPLOAD with multiple CSV files

Pas Apicella - Wed, 2014-04-09 05:48
I recently needed to set up a cron script which loaded CSV files from a directory into Greenplum every 2 minutes. Once loaded, the files are moved onto Hadoop for archive purposes. The config below shows how to use the GPLOAD data load utility, which utilises GPFDIST.

1. Create a load table. In this example the data is then moved to the FACT table once the load is complete
drop table rtiadmin.rtitrans_etl4;

CREATE TABLE rtiadmin.rtitrans_etl4 (
imsi character varying(82),
subscriber_mccmnc character varying(10),
msisdn character varying(82),
imei character varying(50),
called_digits character varying(50),
start_datetime integer,
end_datetime integer,
first_cell_lac integer,
first_cell_idsac integer,
current_cell_lac integer,
current_cell_idsac integer,
dr_type integer,
status character varying(50),
ingest_time bigint,
processed_time bigint,
export_time bigint,
extra_col text,
gploaded_time timestamp without time zone
)
WITH (appendonly=true) DISTRIBUTED BY (imsi);

2. GPLOAD yaml file defined as follows

USER: rtiadmin
PORT: 5432
GPLOAD:
   INPUT:
    - SOURCE:
         LOCAL_HOSTNAME:
            - loadhost
         PORT: 8100
         FILE:
            - /data/rti/stage/run/*.csv
    - COLUMNS:
          - imsi : text
          - subscriber_mccmnc : text
          - msisdn : text
          - imei : text
          - called_digits : text
          - start_datetime : text
          - end_datetime : text
          - first_cell_lac : integer
          - first_cell_idsac : integer
          - current_cell_lac : integer
          - current_cell_idsac : integer
          - dr_type : integer
          - status : text
          - ingest_time : bigint
          - processed_time : bigint
          - export_time : bigint
          - extra_col : text
    - FORMAT: text
    - HEADER: false
    - DELIMITER: ','
    - NULL_AS : ''
    - ERROR_LIMIT: 999999
    - ERROR_TABLE: rtiadmin.rtitrans_etl4_err
   OUTPUT:
    - TABLE: rtiadmin.rtitrans_etl4
    - MAPPING:
           imsi : imsi
           subscriber_mccmnc : subscriber_mccmnc
           msisdn : msisdn
           imei : imei
           called_digits : called_digits
           start_datetime : substr(start_datetime, 1, 10)::int
           end_datetime : substr(end_datetime, 1, 10)::int
           first_cell_lac : first_cell_lac
           first_cell_idsac : first_cell_idsac
           current_cell_lac : current_cell_lac
           current_cell_idsac : current_cell_idsac
           dr_type : dr_type
           status : status
           ingest_time : ingest_time
           processed_time : processed_time
           export_time : export_time
           extra_col : extra_col
           gploaded_time : current_timestamp
   PRELOAD:
    - TRUNCATE : true
    - REUSE_TABLES : true
   SQL:
    - AFTER : "insert into rtitrans select * from rtitrans_etl4"

3. Call GPLOAD as follows

source $HOME/.bash_profile
gpload -f rtidata.yml

Note: We set the ENV variable $PGPASSWORD, which is used during the load if a password is required, as it was in this demo.
A few things worth noting here:
REUSE_TABLES: This ensures the external tables created during the load are maintained and re-used on the next load.
TRUNCATE: This clears the load table prior to the load; we use it because we copy the data into the main FACT table once the load is finished, via the "AFTER" SQL.
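To run the load every 2 minutes as described, the gpload call can be wrapped in a script and scheduled from cron. The entry below is a hypothetical example; the script path and log location are assumptions:

```
# m   h  dom mon dow  command
*/2   *  *   *   *    /home/gpadmin/scripts/load_rti.sh >> /home/gpadmin/logs/load_rti.log 2>&1
```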
Categories: Fusion Middleware

SQLServer: date conversions

Darwin IT - Wed, 2014-04-09 02:14
In my current project I need to query an MS SqlServer database.
Unfortunately the dates are stored as a BigInt instead of a proper date datatype.
So I had to find out how to compare the dates with the system date, and how to get the system date. To log this for possible later use, as an exception, a blog about SqlServer.

To get the system date, you can do:

GETDATE()
It's maybe my Oracle background, but I would write this like:

An alternative is:

CURRENT_TIMESTAMP
I found this at this blog. Contrary to the writer of that blog I would prefer this version, since I found that it works on Oracle too. There are several ways to convert this to a bigint, but the most compact I found is:

( SELECT YEAR(dt)*10000 + MONTH(dt)*100 + DAY(dt) sysdateInt
  FROM
  -- Test Data
  (SELECT GETDATE() dt) a ) utl
The way I wrote this makes it useful as a subquery or a joined query:

SELECT Ent.* ,
       CASE
         WHEN Ent.endDate IS NOT NULL
          AND Ent.endDate-1 < sysdateInt
         THEN Ent.endDate-1
         ELSE sysdateInt
       END refEndDateEntity
FROM   SomeEntity Ent,
       ( SELECT YEAR(dt)*10000 + MONTH(dt)*100 + DAY(dt) sysdateInt
         FROM
         -- Test Data
         (SELECT GETDATE() dt) a ) utl;
To convert a bigint to a date, you can convert it to an 8-character string and then to a date, for example:

CONVERT(DATETIME, CONVERT(CHAR(8), Ent.endDate), 112)
However, I found that although this works in a select clause, in the where clause it runs into a "Data Truncation" error. Maybe it is due to the use of SqlDeveloper, and thus a JDBC connection to SqlServer, but I'm not so enthusiastic about the error responses of SqlServer... I assume the error has to do with SqlServer having to interpret a column value of a row it has not yet selected, that is, when evaluating whether to add the row to the result set. So to make it work, I added the construction as a derived value in the select clause of a 1:1 view on the table, and used that view instead of the table. Then the selected value can be used in the where clause.
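The arithmetic on both sides of the round trip is easy to sanity-check outside the database. Here is the same YYYYMMDD encoding and decoding sketched in Python (an illustration of the logic only, not T-SQL):

```python
from datetime import date, datetime

def date_to_bigint(d):
    # Mirrors YEAR(dt)*10000 + MONTH(dt)*100 + DAY(dt)
    return d.year * 10000 + d.month * 100 + d.day

def bigint_to_date(n):
    # Inverse: interpret the integer as YYYYMMDD
    return datetime.strptime(str(n), "%Y%m%d").date()

print(date_to_bigint(date(2014, 4, 9)))  # 20140409
print(bigint_to_date(20140409))          # 2014-04-09
```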

Deep Dive: Oracle WebCenter Tips and Traps!

Bex Huff - Tue, 2014-04-08 17:26

I'm currently at IOUG Collaborate 2014 in Las Vegas, and I recently finished my 2-hour deep dive into WebCenter. I collected a bunch of tips & tricks in 5 different areas: metadata, contribution, consumption, security, and integrations:

Deep Dive: Oracle WebCenter Content Tips and Traps! from Brian Huff

As usual, a lot of good presentations this year, but the Collaborate Mobile App makes it a bit tough to find them...

Bezzotech will be at booth 1350, right by Oracle, be sure to swing by and register for a free iPad, or even a free consulting engagement!

read more

Categories: Fusion Middleware

Impala docs now included in CDH 5 library

Tahiti Views - Tue, 2014-04-08 16:12
With the release of CDH 5.0.0 and Impala 1.3.0, now for the first time the Impala docs are embedded alongside the CDH Installation Guide, Security Guide, and other CDH docs. This integration makes it easier to link back and forth both ways, and will also help readers find Impala-related content when they search within the CDH 5 library. Here's the full layout of the CDH 5.0.0 library.

John Russell

How Do You Deliver High-Value Moments of Engagement?

WebCenter Team - Tue, 2014-04-08 16:10
Webcast: Delivering Moments of Engagement Across the Enterprise

How Do You Deliver High-Value Moments of Engagement?

The web and mobile have become primary channels for engaging with customers today. To compete effectively, companies need to deliver multiple digital experiences that are contextually relevant to customers and valuable for the business—across various channels and on a global scale. But doing so is a great challenge without the right strategies and architectures in place.

As the kickoff of the new Digital Business Thought Leadership Series, noted industry analyst Geoffrey Bock investigated what some of Oracle’s customers are already doing, and how they are rapidly mobilizing the capabilities of their enterprise ecosystems.

Join us for a conversation about building your digital roadmap for the engaging enterprise. In this webcast you’ll have an opportunity to learn:

  • How leading organizations are extending and mobilizing digital experiences for their customers, partners, and employees
  • The key best practices for powering the high-value moments of engagement that deliver business value
  • Business opportunities and challenges that exist for enterprise wide mobility to fuel multichannel experiences

Register now to attend the webcast.


Thurs, April 17, 2014
10 a.m. PT / 1 p.m. ET

Presented by:

Geoffrey Bock
Principal, Bock & Company

Michael Snow
Senior Product Marketing Director, Oracle WebCenter
