
Feed aggregator

Internals of Querying the Concurrent Requests’ Queue – Revisited for R12.2

Pythian Group - Tue, 2016-04-19 08:01

Once upon a time I wrote about the Internal Workflow of an E-Business Suite Concurrent Manager Process. Many things have changed since that blog post, the most obvious change being the release of Oracle E-Business Suite R12.2. I decided to check whether the way the concurrent manager queues are processed by concurrent manager processes is still the same. My main goal was to see if the manager processes still don’t attempt any coordination to distribute the requests among themselves.

This is how I did the testing:

  • I used the VM templates provided by Oracle to build my R12.2.4 test environment. By the way, I didn’t expect that the process of getting the environment up would be so simple! Downloading the media files from edelivery.oracle.com was the most time-consuming step, once done – it took me just 1 hour to un-compress everything, import the Virtual Assembly file and bring up the R12.2.4 environment on my laptop.
  • 3 Standard managers are defined by default
  • Sleep seconds were left as is = 30 seconds
  • Cache size was increased from 1 to 5.
  • Identified the 3 DB processes that belong to the Standard managers:
    select sid, serial# from v$session where module='e:FND:cp:STANDARD'
  • I enabled tracing with binds and waits for each of them like this (a combined sketch follows this list):
    exec dbms_monitor.session_trace_enable(sid,serial#,true,true);
  • Once that was done I submitted one concurrent program – “Active users” and waited for it to complete.
  • I disabled the tracing:
    exec dbms_monitor.session_trace_disable(sid,serial#);
  • Collected the trace files
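
The per-session steps above can be wrapped into a single block; a minimal sketch (not from the original post), assuming the same module filter as the identification query, could look like this:

BEGIN
  -- loop over the DB sessions of the Standard manager processes
  FOR mgr IN (SELECT sid, serial# FROM v$session
              WHERE module = 'e:FND:cp:STANDARD')
  LOOP
    -- enable SQL trace with waits and binds for each of them
    dbms_monitor.session_trace_enable(
      session_id => mgr.sid,
      serial_num => mgr.serial#,
      waits      => TRUE,
      binds      => TRUE);
  END LOOP;
END;
/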

I found 2 of the trace files to be very interesting. To keep things simple, the manager process “A” will be the one that executed the concurrent request, and process “B” will be the one that didn’t.

Before the “Active Users” Request Was Submitted

No other requests were running at the time I did the testing, so I clearly observed how both Managers A and B queried the FND_CONCURRENT_REQUESTS table. Both trace files displayed the same method of picking requests up from the queue. Note that I’m showing only the lines relevant to the main query, and I have formatted the query text to make it more readable:

PARSING IN CURSOR #139643743645920 len=1149 dep=0 uid=100 oct=3 lid=100 tim=1460211399835915 hv=3722997734 ad='d275f750' sqlid='cd23u4zfyhvz6'
SELECT R.Rowid
FROM Fnd_Concurrent_Requests R
WHERE R.Hold_Flag                             = 'N'
AND R.Status_Code                             = 'I'
AND R.Requested_Start_Date                   <= Sysdate
AND (R.Node_Name1                            IS NULL
OR (R.Node_Name1                             IS NOT NULL
AND FND_DCP.target_node_mgr_chk(R.request_id) = 1))
AND (R.Edition_Name                          IS NULL
OR R.Edition_Name                            <= sys_context('userenv', 'current_edition_name'))
AND EXISTS
  (SELECT NULL
  FROM Fnd_Concurrent_Programs P
  WHERE P.Enabled_Flag         = 'Y'
  AND R.Program_Application_Id = P.Application_Id
  AND R.Concurrent_Program_Id  = P.Concurrent_Program_Id
  AND EXISTS
    (SELECT NULL
    FROM Fnd_Oracle_Userid O
    WHERE R.Oracle_Id = O.Oracle_Id
    AND EXISTS
      (SELECT NULL
      FROM Fnd_Conflicts_Domain C
      WHERE P.Run_Alone_Flag = C.RunAlone_Flag
      AND R.CD_Id            = C.CD_Id
      )
    )
  AND (P.Execution_Method_Code                          != 'S'
  OR (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN ((0,98),(0,100),(0,31721),(0,31722),(0,31757)))
  )
AND ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN ((510,40032),(510,40033),(510,42156),(510,42157),(530,43793),(530,43794),(535,42626),(535,42627),(535,42628)))
ORDER BY NVL(R.priority, 999999999),
  R.Priority_Request_ID,
  R.Request_ID
END OF STMT
EXEC #139643743645920:c=0,e=33,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211399835910
FETCH #139643743645920:c=0,e=546,p=0,cr=106,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211399836507
WAIT #139643743645920: nam='SQL*Net message to client' ela= 3 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211399836572

*** 2016-04-09 10:17:09.837
WAIT #139643743645920: nam='SQL*Net message from client' ela= 30000367 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211429836965
...
EXEC #139643743645920:c=0,e=59,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211429838767
FETCH #139643743645920:c=0,e=689,p=0,cr=106,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211429839587
WAIT #139643743645920: nam='SQL*Net message to client' ela= 4 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211429839652

*** 2016-04-09 10:17:39.840
WAIT #139643743645920: nam='SQL*Net message from client' ela= 30000325 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211459840003
...

It’s important to observe that:

  • All manager processes still compete for the same requests. If the query is executed at the same time, the same list of concurrent requests will be retrieved by all processes.
  • The constant literals used in lines 30-32 mean that the query for checking the queue is still built when the concurrent manager process starts up. These constants are mainly used to implement the specialization rules in the query.
  • Only the rowids of the pending requests’ rows in FND_CONCURRENT_REQUESTS are fetched.
  • The sleep time is clearly visible on lines 41,42 and 48,49.

After the “Active Users” Request Was Submitted – Starting the Concurrent Request

The manager process A was the first to pick up the submitted request, which could be observed by the “r=1” (1 row fetched) in the FETCH call for the query we just reviewed:

FETCH #139643743645920:c=0,e=437,p=0,cr=113,cu=0,mis=0,r=1,dep=0,og=1,plh=3984653669,tim=1460211519844640

Immediately after this, the manager process A locked the row in the FND_CONCURRENT_REQUESTS table; this way, the request got assigned to this process. Notice the similar WHERE predicates used in this query: they are actually required to make sure that the request has still not been picked up by another manager process. However, the main thing here is the fact that the request row is accessed by the “rowid” retrieved earlier (row 45, the value of the bind variable “:reqname” is “AAAjnSAA/AAAyn1AAH” in this case). Locking of the row is done by the “FOR UPDATE OF R.status_code NoWait” clause on line 49:

PARSING IN CURSOR #139643743640368 len=4530 dep=0 uid=100 oct=3 lid=100 tim=1460211519864113 hv=4239777398 ad='cde86338' sqlid='6ya6bzgybbrmq'
SELECT R.Conc_Login_Id,
  R.Request_Id,
  ... excluded other 156 columns for brevity...
FROM fnd_concurrent_requests R,
  fnd_concurrent_programs P,
  fnd_application A,
  fnd_user U,
  fnd_oracle_userid O,
  fnd_conflicts_domain C,
  fnd_concurrent_queues Q,
  fnd_application A2,
  fnd_executables E,
  fnd_conc_request_arguments X
WHERE R.Status_code             = 'I'
AND (R.Edition_Name            IS NULL
OR R.Edition_Name              <= sys_context('userenv', 'current_edition_name'))
AND R.Request_ID                = X.Request_ID(+)
AND R.Program_Application_Id    = P.Application_Id(+)
AND R.Concurrent_Program_Id     = P.Concurrent_Program_Id(+)
AND R.Program_Application_Id    = A.Application_Id(+)
AND P.Executable_Application_Id = E.Application_Id(+)
AND P.Executable_Id             = E.Executable_Id(+)
AND P.Executable_Application_Id = A2.Application_Id(+)
AND R.Requested_By              = U.User_Id(+)
AND R.Cd_Id                     = C.Cd_Id(+)
AND R.Oracle_Id                 = O.Oracle_Id(+)
AND Q.Application_Id            = :q_applid
AND Q.Concurrent_Queue_Id       = :queue_id
AND (P.Enabled_Flag            IS NULL
OR P.Enabled_Flag               = 'Y')
AND R.Hold_Flag                 = 'N'
AND R.Requested_Start_Date     <= Sysdate
AND ( R.Enforce_Seriality_Flag  = 'N'
OR ( C.RunAlone_Flag            = P.Run_Alone_Flag
AND (P.Run_Alone_Flag           = 'N'
OR NOT EXISTS
  (SELECT NULL
  FROM Fnd_Concurrent_Requests Sr
  WHERE Sr.Status_Code         IN ('R', 'T')
  AND Sr.Enforce_Seriality_Flag = 'Y'
  AND Sr.CD_id                  = C.CD_Id
  ))))
AND Q.Running_Processes                                     <= Q.Max_Processes
AND R.Rowid                                                  = :reqname
AND ((P.Execution_Method_Code                               != 'S'
OR (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID)       IN ((0,98),(0,100),(0,31721),(0,31722),(0,31757))))
AND ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN ((510,40032),(510,40033),(510,42156),(510,42157),(530,43793),(530,43794),(535,42626),(535,42627),(535,42628))) 
FOR UPDATE OF R.status_code NoWait

The behavior of the manager process B was a little more interesting. It too managed to fetch the same rowid from the FND_CONCURRENT_REQUESTS table belonging to the submitted “Active Users” request. However, when it tried to lock the row in FND_CONCURRENT_REQUESTS (by using exactly the same query), this happened:

PARSING IN CURSOR #139690311998256 len=4530 dep=0 uid=100 oct=3 lid=100 tim=1460211519900924 hv=4239777398 ad='cde86338' sqlid='6ya6bzgybbrmq'
...
BINDS #139690311998256:
...
Bind#2
  oacdty=01 mxl=32(18) mxlc=00 mal=00 scl=00 pre=00
  oacflg=20 fl2=1000001 frm=01 csi=873 siz=0 off=64
  kxsbbbfp=7f0c2f713f20  bln=32  avl=18  flg=01
  value="AAAjnSAA/AAAyn1AAH"
EXEC #139690311998256:c=1000,e=1525,p=0,cr=25,cu=1,mis=0,r=0,dep=0,og=1,plh=4044729389,tim=1460211519902727
ERROR #139690311998256:err=54 tim=1460211519902750

The query failed with “ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired”.
This is how the access to pending concurrent requests is serialized to make sure only one of the manager processes can run a given request. And, I think, relying on the well-tuned and highly efficient locking mechanism of Oracle Database is a very smart idea.
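
To see the serialization pattern in isolation, here is a simplified sketch (not taken from the trace files); the rowid literal is the one captured above and the column list is trimmed down:

-- Session 1 (manager A): lock the pending request row by rowid
SELECT r.status_code
FROM   fnd_concurrent_requests r
WHERE  r.rowid = CHARTOROWID('AAAjnSAA/AAAyn1AAH')
FOR UPDATE OF r.status_code NOWAIT;   -- succeeds, the row is now locked

-- Session 2 (manager B): the same statement against the same rowid
SELECT r.status_code
FROM   fnd_concurrent_requests r
WHERE  r.rowid = CHARTOROWID('AAAjnSAA/AAAyn1AAH')
FOR UPDATE OF r.status_code NOWAIT;
-- fails immediately with
-- ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired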

Conclusions
  • There is still no coordination between manager processes to distribute the requests: the managers all query the queue the same way and then compete among themselves to lock the requests’ entries in the table first. The process that gets the lock also gets to execute the concurrent request.
  • The cache size variable couldn’t be observed in the trace files, but as far as I remember from my previous research the process would only fetch a “cache size” number of rowids using the first query in this post. This could be tested by submitting a larger volume of requests simultaneously.
  • The “sleep seconds” setting kicks in only when the manager process didn’t fetch any rowids from the queue. After all the cached requests have been attempted/executed by the manager process, the queue is checked again immediately, without waiting for the “sleep seconds” (not explained in detail in this post, but it’s revealed in the trace files).
  • The statements used to query FND_CONCURRENT_REQUESTS and to lock the row are very similar to those in pre-R12.2 releases of E-Business Suite (another sign that the process hasn’t changed, though one change that I do see is the addition of WHERE clause predicates for checking the editions).
Categories: DBA Blogs

State of Higher Ed LMS Market for US and Canada: Spring 2016 Edition

Michael Feldstein - Tue, 2016-04-19 05:00

By Phil Hill

This is the eighth year I have shared the LMS market share graphic, commonly known as the squid graphic, for (mostly) US higher education. The original idea remains – to give a picture of the LMS market in one page, highlighting the story of the market over time. The key to the graphic is that the width of each band represents the percentage of institutions using a particular LMS as its primary system.

This year marks a significant change based on our upcoming LMS subscription service. We are working with LISTedTECH to provide market data and visualizations. This data source provides historical and current measures of institutional adoptions, allowing new insights into how the market has worked and current trends. The current graphic gets all of its data from LISTedTECH. Previous versions of the graphic used an anchoring technique, combining data from different sources in different years, with interpolation where the data was unavailable; now, every year’s data is based on this single data source.

This graphic has been in the public domain for years, however, and we think it best to keep it that way. In this way we hope that the new service will provide valuable insight for subscribers but also improve what we continue to share here on the e-Literate blog.

Since we have data over time now and not just snapshots, we have picked the end of each year for that data. For this reason, the data goes through the end of 2015. We have 2016 data but chose not to share partial-year results in an effort to avoid confusion.

[Graphic: LMS_MarketShare_20160316]

A few items to note:

  • As noted in previous years, the fastest-growing LMS is Canvas. There is no other solution close in terms of matching the Canvas growth.
  • Blackboard continues to lose market share, although the vast majority of that reduction over the past two years has been from customers leaving ANGEL. Blackboard Learn lost only a handful of clients in the past year.
  • While the end-of-life does not occur until next year, Pearson has announced LearningStudio’s end-of-life for the end of 2017.
  • With the new data set, the rapid rise and market strength of WebCT become much more apparent than in previous graphics.
  • There is a growing line for “Other”, capturing the growth of those systems with fewer than 50 active implementations as primary systems; systems like Jenzabar, Edvance360, LoudCloud Systems, WebStudy, Schoology, and CampusCruiser.
  • While we continue to show Canvas in the Open Source area, we have noted a more precise description as an Open Core model.

For a better description of the upcoming LMS subscription service, read this post and / or sign up for more information here.

The post State of Higher Ed LMS Market for US and Canada: Spring 2016 Edition appeared first on e-Literate.

COLLABORATE16: Bots & Virtual Reality

Oracle AppsLab - Tue, 2016-04-19 02:00

Last week, Ben (@goldenmean1618) and I were in Las Vegas for COLLABORATE. We ran two studies focused on two trending topics in tech: bots and virtual reality!

Bot Focus Group

Ready for the focus group fun to begin. #oaux #CLV16 pic.twitter.com/sG0amqMGJx

— The AppsLab (@theappslab) April 12, 2016

Our timing for the bot study was perfect! The morning we were to run our focus group on bots in the workplace, Facebook launched its bot platform for Messenger. They are not the only ones with a platform: Microsoft, Telegram, and Slack have their own platforms too.

The goal of our focus group was to generate ideas on useful bots in the workplace. This can range from the concierge bot that Facebook has to workflow bots that Slack has. To generate as many ideas as we could, without groupthink, we had everyone silently write down their ideas using the “I WANT [PAT] TO…SO I CAN…” Tower of Want framework I stumbled upon at the GDC16 conference last March.

Not only do you distill the participant’s motivations, intents and needs, but you also acquire soft goals to guide the bot’s development. Algorithms are extremely literal. The Harvard Business Review notes how social media sites were once “quickly filled with superficial and offensive material.”

The algorithm was simple: find the articles with the most clicks and feed them to the users. Somewhere, the goal of QUALITY highly engaged articles was lost to highly engaged articles at the expense of QUALITY. Intention is everything.

“Algorithms don’t understand trade-offs; they pursue objectives single-mindedly.”

Soft goals are in place to steer a bot away from unintended actions.

After the ideas were generated and shared, we had them place their bot tasks on a pain/frequency chart: How painful is this task for you to do? and How frequently do you need to do this task?

Focus group preparation underway #oaux #CLV16 pic.twitter.com/74tccV5kLZ

— The AppsLab (@theappslab) April 12, 2016

Then it was time for the business origami! Business Origami is similar to a task flow analysis that uses folded paper cutouts as memory nudges. We now have our bot tasks, but we do not know (a) what triggers the task, (b) what the bot needs to know to do its job and (c) what the desired output is. We modified the Business Origami activity with the inputs and outputs that a Resource Flow activity demands.

Before our customers created their own flows based on their best bot task idea, we did a group warm-up. The flow below illustrates the process of scheduling and booking meeting rooms. Everyone was involved as they talked about the myriad of ways that would trigger the act of scheduling a meeting, the mediums of communication used, what they would need to know in order to schedule it, and what feedback is needed when the task is done.

Business origami taking shape #oaux #CLV16 pic.twitter.com/PJARBrZGka

— The AppsLab (@theappslab) April 12, 2016

Virtual Reality Guerrilla Test

For 3 days, Ben and I ran a guerrilla study to get customers’ and partners’ thoughts on VR and where they might find it useful in their work/industry.

Getting ready for some #VR #oaux research #Collaborate16 w @iheartthannie and Ben pic.twitter.com/qfVjs9QadE

— The AppsLab (@theappslab) April 12, 2016

Our customers experienced virtual reality through the Samsung Gear VR. It relies on our Samsung Note 5 to deliver the immersive experience.

$99 Samsung Gear VR. The consumer version of the Oculus-powered head-mounted display (HMD).

Because of the makeup of our audience at the demo pod, we had to ensure that our study took approximately 5 minutes. We had 2 experiences to show them: an underwater adventure with the blue whale in the Arctic Ocean (theBlu) and a heart-pounding task of defusing a bomb (Keep Talking and Nobody Explodes).

Everyone really wanted to reach out and touch the sea animals. Two reached out, accidentally touched Ben and me, and freaked out at how realistic the experience was! Another case for haptic gloves?

Storage in the Oracle Cloud

Pat Shuff - Tue, 2016-04-19 01:07
This week we are going to focus on storage. Storage is a slippery slope and a difficult conversation to have. Are we talking about a file synchronization service like dropbox.com, google.com/docs, or box.com? Are we talking about raw block storage or long term archive storage? There are many services available from many vendors. We are going to focus on block storage in the cloud that can be used for files if desired or for backups of databases and virtual machines. Some of the cloud vendors have specific focused storage like Azure tables that offer a noSQL type storage or Amazon S3 allowing you to run a website without a web server. Today we will look at the Oracle IaaS Storage set of products. This is different from the Oracle PaaS Documents option, which is more of a Google Docs like solution. The IaaS Storage is a block of storage that you pay for either on a metered usage or non-metered usage basis.

Notice from the cloud.oracle.com web page, we click on Infrastructure and follow the Storage route. We see that we get the raw block storage or the archive storage as options. We also have the option of an on-site cache front end that reduces latency and offers an NFS front end to the users providing more of a document management strategy rather than a raw block option.

Before we dive a little deeper into the options and differences between the storage appliance, spinning disk, and spinning tape in the cloud, we need to have a discussion about pricing and usage models. If you click on the Pricing tab at the top of the screen you see the screens below.

Metered pricing consists of three parts: 1) how much storage are you going to start with, 2) how much storage are you going to grow to, and 3) how much are you going to read back? Metered usage is difficult to guesstimate and unfortunately it has a significant cost associated with being wrong. Many long term customers of AWS S3 understand this and have gotten sticker shock when the first bill comes in. The basic cost for outbound transfer is measured on a per-GB basis. The more that you read across the internet, the more you pay. You can circumvent this by reading into a compute server in the Oracle cloud and not have to pay the outbound transfer. If, for example, you are backing up video surveillance data and uploading 24 hours of video at the end of the day, you can read the 24-hour bundle into a compute server, extract the 10-15 minutes that you are interested in, and pay the outbound charges on compute for the smaller video file.

Non-Metered pricing consists of one part: how much storage are you going to use over the year. Oracle does not charge for the amount of data transferred in-bound or out-bound with this storage. You can read and write as much as you want and there is no charge for data transfer across the internet. In the previous example you could read the 24 hours of video from the cloud storage, throw away 90% of it from a server in your data center, and not incur any charges for the volume of transfer.

Given that pricing is difficult to calculate, we created our own spreadsheet to estimate pricing as well as part numbers that should be ordered when consuming Oracle cloud resources. The images below show the cost of 120 TB of archive storage, metered block storage, and non-metered block storage.

Note that the data transfer price is non-trivial. Reading the data back from the cloud can get significantly more expensive than the cost of the storage itself. A good rule of thumb is that the cost of spinning disk in the cloud should not exceed $30/TB/month or $400/TB/year. If you look at the cost of a NetApp or EMC storage system, you are looking at a $3K-$4K/TB purchase price with 10% annual maintenance per year ($300-$400). If you are currently running out of storage and your NFS filer is filling up, you can purchase cloud resources for a few months and see if it works. It won't cost you anything more than paying support and you can grow your cloud storage as needed rather than buying 3 years ahead as you would with a filer in your data center. The key issue with cloud storage is latency and access times. Access to a filer in your data center is typically 10ms whereas access time to cloud storage is typically 80+ms. All cloud storage vendors have on-site appliance solutions that act as cache front ends to address this latency problem. Oracle has one that talks NFS. Amazon has one that talks iSCSI. Microsoft has one that talks SMB. There truly is no single vendor with a generic solution that addresses all problems.
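
As a rough, illustrative calculation based on the numbers above (list prices change, so treat this as an order-of-magnitude sketch): 120 TB of non-metered cloud block storage at roughly $400/TB/year comes to about $48,000 per year, while a 120 TB on-premises filer at $3,500/TB costs around $420,000 up front plus roughly $42,000 per year (10%) in maintenance. The cloud option only loses its appeal if metered outbound transfer charges start to dominate the bill.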

Enough with the business side of storage. Unfortunately, storage is a commodity so the key conversation is economics, reliability, and security. We have already addressed economics. When it comes to reliability the three cloud vendors address data replication and availability in different ways. Oracle triple mirrors the data and provides public-private key encryption of all data uploaded to the cloud. Data can be mirrored to another data center in the same geography but can not be mirrored across an ocean. This selection is done post configuration and is tied to your account as a storage configuration.

Now to the ugly part of block storage. Traditionally, block storage has been addressed through an operating system as a logical unit or an aggregation of blocks on a disk drive. Terms like tracks and sectors bleed into the conversation. With cloud storage, that is not part of the discussion. Storage in the cloud is storage. It is accessed through an interface called a REST API. The data can be created, read, updated, and deleted using HTTP calls. All of this is documented in the Oracle Documents - Cloud Storage web site.

The first step is to authenticate to the cloud site with an instance name, username, and password. What is passed back is an authentication token. Fortunately, there are a ton of tools to help read and write these REST calls, specifically tuned to help create headers and JSON-structured data packets for REST API interfaces. The screen below shows the Postman interface available through Chrome. A similar one exists for Firefox, called RESTClient API. Unfortunately, there is no extension for Internet Explorer.

The first step is to get an auth header by typing in the username and password into the Basic Authentication screen.

Once we are authorized, we connect to the service by going to https://storage.us2.oraclecloud.com/v1/Storage-(identity domain), where identity domain is the cloud provider account that we have been assigned. In our example we are connecting to metcsgse00029 as our identity domain and logging in as the user cloud.admin. We can see what "containers" are available by sending a GET call, or create a new container by sending a PUT call with the new container name at the end of our URL. I use the word container because the top level of storage consists of different areas. These areas are not directories. They are not file systems. They are containers that hold special properties. We can create a container that is standard storage, which represents spinning disk in the cloud, or we can create a container that is archive storage, which represents a tape unit in the cloud. This is done by sending the X-Storage-Class header. If there is no header, the default is block storage and spinning disk. If the X-Storage-Class is set to Archive, it is tape in the cloud. Some examples of creating a container are shown below. We can do this via Postman inside Chrome or from the command line.

From the command line this would look like

export OUID=cloud.admin
export OPASS=mypassword
export ODOMAIN=metcsgse00029
curl -is -X GET -H "X-Storage-User:Storage-$ODOMAIN:$OUID" \
                -H "X-Storage-Pass:$OPASS" \
                https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0

This should return an HTTP 200 OK response with an embedded header of X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706. Note that the value after the X-Auth-Token is what we will pass into all other requests. This token will change with each request and is good for 30 minutes from first execution. Once we have the authentication finished, we change the request type from a GET to a PUT and append the container name to the end. The screen above shows how to do this with Postman. The results should look like the screen below. We can do this from the command line as shown below as well.

curl -is -X PUT -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
                https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/new_area
In this example we create a new container from the command line called new_area. We can verify this by reviewing the cloud storage by changing the PUT to a GET.

curl -is -X GET -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
                https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN
Both of these methods allow us to see the storage that we created. I personally do not like this interface. It is not intended to be human-consumable. Uploading and downloading a file is difficult at best. A user interface that makes dragging and dropping files easy is desirable. This is where Dropbox and Google Docs shine. They allow you to drag and drop as well as synchronize directories to cloud storage. The Oracle Storage Cloud is not intended to be this solution. It is designed so that you can drop a new library into your RMAN configuration and back up straight from your database to the cloud. You can point your Commvault or Legato backup software to a cloud instance and replicate your data to the cloud. If you want a human-readable interface you need to purchase something like Cloudberry Explorer from Cloudberry. This gives you a Windows Explorer-like interface and allows you to drag and drop files, create containers and directories, and schedule archives or backups as desired.

Note that the way you create a block storage container vs an archive container is a simple menu selection. Retrieving archive storage is a little more complex because the tape unit must stage the file from tape to disk and notify you that the restoration has been completed. We will defer this discussion to a later blog.
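
For reference, the same distinction can be made through the REST interface by adding the X-Storage-Class header mentioned earlier; a hedged sketch (reusing the auth token from above, with archive_area as a made-up container name) would be:

curl -is -X PUT -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
                -H "X-Storage-Class: Archive" \
                https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/archive_area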

Copying files is little more than dragging and dropping a file between sections of a window in Cloudberry.

For completeness, I have included the command line screen shots so that you can see the request/response of a command line interaction.

It is important to remember our objective. We can use the cloud block storage as a repository for things like databases and a holding point for our backups. When we configure a database in the cloud, we back up and restore from this storage. This is configured in the database provisioning screen. The Storage-metcsgse00029/backup container is the location of RMAN backups and restores. The backup container is created through the REST API or the Cloudberry interface. We can also attach to the cloud storage through the cloud storage appliance software, which runs inside a virtual machine, listens for NFS requests and translates them into REST API calls. A small disk is attached to the virtual machine and acts as a cache front end to the cloud storage. As files are written via NFS, they are copied to the cloud storage. As the cache fills up, file contents are dropped from local storage and the metadata pointing to where the files are located is updated, relocating the storage to the cloud rather than the cache disk. If a file is retrieved via NFS, the file is read from cache or retrieved from the cloud and inserted into the cache as it is written to the client that requested it.

In summary, we covered the economics behind why you would select cloud storage over on-site storage. We talked about how to access the storage from a browser-based interface, a REST interface, or the command line. We talked about improving latency and security. Overall, cloud-based storage is something that everyone is familiar with. Products like Facebook, Picasa, or Instagram do nothing more than store photos in cloud storage for you to retrieve when you want. You pay for these services through advertisements injected into the web page. Corporations are turning more and more towards cloud storage as a way to consume long-term storage at a much lower price. The Oracle Storage Cloud service is the first of three that we will evaluate this week.

Links for 2016-04-18 [del.icio.us]

Categories: DBA Blogs

node-oracledb 1.9.0-Dev Branch with Promises is on GitHub

Christopher Jones - Mon, 2016-04-18 22:22

Top features: Promise support

node-oracledb 1.9.0-Development is now available as a development-only branch on GitHub. It adds Promise support and some other goodies mentioned below. The plan is to incorporate any user feedback, stabilize the features, improve documentation, improve test coverage, and run stress tests. Once this is all done, a production release to npm will be made. Personally I think this will take 2-4 weeks, but it is totally dependent on what is uncovered by you, the user.

Since this is a development branch, features are subject to change. Use GitHub Issue #410 to ask design questions and discuss the branch.

Install by setting OCI_LIB_DIR and OCI_INC_DIR as normal, and running npm install oracle/node-oracledb.git#dev-1.9. Or install from GitHub by cloning the repository, checking out the dev-1.9 branch, setting OCI_LIB_DIR and OCI_INC_DIR, and installing with npm install.

Anyone is welcome to report test results, contribute new tests or update documentation (or code!) to help us get a production release out faster. You do need to have your OCA accepted first before we can look at any pull requests.

As you can see, most of the doc is ready, and there are a solid number of tests for new functionality. You can help us by testing your own favorite Promise library, as we are only planning on testing the default Promise implementation in Node 0.12, 4 and 5. Also check out how connections are handled as you may like a different style - let us know.

In node-oracledb 1.9-dev:

  • Promise support was added. All asynchronous functions can now return promises. The default Promise library is used for Node 0.12, 4 and 5. It can be easily overridden if you wish to incorporate your own implementation.

    The current implementation typically requires two promise chains (a minimal sketch follows this list). Let us know what you think. There are alternatives that could be explored. What do you want?

  • A new toQueryStream() method was added. It is for ResultSets. It lets REF CURSORS be transformed into Readable Streams. It can also be used to convert ResultSets from top-level queries to streams, however the existing connection.queryStream() method will probably be easier to use for these queries.

  • An experimental query Stream _close() method was added. It allows query streams to be closed without needing to fetch all the data. It is marked experimental because there isn't good information about whether the current Node Streams implementation really allows interruptions.

    Open question: Should _close() invoke pause() internally?

  • Aliases pool.close() and connection.close() have been added for pool.terminate() and connection.release() respectively. This should make it easier to remember which method to use for releasing a connection, terminating a connection pool, or closing a ResultSet: you can just use close().

  • Some method parameter validation checks, such as the number or types of parameters, will now throw errors synchronously instead of returning errors via the callback.

  • Removed an extra call to getRows() made by queryStream() at end-of-fetch.

  • Some annoying, seemingly random crashes caused by connections being garbage collected while still in use should no longer occur. These were more common in smaller scripts that finished quickly, but could be seen in systems under load.
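
As a minimal sketch of the new promise style (connection settings here are placeholders, and error handling is reduced to console output):

var oracledb = require('oracledb');

oracledb.getConnection({
  user          : 'hr',                 // hypothetical credentials
  password      : 'welcome',
  connectString : 'localhost/orcl'
})
.then(function (conn) {
  // second chain: run the query, then always release the connection
  return conn.execute('SELECT sysdate FROM dual')
    .then(function (result) {
      console.log(result.rows);
      return conn.close();              // new alias for connection.release()
    })
    .catch(function (err) {
      console.error(err);
      return conn.close();
    });
})
.catch(function (err) {
  console.error(err);
});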

Resources

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line!

node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Partner Webcast – Collaboration in a digital world with Oracle Documents Cloud

The modern digital workplace requires a new set of capabilities, one that leverages the best in consumerization, social engagement and business empowerment. The key to this transformation are the...

Categories: DBA Blogs

More Effective Anti-Entropy Repair in Cassandra

Pythian Group - Mon, 2016-04-18 12:12
1. Introduction

Cassandra offers three different repair mechanisms to make sure data from different replicas are consistent: Hinted Hand-off, Read-Repair, and Anti-Entropy Repair.

The first two mechanisms are kind of “automatic” mechanisms that will be triggered internally within Cassandra, depending on the configuration parameter values that are set for them either in cassandra.yaml file (for Hinted Hand-off) or in table definition (for Read-Repair). The last mechanism (Anti-Entropy Repair), however, needs to be triggered with manual intervention, by running “nodetool repair” command (possibly with various options associated with it).

Despite the “manual” nature of Anti-Entropy repair, it is nevertheless necessary, because the first two repair mechanisms cannot guarantee fixing all data consistency scenarios for Cassandra. For example: 1) what if a node is down longer than “max_hint_window_in_ms” (which defaults to 3 hours)? And 2) for deletes, Anti-Entropy repair has to be executed before “gc_grace_seconds” (defaults to 10 days) in order to avoid tombstone resurrection.

At the same time, however, due to how Anti-Entropy repair is designed and implemented (Merkle-Tree building and massive data streaming across nodes), it is also a very expensive operation and can cause a big burden on all hardware resources (CPU, memory, hard drive, and network) within the cluster. In this case, how to run Anti-Entropy repair effectively becomes a constant topic within Cassandra community.

In this post, I’d like to explore and summarize some of the techniques that can help achieve a more effective Anti-Entropy repair for Cassandra.

2. What is the issue of an Anti-Entropy Repair

Before Cassandra version 2.2, a sequential full Anti-Entropy repair is the default behavior when “nodetool repair” command (with no specific options) is executed. When this happens,

1) The node that initiates the operation, called the coordinator node, scans the partition ranges that it owns (either as primary or replica) one by one. For each partition range, it sends a request to each of the peer/replica nodes to build a Merkle tree.

2) The peer/replica node scans all SSTables and a major, or validation, compaction is triggered, which reads every row in the SSTables, generates a hash for it, and then adds the result to a Merkle tree.

3) Once the peer node finishes building the Merkle tree, it sends the result back to the coordinator. The coordinator node compares every Merkle tree with all the other trees. If a difference is detected, data is exchanged between the differing nodes.

Looking at this highly summarized procedure, there are a few things that immediately caught my attention:

First, because building a Merkle tree requires hashing every row of all SSTables, it is a very expensive operation, stressing CPU, memory, and disk I/O.

Second, when a Merkle tree difference is detected, the network can be overwhelmed by streaming the large amount of data to remote nodes.

Lastly, and although less obvious, the worst problem in my opinion is that this operation can cause computation repetition and therefore waste resources unnecessarily. Please see this post for an example of how this can happen.

Given the high cost associated with it, a default (pre-Cassandra 2.2) sequential full Anti-Entropy repair is, in practice, rarely considered a routine task to run in a production environment. Actually, a common trick that many people use with Cassandra repair is simply touching all data and letting read-repair do the rest of the work.

However, there are situations when an Anti-Entropy repair is required, for example to recover from data loss. What can we do in these cases?

3. What can help to achieve a more effective Anti-Entropy repair?

Over time, different options for the “nodetool repair” command have been introduced in different Cassandra versions. These options represent different techniques that can help achieve a more effective Anti-Entropy repair, which I’ll go through in more detail in this section. Meanwhile, I will also try to clarify some of the confusion that new Cassandra users might have around the different “nodetool repair” command options (and related jargon) of different Cassandra versions.

3.1 Primary range repair

In Cassandra, each node manages several (token) ranges of data. One of them is the primary range, which is the token range that is assigned to the node according to the token setup within the ring. The other ranges of data are actually replicas of primary ranges from other nodes. Running “nodetool repair” with option “-pr” (or “--partition-range”) on a node means that the node only repairs the data of the primary range, but not the other ranges managed on this node. The default behavior of a repair (without this option) is to repair all ranges of data managed by a node.

When using this option, it avoids the cost of doing Merkle tree calculation on non-primary range data. It also helps reduce excessive data streaming across the network. One caveat of using this option is that since each node only repairs its primary range of data, this option needs to be run on ALL nodes in the ring in order to repair all the data.

This option is available in very early release (0.x) of Cassandra.
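
For example, a maintenance repair based on this option could look like the following sketch (my_keyspace is a placeholder), run on each node of the ring in turn:

# run this on every node, one node at a time
nodetool repair -pr my_keyspace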

3.2 Sequential repair

The “nodetool repair” option “-snapshot” (or “--with-snapshot”) means a sequential repair (also called snapshot repair) and is a feature introduced in Cassandra version 1.2 and made the default in Cassandra version 2.0. Actually, in the DataStax documentation for Cassandra 2.0 and later, you won’t find this option. In Cassandra 2.0 and beyond, you can instead specify option “-par” (or “--parallel”) to tell Cassandra to run the repair in parallel. What does this really mean?

As we mentioned in section 3.1, each node manages several different ranges of data, either as primary or replica. When the “-par” option is used, it instructs Cassandra to run repair on all these ranges at the same time, which means the expensive repair operation (especially the Merkle tree building part) happens on multiple nodes concurrently. This could be problematic and may slow down the entire cluster.

But when using the “-snapshot” option (or the default in Cassandra 2.0 and beyond), for each range of data, a snapshot is first taken and the repair operation is executed on the snapshot sequentially. This means that at any time, for a given replica set of data, only one replica is being repaired, allowing the other replicas to satisfy the application requests in a more performant way.

3.3 Incremental repair

Since Cassandra version 2.1, a new option “-ic” (or “--incremental”) has been introduced for incremental repair. Starting from Cassandra version 2.2, this option becomes the default and, at the same time, option “-full” (or “--full”) is used to specify a full repair.

The option of incremental repair is a great feature that will help overcome all 3 issues as listed in Section 2. Instead of building a Merkle tree out of all SSTables (repaired or not), this option only builds the tree out of un-repaired SSTables. Therefore,

  • The size of the Merkle tree to be built is way smaller and therefore requires much less computing resources
  • When doing Merkle tree comparison, less data will be compared and potentially streamed over the network
  • Already repaired data don’t need to be computed and compared again, thus avoiding a lot of unnecessary repetition.

There are a few things to pay attention to when using this option; please check my previous post Apache Cassandra 2.1 Incremental Repair for more details.

Please also note that when running incremental repair in Cassandra 2.1 with the Leveled Compaction Strategy (LCS), it may fail with a RuntimeException (see CASSANDRA-9935 for more detail).
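
As a quick illustration of the version differences described above (my_keyspace is a placeholder):

# Cassandra 2.1: incremental repair has to be requested explicitly
nodetool repair --incremental my_keyspace

# Cassandra 2.2 and later: incremental is the default; use -full for the old behavior
nodetool repair my_keyspace
nodetool repair --full my_keyspace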

3.4 Sub-range repair

So far in this post, when talking about repairing a range of data, it has meant the entire range (either primary or non-primary), which is sometimes also called an endpoint range. Beginning with Cassandra version 1.1.11, “nodetool repair” has the options “-st” (or “--start-token”) and “-et” (or “--end-token”) to specify a particular sub-range of data to repair.

Conceptually, sub-range repair is much like primary range repair, except that each sub-range repair operation focuses on an even smaller subset of data. So in general sub-range repair shares many of the pros and cons of primary range repair. One key benefit of using sub-range repair is the freedom of specifying the repair granularity through the command line, which gives the DevOps team much more flexibility regarding how to set up a repair schedule that best matches the cluster’s workload. It is also possible to run multiple sub-range repairs in parallel on different sets of data at the same time, either on the same node or on different nodes. This option has actually been deemed one of the advanced repair techniques, as per the post: Advanced repair techniques

Despite the flexibility of this technique, it has to be emphasized that no matter how the repair schedule is set for sub-range repairs, all ranges of data in the ring have to be repaired at least once before the gc_grace_seconds limit is reached.

3.4.1 How to calculate valid sub-range token values

When invalid sub-range values are provided to the “-st” or “-et” option of “nodetool repair” command, most likely the command will fail and throw errors about “invalid tokens”. So in order to make sub-range repair work effectively, we need a systematic method that can generate valid sub-range token values.

We know that in Cassandra, data spans multiple nodes using a token ring. Each node is responsible for one or more slices of that ring. Each slice is a token range that has a start and end point. With this understanding, a natural way to generate valid sub-range token values would be: 1) find out all token ranges associated with one node in the entire ring; 2) for each token range associated with the node, divide the range into smaller chunks of sub-ranges. The start and end points of these sub-ranges are valid values to feed into the “-st” or “-et” option of the “nodetool repair” command. The actual method and granularity used to divide a node’s token range into smaller sub-ranges is where the flexibility comes from and can be adjusted to best suit the needs of the system.
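
As a small sketch of the idea (the token values below are made up for illustration; real ones come from the methods listed next), one node-owned range could be split in half and repaired in two slices:

# suppose one of the node's ranges is (-9223372036854775808, -6148914691236517206]
nodetool repair -st -9223372036854775808 -et -7686143364045646507 my_keyspace my_table
nodetool repair -st -7686143364045646507 -et -6148914691236517206 my_keyspace my_table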

There are different ways to find token ranges that are associated with a Cassandra node. Some of them are summarized below:

  1. Run “nodetool ring” command. “nodetool status” command will also do for single-token set-up.
  2. Run CQL query against “local” or “peers” tables in “system” keyspace to get the token values associated with a host.
  3. Cassandra client drivers provides APIs to get such information as well. For example, the “Metadata” class (in package com.datastax.driver.core) of the Java client API  provides the following two methods:
    * public Set<TokenRange> getTokenRanges() :
      Returns the token ranges that define data distribution in the ring.
    * public Set<TokenRange> getTokenRanges(String keyspace,Host host) :
      Returns the token ranges that are replicated on the given host, for the given keyspace.
  4. Based on the thrift Java API describe_splits call, an open-source utility tool called “cassandra-list-subranges” has been developed to list valid sub-range values for a specific keyspace and table on a particular Cassandra node. For details of this tool, please check the GitHub repository page at https://github.com/pauloricardomg/cassandra-list-subranges

Sub-range repair is available since Cassandra version 1.1.11.

3.4.2 DataStax Enterprise (DSE) repair service

If the Cassandra cluster is running DataStax Enterprise, OpsCenter provides a “repair service” starting from version 4.0. The way this service works is to continuously repair small chunks of data (using sub-range repair) in a cluster in the background until the entire cluster is repaired, and then it starts the next cycle of processing.

In the OpsCenter window, there is only one parameter to specify: the maximum amount of time it takes to repair the entire cluster once, which is typically less than the gc_grace_seconds setting. The default is 9 days, compared with the default of 10 days for gc_grace_seconds.

More advanced settings, such as the parallelism of concurrent repairs, the maximum number of pending repairs allowed to be running on a node at one time, etc., can be set either in the opscenterd.conf file (for all clusters) or in the cluster_name.conf file (for a specific cluster).

More detailed information about the DSE repair service and its configuration can be found in the DataStax OpsCenter documentation.

Please note that the DSE repair service, although convenient, is only available in the DSE version and not in the open-source Cassandra version. It also lacks the capability to specify which keyspaces or tables to repair, as the “nodetool repair” tool does.

3.5 Data-Center vs. Cluster wide repair

Since Cassandra version 1.2, Cassandra has provided options for “nodetool repair” that offer the following capabilities regarding repair locality/scope:

  • Cassandra 1.2: “-local” (or “--in-local-dc”), only repair nodes in the same data center as the node on which the “nodetool repair” command is executed.
  • Cassandra 2.0: “-dc <dc_name>” (or “--in-dc <dc_name>”), repairs nodes in the named data center.
  • Cassandra 2.1:
    • “-dcpar” (or “--dc-parallel”), repairs data centers in parallel;
    • “-hosts” (or “--in-hosts”), repairs specified hosts

When none of these options is specified, all data replicas across the entire cluster are repaired.
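
For example (data center names are placeholders; exact option spellings are as listed above):

# repair only the nodes in the local data center (Cassandra 1.2+)
nodetool repair -local my_keyspace

# repair the nodes in a named data center (Cassandra 2.0+)
nodetool repair -dc DC1 my_keyspace

# repair data centers in parallel (Cassandra 2.1+)
nodetool repair -dcpar my_keyspace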

4. Conclusion

In this post, we discussed in detail some of the techniques that can help with a more effective Anti-Entropy repair in Cassandra. To summarize:

  • Primary range repair is recommended for maintenance purposes, and when it is used, it has to be executed on all nodes in the ring, one by one, to make sure the whole data range in the ring is covered.
  • When available (Cassandra version 2.1 and beyond), incremental repair will bring the biggest benefit compared with the other techniques. But please be aware of the caveats related to it. Please also note that with the bug specified in CASSANDRA-9935, incremental repair may have issues with the Leveled Compaction Strategy (LCS).
  • Sub-range repair can be very useful, especially when incremental repair is not an option. But whatever schedule is set up using this technique must ensure that all ranges of data are repaired within the gc_grace_seconds limit.
  • If a repair is needed to recover from data loss, a sequential, full repair is needed. Neither the primary range repair nor the incremental repair technique works properly for this purpose. But due to the high cost associated with this operation, sometimes it might be faster to simply wipe out the data directory on a node and let it bootstrap.
Categories: DBA Blogs

AIIM Conference 2016 | NOLA | April 26-28

WebCenter Team - Mon, 2016-04-18 12:02

Oracle is proud to be a Platinum sponsor of the AIIM 2016 Conference, taking place April 26-28 in New Orleans, where we will have a booth, lead a roundtable session, and participate on a sponsor panel!

Roundtable (Group Therapy Session), Wednesday, April 27 at 3:35 p.m.: Connect Faster, Simplify Self-Service and Re-define Engagement
Are you looking to improve IT productivity, drive efficiency and innovation? Attend this roundtable to learn how you can enhance employee productivity and mobility through self-service, engagement and application extensions. Hear firsthand from your peers as we discuss how the cloud model reduces IT administration overhead, enforces enterprise-wide security and drives agility and innovation. The results: effective customer engagement directly impacting revenue growth.
Oracle Speaker: David Le Strat, Senior Director Product Management, Content and Process, Oracle

Sponsor Panel, Thursday, April 28 at 10:00 a.m.:
Topic: Insight: Understand and Engage 

  • Get a complete view of the customer -- and act upon it
  • Extract valuable business data to improve business decisions
  • Improve collaboration with customers, employees, and partners
Oracle Panelist: David Le Strat, Senior Director Product Management, Content and Process, Oracle
Booth #13 at the Solutions lounge, April 26-28:
  • Cloud Platform live Demos
  • Giveaways
Location: Hyatt Regency New Orleans, 601 Loyola Avenue, New Orleans, Louisiana, USA, 70113
Date: April 26-28, 2016

About AIIM 2016: Digital Transformation in Action

We are at a unique moment in time. During the next 3-5 years, the accumulated impact of geometric improvements in technology will force organizations to transform or risk irrelevancy. In these chaotic and disruptive times, information is your most important asset. Learn the skills needed to manage it at The AIIM Conference 2016.

Join us for three days of interactive sessions and engaging conversations packed with solid action items to take back to your office. You will learn how to:

  • Automate your business processes to save time and money
  • Protect and secure your information with governance
  • Gain insight to better understand and engage your customers and employees

GET SOCIAL: #OracleDOCS #OraclePCS #OracleSCS

Register today! We hope to see you there. 

Side-effect of ALTER SESSION SET CURRENT_SCHEMA

Dominic Brooks - Mon, 2016-04-18 10:11

For most of our database set-ups we use a different TEMP space for application users than for end-user/support/developer/reporting usage.
The intention is to minimise the potential impact of a rogue ad-hoc query on the main application.

However, it turns out this is ineffective IF you use:

ALTER SESSION SET CURRENT_SCHEMA = main_app_schema;

This is documented but I wasn’t previously aware.
No surprise that this learning opportunity was presented as a direct result of a rogue query on a prod DB blowing out the main TEMP space.

Deduction suggested that the above must be the case, and it was quickly proven by a test case and also supported by the documentation.

the database uses the temporary tablespace of the specified schema for sorts, joins, and storage of temporary database objects
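
A quick way to see the effect for yourself (a minimal sketch, not from the original post): run a large sort in the affected session and, from another session, check which TEMP tablespace the sort segments land in:

SELECT s.sid, s.username, s.schemaname, u.tablespace, u.blocks
FROM   v$session s
       JOIN v$tempseg_usage u
         ON u.session_addr = s.saddr
        AND u.session_num  = s.serial#;

-- After ALTER SESSION SET CURRENT_SCHEMA = main_app_schema, the TABLESPACE column
-- shows the application schema's TEMP tablespace rather than the logged-on user's.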

Obviously… not using ALTER SESSION and using fully qualified object names prevents this “vulnerability”… but that’s not good enough unfortunately.


9 Days of Learning About Engineered Advantages

APRIL 2016 ...

Categories: DBA Blogs

java.lang nullpointerexception While Installing 12.1.0.2 RAC Database Software

Oracle in Action - Mon, 2016-04-18 03:41


I have an Oracle 12.1.0.2c cluster having the following 5 nodes:

  • Hub Nodes : host01, host02, host03
  • Leaf nodes: host04 and host05

I wanted to install RAC database software on the 3 hub nodes i.e. host01, host02 and host03.

I invoked the OUI as the oracle user (owner of the Database Home)

[oracle@host01 database_12_1_0_2]$ ./runInstaller

and chose the option to install RAC database software only on the 3 hub nodes.

After all the pre-requisite checks were successful, I clicked the Install button.
I got the error: java.lang nullpointerexception

On clicking OK, the OUI aborted.

To troubleshoot, I ran the OUI in debug mode as :

[oracle@host01 database_12_1_0_2]$ ./runInstaller -debug -J-DTRACING.ENABLED=true -J-DTRACING.LEVEL=2 -J-DSRVM_TRACE_LEVEL=2 -J-DFULLTRACE

The trace file showed that the leaf node host05 was not responding:

[setup.flowWorker] [ 2016-04-18 14:33:22.771 IST ]
[CRSCTLUtil.getClusterNodeConfigRoles:1198] outputs[4] = CRS-4404: The following nodes did not reply within the allotted time:
[setup.flowWorker] [ 2016-04-18 14:33:22.772 IST ]
[CRSCTLUtil.getClusterNodeConfigRoles:1200] values length = 2
[setup.flowWorker] [ 2016-04-18 14:33:22.772 IST ]
[CRSCTLUtil.getClusterNodeConfigRoles:1198] outputs[5] = host05
[setup.flowWorker] [ 2016-04-18 14:33:22.772 IST ]
[CRSCTLUtil.getClusterNodeConfigRoles:1200] values length = 1

I realized that although I wanted to install the database software on 3 nodes only, all the nodes needed to be up and running, whereas in my case the node host05 was down at that time.

When I started the node host05 and reinvoked the OUI, the database software was installed successfully.

References:
http://www.hhutzler.de/blog/oracle-installer-and-rac-installation/



Copyright © ORACLE IN ACTION [java.lang nullpointerexception While Installing 12.1.0.2 RAC Database Software], All Right Reserved. 2016.

The post java.lang nullpointerexception While Installing 12.1.0.2 RAC Database Software appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

PeopleSoft Update Image as a Service - Proof of Concept

Javier Delgado - Mon, 2016-04-18 01:24
Together with PeopleSoft 9.2, Oracle announced the new Continuous Delivery model for delivering patches, based on PeopleSoft Update Manager. The new model allows customers to choose which fixes or enhancements they are interested in, and then just extract the objects related to them and their prerequisites.

The following diagram outlines the process of applying an update using this new delivery model:



In order to apply a change in your Development environment, you would follow these steps:


  1. Download the latest Update Image from My Oracle Support. The latest images have a size of around 35 Gb for PeopleSoft HCM and for FSCM.
  2. Run a script to unzip the downloaded files and combine them into a single virtual appliance file (.ova).
  3. Import the .ova file into Virtual Box in order to create the virtual machine.
  4. Start the virtual machine and follow the installation steps so it can be used.
  5. Using PeopleSoft Change Assistant, upload your environment information into PeopleSoft Update Manager.
  6. Connect to the virtual machine environment in order to choose which patches or enhancements are going to be applied. The selected patches make up a Change Package.
  7. Run the source steps in the Change Package against the Update Image.
  8. Run the target steps included in the Change Package against the target environment.


Many customers find that the first 4 steps in the process take too much time, particularly when the intention is to apply a single regulatory update or enhancement. In the past, the customer would just download the patch and its prerequisites (which in many cases, for regulatory patches, were already applied) and start the updating process. The new process requires investing at least a day in downloading, uncompressing and installing the virtual machine.

On top of the time issues, the virtual machine can only run in a physical box with certain prerequisites. In these times when many organizations have moved to virtualization, it is not always easy to allocate the needed hardware to host the virtual machines.

BNB has conducted a successful Proof of Concept for one of our customers to install the latest Update Image on an Amazon EC2 server and use it to apply the latest patches.

Initially, we had some concerns about whether this idea could work with good performance. After our first real run, we can conclude that the performance is more than acceptable. The new Change Assistant in PeopleTools 8.54 helps a lot, as it allows the source and target steps to be run separately. In this way, the source steps can be run on a Windows Amazon EC2 server sitting close to the Update Image, with the obvious performance gains. Some connectivity will still be needed between your site and the Amazon EC2 servers, mainly to:

  • Upload target environment information from your server into the Update Image.
  • Download the Oracle Database and PeopleTools clients.
  • Download the Change Package once the source steps are finished.

We also faced some challenges in converting the VirtualBox-delivered virtual machine into an Amazon EC2 hosted machine. We eventually solved them, and we have now created an AMI to quickly deploy new instances for other PeopleSoft customers. We have also tested the conversion steps multiple times, so we should be able to have the AMI available just one day after the Update Image is made available by Oracle.

Note: Since the introduction of PeopleSoft 8.55 and DPK, it has become considerably easier to deploy Update Images in cloud infrastructure. Please check this blog post for more information on this.

Installing Update Images in Amazon Web Services

Javier Delgado - Mon, 2016-04-18 01:22
The latest PeopleSoft Update Manager (PUM) images have been delivered in two formats: the traditional VirtualBox image and a newly introduced format, NativeOS.



NativeOS takes advantage of PeopleTools 8.55 Deployment Packages (DPK), which is the cornerstone for the PeopleSoft Cloud Architecture. This new cloud architecture facilitates the deployment of PeopleSoft applications in the cloud, not only covering Oracle Public Cloud but also other providers such as Amazon Web Services (AWS), Google Cloud and Microsoft Azure.

Creating the AWS Instance
At BNB we have been using Amazon Web Services for a while, so it was our natural choice for installing the PeopleSoft HCM Update Image #17. We have done so on a Windows 2012 server using the m4.large instance type, which allocates 2 vCPUs and 8 GB of RAM. In terms of disk, we allocated 200 GB in order to have the needed space for the image download and installation.

Once the instance was created, we downloaded the NativeOS update image from My Oracle Support. One of the advantages of NativeOS deployments is that the download is smaller than the traditional VirtualBox one. Still, the size is considerable, but the network throughput of AWS instances is quite good.

Before proceeding with the installation, you need to edit the c:\windows\system32\drivers\etc\hosts file in order to include the internal server name in it:

127.0.0.1 <server name>.<zone>.compute.internal

The full server name can normally be found in the top right corner of the desktop.

Once this is done, we are ready to proceed with the DPK installation. For further information on this, I suggest you check My Oracle Support.

Allowing External Access
If you would like to access the PeopleSoft Update Image without connecting to the server over Remote Desktop, you will need to take some additional steps.

Firstly, you will need to edit the security group linked to your AWS instance to allow incoming TCP connections on port 8000, which is the port used by the PeopleSoft Update Image web server by default.

On top of this, you will need to change the firewall settings on the Windows server itself. This is done within the Windows Firewall with Advanced Security application, in which you need to define an inbound rule that also allows TCP connections on port 8000:


Finally, if you want to use the same IP address every time you use the AWS instance, you will need to define an Elastic IP and associate it with the server. This fixed IP address has an additional cost, but if you are planning to distribute the URL for the PeopleSoft application to other people who do not have access to the AWS Console to check the current IP address, it may be the only way to go.

installing Tomcat on Docker

Pat Shuff - Mon, 2016-04-18 01:07
A different way of looking at running Tomcat is to ignore the cloud platform and install and configure everything inside a Docker instance. Rather than picking a cloud instance, we are going to run this inside VirtualBox and assume that all of the cloud vendors will allow you to run Docker or configure a Docker instance on an arbitrary operating system. We initially installed and configured Oracle Enterprise Linux 7.0 from an iso into VirtualBox. We then installed Docker and started the service with the following commands:
sudo yum install docker
sudo systemctl start docker

We can search for a Tomcat installation and pull it down to run. The search turns up a Tomcat 7.0 version, so we pull down that image:

docker search tomcat
docker pull consol/tomcat-7.0

We can run the new image that we pulled down with the following commands:

docker run consol/tomcat-7.0
docker ps
The docker ps command lets us find the container id, which we need in order to look up the IP address of the instance running in Docker. In our example the container id is 1e381042bdd2. To pull the IP address we execute
docker inspect -f '{{.NetworkSettings.IPAddress}}' 1e381042bdd2
This returns the IP address 172.17.0.2, so we can open this address on port 8080 to see the Tomcat installation.

In summary, this was not much different from going through Bitnami. If you have access to Docker containers in a cloud service, then this might be an alternative. All three vendors not only support Docker instances, but all three have announced or already offer Docker services through IaaS. Time-wise it did take a little longer, because we had to download an operating system as well as Java and Tomcat. The key benefit is that we can create a master instance and build our own Docker image to launch. We can script Docker to restart if things fail and use more advanced options if we run out of resources. Overall, this might be worth researching as an alternative to provisioning and running services.

PHP OCI8 2.0.11 and 2.1.1 are available on PECL

Christopher Jones - Sun, 2016-04-17 22:51

I've released PHP OCI8 2.0.11 (for supported PHP 5.x versions) and 2.1.1 (for PHP 7) to PECL. The Windows DLLs on PECL have been built by the PHP release team. The updated OCI8 code has also been merged to the PHP source branches and should land in the future PHP 5.6.21 and PHP 7.0.7 source bundles, respectively.

PHP OCI8 2.1.1 fixes a bug triggered by using oci_fetch_all() with a query having more than eight columns. To install on PHP 7 via PECL, use pecl install oci8

PHP OCI8 2.0.11 has one fix for a bind regression with 32-bit PHP. To install on PHP 5.x use pecl install oci8-2.0.11

My old Underground PHP and Oracle Manual still contains a lot of useful information about using PHP with Oracle Database. Check it out!

Getting the current SQL statement from SYS_CONTEXT using Fine Grained Auditing

The Anti-Kyte - Sun, 2016-04-17 13:44

The stand-off between Apple and the FBI has moved on. In essence both sides have taken it in turns to refuse to tell each other how to hack an iPhone.

Something else that tends to tell little or nothing in the face of repeated interrogation is SYS_CONTEXT(‘userenv’, ‘current_sql’).
If you’re fortunate enough to be running on Enterprise Edition, however, a Fine Grained Auditing Policy will loosen its tongue.

Consider the following scenario.
You’ve recently got a job as a database specialist with Spectre.
They’ve been expanding their IT department recently as the result of their “Global Surveillance Initiative”.

There’s not much of a view from your desk as there are no windows in the hollowed out volcano that serves as the Company’s HQ.
The company is using Oracle 12c Enterprise Edition.

Everything seems to be going along nicely until you suddenly get a “request” from the Head of Audit, a Mr Goldfinger.
The requirement is that any changes to employee data in the HR system are recorded, together with the statement executed to change each record.
Reading between the lines, you suspect that Mr White – head of HR – is not entirely trusted by the hierarchy.

Whilst journalling triggers are common enough, capturing the actual SQL used to make DML changes is a bit more of a challenge.
Explaining this to Mr Goldfinger is unlikely to be a career-enhancing move. You’re going to have to be a bit creative if you want to avoid the dreaded “Exit Interview” (followed by a visit to the Piranha tank).

First of all though….

Fine Grained Auditing Configuration

You need to do a quick check to make sure that Fine Grained Auditing is available and configured in the way you would expect.

Access to Fine Grained Auditing

FGA is a feature of Oracle Enterprise Edition.
If you were working on any other edition of the database, Oracle would tell you that FGA is not enabled. For example, running the following on Oracle Express Edition 11g…

begin
    dbms_fga.add_policy
    (
        object_schema => 'HR',
        object_name => 'DEPARTMENTS',
        policy_name => 'WATCHING YOU',
        audit_condition => null,
        statement_types => 'INSERT, UPDATE, DELETE'
    );
end;
/

… will result in the Oracle Database telling you that you’re fish-food …

ERROR at line 1:
ORA-00439: feature not enabled: Fine-grained Auditing
ORA-06512: at "SYS.DBMS_FGA", line 20
ORA-06512: at line 2

You can avoid this embarrassment simply by checking what edition of Oracle you’re running :

select banner
from v$version
/

In the case of Oracle 12c, you’ll get :

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE	12.1.0.2.0	Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

If you don’t happen to work for a worldwide crime syndicate and/or don’t have access to an Enterprise Edition database, you can still have a play around by means of a Developer Day Virtual Box image.

Unified Auditing

The other thing you need to check is just where audit records are going to be written to. This is not so much a requirement for the solution being implemented here, but it is relevant to some of the examples that follow.

By default, unified auditing is not implemented in 12c and you can confirm this by running :

select value
from v$option
where parameter = 'Unified Auditing'
/

If the query returns FALSE, then Unified Auditing has not been enabled.
Otherwise, it’s probably worth taking a look at the documentation to see how this affects auditing behaviour in the database.

Initialization Parameters

Assuming Unified Auditing has not been configured, the location of the audit records will be dictated by the AUDIT_TRAIL initialization parameter. You can check this value as follows :

select value
from v$parameter
where name = 'audit_trail'
/

If the value is set to DB or DB, EXTENDED, then any FGA policies should write to the audit trail tables queried later on (via the DBA_FGA_AUDIT_TRAIL view).
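
If you need to change it, a minimal sketch would be something along these lines. Bear in mind that AUDIT_TRAIL is a static parameter, so the new value only takes effect after an instance restart :

alter system set audit_trail = db, extended scope = spfile;
-- restart the instance for the new value to take effect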

Now to take a closer look at FGA…

How long before SYS_CONTEXT cracks ?

To test exactly when you will be able to retrieve the DML statement you’re interested in, you can knock up a quick test.

First, you need a table to audit against for testing purposes :

create table trigger_messages
(
    message varchar2(4000)
)
/

Next, a simple procedure to insert a record :

create or replace procedure add_message( i_msg in trigger_messages.message%type)
is
begin
    insert into trigger_messages(message) values( i_msg);
end;
/

Now for a DML trigger on the table :

create or replace trigger trg_msg
    for insert or update or delete 
    on trigger_messages
    compound trigger
    
    l_action varchar2(10);
    before statement is
    begin
        l_action := case when inserting then 'INSERT' when updating then 'UPDATE' else 'DELETE' end;
        dbms_output.put_line('Before Statement '||l_action);
        dbms_output.put_line( nvl( sys_context('userenv', 'current_sql'), 'My lips are sealed'));
    end before statement;
    
    before each row is
    begin
        dbms_output.put_line('Before Row '||l_action);
        dbms_output.put_line( nvl( sys_context('userenv', 'current_sql'), 'My lips are sealed'));
    end before each row;
    
    after each row is
    begin
        dbms_output.put_line('After Row '||l_action);
        dbms_output.put_line( nvl( sys_context('userenv', 'current_sql'), 'My lips are sealed'));
    end after each row;
    
    after statement is
    begin
        dbms_output.put_line('After Statement '||l_action);
        dbms_output.put_line( nvl( sys_context('userenv', 'current_sql'), 'My lips are sealed'));
    end after statement;
end trg_msg;
/

Next up, you need a procedure to serve as a handler for a Fine Grained Auditing event. The reason for this will become apparent when we run the test. Note that the signature for an FGA handler procedure is mandated :

create or replace procedure trg_msg_fga
(
    object_schema varchar2,
    object_name varchar2,
    policy_name varchar2
)
is
begin
    dbms_output.put_line('FGA Policy');
    dbms_output.put_line(sys_context('userenv', 'current_sql'));
    dbms_output.put_line(sys_context('userenv', 'current_bind'));
    dbms_output.put_line(sys_context('userenv', 'current_sql_length'));
end;
/

Now all that’s left to do is to create an FGA policy on the table :

begin
    dbms_fga.add_policy
    (
        object_schema => 'MIKE',
        object_name => 'TRIGGER_MESSAGES',
        policy_name => 'FIRING_ORDER',
        statement_types => 'INSERT, UPDATE, DELETE',
        handler_schema => 'MIKE',
        handler_module => 'TRG_MSG_FGA'
    );
end;
/

You can confirm that the policy has been created successfully and is enabled by querying DBA_AUDIT_POLICIES…

select object_schema, object_name, enabled,
    sel, ins, upd, del
from dba_audit_policies
where policy_owner = user
and policy_name = 'FIRING_ORDER'
/

OBJECT_SCHEMA	OBJECT_NAME	     ENABLED	SEL   INS   UPD   DEL
--------------- -------------------- ---------- ----- ----- ----- -----
MIKE		TRIGGER_MESSAGES     YES	NO    YES   YES   YES

Now you’re ready to test…

set serveroutput on size unlimited

begin 
    add_message('Spectre - at the cutting-edge of laser technology');
end;
/

update trigger_messages set message = 'Spectre - coming to a browser near you'
/

delete from trigger_messages
/

The results are quite interesting…

Before Statement INSERT
My lips are sealed
Before Row INSERT
My lips are sealed
FGA Policy
INSERT INTO TRIGGER_MESSAGES(MESSAGE) VALUES( :B1 )
#1(49):Spectre - at the cutting-edge of laser technology
51
After Row INSERT
My lips are sealed
After Statement INSERT
My lips are sealed

PL/SQL procedure successfully completed.

Before Statement UPDATE
My lips are sealed
Before Row UPDATE
My lips are sealed
FGA Policy
update trigger_messages set message = 'Spectre - coming to a browser near you'
78
After Row UPDATE
My lips are sealed
After Statement UPDATE
My lips are sealed

1 row updated.

Before Statement DELETE
My lips are sealed
Before Row DELETE
My lips are sealed
After Row DELETE
My lips are sealed
FGA Policy
delete from trigger_messages
28
After Statement DELETE
My lips are sealed

1 row deleted.

From this you conclude that :

  • sys_context is only populated with the current statement inside the fga handler procedure
  • the handler procedure is invoked prior to the after row event for inserts and updates, but not for deletes

At this point, you consider that it might just be simpler to interrogate the DBA_FGA_AUDIT_TRAIL view, which has also captured the DML statements we’ve just run :

select sql_text
from dba_fga_audit_trail
where policy_name = 'FIRING_ORDER'
order by timestamp
/  

SQL_TEXT
----------------------------------------------------------------------------------------------------------------------------------
INSERT INTO TRIGGER_MESSAGES(MESSAGE) VALUES( :B1 )
update trigger_messages set message = 'Spectre - coming to a browser near you'
delete from trigger_messages

Note – the bind values for the procedure call can be found in the SQL_BIND column of this view.
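
For example, a minor variation on the query above (just a quick sketch) returns the binds alongside each statement :

select sql_text, sql_bind
from dba_fga_audit_trail
where policy_name = 'FIRING_ORDER'
order by timestamp
/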

However, it’s worth noting that we haven’t actually committed the test transaction, yet these records are already present.
They will remain there, even if the transaction is rolled back.
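
If you want to check this for yourself, a quick sketch along these lines, run in the same session as the test above, should show the audit rows surviving a rollback (FGA writes them independently of your transaction) :

rollback;

select count(*)
from dba_fga_audit_trail
where policy_name = 'FIRING_ORDER'
/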

In the end, you decide that the best approach is a journalling trigger…

The Unnecessarily Slow Dipping Mechanism – the DML trigger

Due to the nature of the organization, Spectre doesn’t have employees. It has associates. This is reflected in the table that you need to audit :

create table associates
(
    emp_id number,
    emp_name varchar2(100),
    job_title varchar2(30)
)
/

The table to hold the audit trail will probably look something like this :

create table assoc_audit
(
    action varchar2(6),
    changed_by varchar2(30),
    change_ts timestamp,
    emp_id number,
    emp_name varchar2(100),
    job_title varchar2(30),
    statement varchar2(4000),
    binds varchar2(4000)
)
/

It’s worth pausing at this point to note that SYS_CONTEXT can report up to 32k of a statement.
It does this by splitting the statement into eight 4k chunks, available in the USERENV context variables CURRENT_SQL, CURRENT_SQL1…CURRENT_SQL7.
It also provides the length of the statement it currently holds in the CURRENT_SQL_LENGTH variable.
Therefore, you may consider having a 32k varchar statement column in the audit table (if this is enabled on your database), or even a column for the contents of each of these variables.
For the sake of simplicity, plus the fact that none of the examples here are very large, you decide to stick with just the one 4k varchar column to hold the statement.
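
Should you ever need the full 32k, a minimal sketch of reassembling the chunks might look something like the function below. The name is purely illustrative, and the optional third argument to SYS_CONTEXT is there because the return value is otherwise capped at 256 bytes :

create or replace function full_current_sql
    return clob
is
    l_stmt clob;
begin
    -- current_sql holds the first 4k chunk, current_sql1 to current_sql7 the rest
    l_stmt := sys_context('userenv', 'current_sql', 4000);
    for i in 1..7 loop
        l_stmt := l_stmt || sys_context('userenv', 'current_sql' || i, 4000);
    end loop;
    return l_stmt;
end full_current_sql;
/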

There’s a procedure for adding new records to the table :

create or replace procedure add_associate
(
    i_emp_id in associates.emp_id%type,
    i_name in associates.emp_name%type,
    i_job_title in associates.job_title%type
)
is
begin
    insert into associates( emp_id, emp_name, job_title)
    values( i_emp_id, i_name, i_job_title);
end;
/
    

In the real world this would probably be in a package, but hey, you’re working for Spectre.

Now we need a handler for the FGA policy that we’re going to implement. In order for the context values that are captured to be accessible to the trigger, this handler is going to be part of a package which includes a couple of package variables :

create or replace package assoc_fga_handler
as

    g_statement varchar2(4000);
    g_binds varchar2(4000);
    
    -- The procedure to be invoked by the FGA policy.
    -- Note that the signature for this procedure is mandatory
    procedure set_statement
    (
        object_schema varchar2,
        object_name varchar2,
        policy_name varchar2
    );
end assoc_fga_handler;
/

create or replace package body assoc_fga_handler
as

    procedure set_statement
    (
        object_schema varchar2,
        object_name varchar2,
        policy_name varchar2
    )
    is
    begin
        g_statement := sys_context('userenv', 'current_sql');
        g_binds := sys_context('userenv', 'current_bind');
    end set_statement;
end assoc_fga_handler;
/

Now for the trigger. You may notice some compromises here …

create or replace trigger assoc_aud
    for insert or update or delete on associates
    compound trigger

    type typ_audit is table of assoc_audit%rowtype index by pls_integer;
    tbl_audit typ_audit;
    l_idx pls_integer := 0;
    
    after each row is
    begin
        l_idx := tbl_audit.count + 1;
        tbl_audit(l_idx).action := case when inserting then 'INSERT' when updating then 'UPDATE' else 'DELETE' end;
        tbl_audit(l_idx).changed_by := user;
        tbl_audit(l_idx).change_ts := systimestamp;
        tbl_audit(l_idx).emp_id := case when inserting then :new.emp_id else :old.emp_id end;
        tbl_audit(l_idx).emp_name := case when inserting then :new.emp_name else :old.emp_name end;
        tbl_audit(l_idx).job_title := case when inserting then :new.job_title else :old.job_title end;
    end after each row;
    
    after statement is
    begin
        for i in 1..tbl_audit.count loop
            tbl_audit(i).statement := assoc_fga_handler.g_statement;
            tbl_audit(i).binds := assoc_fga_handler.g_binds;
        end loop;
        forall j in 1..tbl_audit.count
            insert into assoc_audit values tbl_audit(j);
        -- cleardown the array
        tbl_audit.delete;    
    end after statement;
end assoc_aud;
/

Due to the fact that the FGA policy is not fired until after an AFTER ROW trigger for a DELETE, we are only guaranteed to capture the CURRENT_SQL value in an AFTER STATEMENT trigger.
The upshot is that we’re left with a PL/SQL array which is not constrained by a LIMIT clause. In these circumstances it’s not too much of an issue; Spectre has quite a small number of employees…er…associates, so you’re not likely to end up with an array large enough to cause memory issues.
On a potentially larger volume of records you may well consider splitting the INSERT and UPDATE portions of the trigger so that you can limit the size of the arrays generated by these operations. For DELETEs however, it appears that we may well be stuck with this approach.
On a not entirely unrelated subject, Jeff Kemp has an interesting method of speeding up Journalling Triggers.

All that remains is for the FGA policy….

begin
    dbms_fga.add_policy
    (
        object_schema => 'MIKE',
        object_name => 'ASSOCIATES',
        policy_name => 'ASSOCIATES_DML',
        statement_types => 'INSERT, UPDATE, DELETE',
        handler_schema => 'MIKE',
        handler_module => 'ASSOC_FGA_HANDLER.SET_STATEMENT'
    );
end;
/

…and now you can test…

set serveroutput on size unlimited
--
-- Cleardown the tables before running the test
--
truncate table assoc_audit
/

truncate table associates
/

begin
    add_associate(1, 'Odd Job', 'HENCHMAN');
    add_associate(2, 'Jaws', 'HENCHMAN');
    add_associate(3, 'Mayday', 'HENCHWOMAN');
    add_associate(4, 'Ernst Stavro Blofeld', 'CRIMINAL MASTERMIND');
    add_associate(5, 'Emilio Largo', 'Deputy Evil Genius');
    
end;
/

insert into associates( emp_id, emp_name, job_title)
values(6, 'Hans', 'Bodyguard and Piranha keeper')
/

commit;

update associates
set job_title = 'VALET'
where emp_id = 1
/
commit;


delete from associates
where emp_id = 1
/

commit;

-- Spectre is an Equal Opportunities Employer...and I need a statement
-- affecting multiple rows to test so...
update associates
set job_title = 'HENCHPERSON'
where job_title in ('HENCHMAN', 'HENCHWOMAN')
/

commit;

It is with a sense of relief that, when you check the audit table after running this, you find …

select action, emp_name, 
    statement, binds
from assoc_audit
order by change_ts
/

ACTION EMP_NAME 	    STATEMENT							 BINDS
------ -------------------- ------------------------------------------------------------ ----------------------------------------
INSERT Odd Job		    INSERT INTO ASSOCIATES( EMP_ID, EMP_NAME, JOB_TITLE) VALUES(  #1(1):1 #2(7):Odd Job #3(8):HENCHMAN
			     :B3 , :B2 , :B1 )

INSERT Jaws		    INSERT INTO ASSOCIATES( EMP_ID, EMP_NAME, JOB_TITLE) VALUES(  #1(1):2 #2(4):Jaws #3(8):HENCHMAN
			     :B3 , :B2 , :B1 )

INSERT Mayday		    INSERT INTO ASSOCIATES( EMP_ID, EMP_NAME, JOB_TITLE) VALUES(  #1(1):3 #2(6):Mayday #3(10):HENCHWOMAN
			     :B3 , :B2 , :B1 )

INSERT Ernst Stavro Blofeld INSERT INTO ASSOCIATES( EMP_ID, EMP_NAME, JOB_TITLE) VALUES(  #1(1):4 #2(20):Ernst Stavro Blofeld #3(
			     :B3 , :B2 , :B1 )						 19):CRIMINAL MASTERMIND

INSERT Emilio Largo	    INSERT INTO ASSOCIATES( EMP_ID, EMP_NAME, JOB_TITLE) VALUES(  #1(1):5 #2(12):Emilio Largo #3(18):Depu
			     :B3 , :B2 , :B1 )						 ty Evil Genius

INSERT Hans		    insert into associates( emp_id, emp_name, job_title)
			    values(6, 'Hans', 'Bodyguard and Piranha keeper')

UPDATE Odd Job		    update associates
			    set job_title = 'VALET'
			    where emp_id = 1

DELETE Odd Job		    delete from associates
			    where emp_id = 1

UPDATE Jaws		    update associates
			    set job_title = 'HENCHPERSON'
			    where job_title in ('HENCHMAN', 'HENCHWOMAN')

UPDATE Mayday		    update associates
			    set job_title = 'HENCHPERSON'
			    where job_title in ('HENCHMAN', 'HENCHWOMAN')


10 rows selected.

Looks like the Piranhas will be going hungry…for now !


Filed under: Oracle, PL/SQL, SQL Tagged: audit_trail initialization parameter, compound trigger, dba_fga_audit_trail, dbms_fga.add_policy, SYS_CONTEXT, sys_context current_bind, sys_context current_sql, sys_context current_sql_length, v$option, v$parameter, v$version

Signup For More Information On LMS Subscription Service

Michael Feldstein - Sun, 2016-04-17 12:54

By Phil Hill

As we roll out our upcoming LMS subscription service here at e-Literate (see Michael’s post for initial description), we suspect that many of the e-Literate readers will be interested, but not all. We value the community here at e-Literate and want to ensure that the blog site itself remains as it’s always been – ad free, uncluttered, and with the same rough amount and breadth of content and discussions.

To help maintain the blog site’s feel, we have created a second email subscription for those people who would like more information on the LMS subscription service – when it’s going to be available, what the reports will look like, summaries of LMS analysis from the report and curated from the blog site, etc. You should see this new signup on the top of the right column in the desktop view, right under the signup for e-Literate posts. We will also include the form within posts that are relevant to analysis of the LMS market.

We are not going to automatically add current e-Literate subscribers to this new list, so if you’re interested in learning more on the service and getting content updates, sign up for the new subscription at this link.

Update: I apologize for any confusion as we test the signup button. The fields embedded in the post were not working in all cases, so we have replaced them with a link to a web signup page. Thank you for your patience as we fix any remaining issues.

The post Signup For More Information On LMS Subscription Service appeared first on e-Literate.

Online Relocation of Database File : ASM to FileSystem and FileSystem to ASM

Hemant K Chitale - Sun, 2016-04-17 09:44
There have been few published examples of the online datafile relocation feature in 12c, and the examples I've seen are all on filesystems.

Here I show online relocation to/from ASM and FileSystem.

SQL> connect system/oracle
Connected.
SQL> create tablespace test_relocate;

Tablespace created.

SQL> create table test_relocate_tbl
2 tablespace test_relocate
3 as select * from dba_objects;

Table created.

SQL> select tablespace_name, bytes/1024
2 from user_segments
3 where segment_name = 'TEST_RELOCATE_TBL';

TABLESPACE_NAME BYTES/1024
------------------------------ ----------
TEST_RELOCATE 13312

SQL> select file_name, bytes/1024
2 from dba_data_files
3 where tablespace_name = 'TEST_RELOCATE';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
+DATA/NONCDB/DATAFILE/test_relocate.260.909444793
102400


SQL>
SQL> alter database move datafile
2 '+DATA/NONCDB/DATAFILE/test_relocate.260.909444793'
3 to '/oradata/NONCDB/test_relocate_01.dbf';

Database altered.

SQL> !ls -l /oradata/NONCDB
total 102408
-rw-r----- 1 oracle asmdba 104865792 Apr 17 23:39 test_relocate_01.dbf

SQL> 
SQL> alter database move datafile  
2 '/oradata/NONCDB/test_relocate_01.dbf'
3 to '+DATA';

Database altered.

SQL> select file_name, bytes/1024
2 from dba_data_files
3 where tablespace_name = 'TEST_RELOCATE';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
+DATA/NONCDB/DATAFILE/test_relocate.260.909445261
102400


SQL>
SQL> !ls -l /oradata/NONCDB
total 0

SQL>


Note that I was courageous enough to not use the KEEP keyword (which is optional !).
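
For completeness, here is a sketch of the KEEP variant (not run here), using the ASM file name from the listing above. KEEP retains the original datafile in the old location while the database switches to the new copy:

alter database move datafile
  '+DATA/NONCDB/DATAFILE/test_relocate.260.909445261'
  to '/oradata/NONCDB/test_relocate_01.dbf' keep;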
.
.
.

Categories: DBA Blogs

ADF 12c Custom Property Groovy and AllowUntrustedScriptAccess Annotation

Andrejus Baranovski - Sun, 2016-04-17 03:41
To execute a Groovy expression in ADF 12c (to call a Java method from Groovy), you must specify trusted mode. Read more about it in my previous post - ADF BC 12c New Feature - Entity-Level Triggers. Setting the mode to trusted works in most cases. It doesn't work, however, if we want to execute a Groovy expression (calling a Java method in a ViewRow or Entity class) for a custom property. In the case of a custom property whose Groovy calls a custom method, we need to annotate the Java class with AllowUntrustedScriptAccess. This does the trick, and the Groovy expression can call the custom method.

To demonstrate the use case, I am using the mandatory property. This is a standard property that controls whether an attribute is required or not. By default, the mandatory property is static, but we can make it dynamic with a Groovy expression. I have implemented a rule where the Salary attribute is required if the value is more than 5000:


Salary is not required if the value is less than 5000. This is just an example; you can implement more complex logic:


There is a method in the ViewRow class to calculate the required property for the Salary attribute:


The method is callable from the custom property Groovy expression, as we have set the AllowUntrustedScriptAccess annotation on the ViewRow class. The annotation definition must list all allowed methods:


Here you can see the Groovy expression used to call the ViewRow class method - mandatory. The expression is assigned to a custom property, which will be referenced from the ADF UI:


There is an issue with JDEV 12c: it fails to parse custom properties set with Groovy expressions and removes the custom property from the code. To prevent such behavior, specify an empty string for the custom property value (this will allow the Groovy expression to be kept):


Luckily, in this case of the mandatory check, JDEV by default generates the mandatory property for the ADF UI component. We simply override its original value with the custom property and calculate the new value in the custom trusted method referenced by the Groovy expression:


Download sample application - ADF12cGroovyCustomPropertyApp.zip.