
Feed aggregator

Philosophy 22

Jonathan Lewis - Thu, 2014-07-03 02:59

Make sure you agree on the meaning of the jargon.

If you had to vote, would you say that the expressions “more selective” and “higher selectivity” are different ways of expressing the same idea, or are they exact opposites of each other? I think I can safely say that I have seen people waste a ludicrous amount of time arguing past each other and confusing each other because they didn’t clarify their terms (and one, or both, parties actually misunderstood the terms anyway).

Selectivity is a value between 0 and 1 that represents the fraction of data that will be selected – the higher the selectivity, the more data you select.

If a test is “more selective” then it is a harsher, more stringent, test and returns less data  (e.g. Oxford University is more selective than Rutland College of Further Education): more selective means lower selectivity.
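
To put some made-up numbers on it (the row counts below are invented purely to illustrate the terminology):

# Illustrative figures only (Python; note the float to avoid integer division)
total_rows       = 10000.0
rows_predicate_a = 100      # a harsh, stringent test
rows_predicate_b = 2000     # a much looser test

selectivity_a = rows_predicate_a / total_rows    # 0.01
selectivity_b = rows_predicate_b / total_rows    # 0.2

# Predicate A is the "more selective" test: it returns less data,
# so it has the *lower* selectivity.
assert selectivity_a < selectivity_b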

If there’s any doubt when you’re in the middle of a discussion – drop the jargon and explain the intention.

Footnote

If I ask:  “When you say ‘more selective’ do you mean ….”

The one answer which is absolutely, definitely, unquestionably the wrong reply is: “No, I mean it’s more selective.”

 


Taleo Interview Evaluations, Part 2

Oracle AppsLab - Thu, 2014-07-03 02:20

So, if you read Part 1, you’re all up to speed. If not, no worries. You might be a bit lost, but if you care, you can bounce over and come back for the thrilling conclusion.

I first showed the Taleo Interview Evaluation Glass app and Android app at a Taleo and HCM Cloud customer expo in late April, and as I showed it, my story evolved.

Demos are living organisms; the more you show them, the more you morph the story to fit the reactions you get. As I showed the Taleo Glass app, the demo became more about Glass and less about the story I was hoping to tell, which was about completing the interview evaluation more quickly to move along the hiring process.

So, I began telling that story in context of allowing any user, with any device, to complete these evaluations quickly, from the heads-up hotness of Google Glass, all the way down the technology coolness scale to a boring old dumbphone with just voice and text capabilities.

I used the latter example for two reasons. First, the juxtaposition of Google Glass and a dumbphone sending texts got a positive reaction and focused the demo around how we solved the problem vs. “is that Google Glass?”

And second, I was already designing an app to allow a user with a dumbphone to complete an interview evaluation.

Noel (@noelportugal) introduced me to Twilio (@twilio) years ago when he built the epic WebCenter Rock ‘em Sock ‘em Robots. Those robots punched based on text and voice input collected by Twilio.

Side note, Noel has long been a fan of Twilio’s, and happily, they are an Oracle Partner. Ultan (@ultan) is hard at work dreaming up cool stuff we can do with Twilio, so stay tuned.

Anyway, Twilio is the perfect service to power the app I had in mind. Shortly after the customer expo ended, I asked Raymond to build out this new piece, so I could have a full complement of demos to show that fit the full story.

In about a week, Raymond was done, and we now have a holistic story to tell.

The interface is dead simple. The user simply sends text messages to a specific number, using a small set of commands. First, sending “Taleo help” returns a list of the commands. Next, the user sends “Taleo eval requests” to retrieve a list of open interview evaluations.


The user then sends a command to start one of the numbered evaluations, e.g. “Start eval 4”, and each question is sent as a separate message.

[Screenshots: evaluation questions delivered as text messages]

When the final question has been answered, a summary of the user’s answers is sent, and the user can submit the evaluation by sending “Confirm submit.”

[Screenshot: answer summary and submit confirmation]
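
For readers curious how the plumbing for something like this might look, here is a rough, hypothetical sketch of an inbound-SMS webhook built with Flask and Twilio’s TwiML. It is not the actual demo code; the endpoint name, canned replies and hard-coded evaluation list are all invented for illustration.

from flask import Flask, request

app = Flask(__name__)

HELP_TEXT = ("Commands: 'Taleo help', 'Taleo eval requests', "
             "'Start eval <n>', 'Confirm submit'")

def twiml_reply(text):
    # Twilio expects a TwiML document; <Message> sends an SMS back to the sender.
    xml = ('<?xml version="1.0" encoding="UTF-8"?>'
           '<Response><Message>%s</Message></Response>' % text)
    return xml, 200, {'Content-Type': 'application/xml'}

@app.route('/sms', methods=['POST'])
def inbound_sms():
    body = request.form.get('Body', '').strip().lower()
    if body == 'taleo help':
        return twiml_reply(HELP_TEXT)
    if body == 'taleo eval requests':
        # In the real demo this list would come from Taleo; hard-coded here.
        return twiml_reply('1. Jane Doe - Sr. Developer\n2. John Roe - UX Designer')
    if body.startswith('start eval'):
        return twiml_reply("Question 1: Rate the candidate's technical skills (1-5)")
    if body == 'confirm submit':
        return twiml_reply('Evaluation submitted. Thanks!')
    return twiml_reply("Sorry, I didn't understand that. Text 'Taleo help' for commands.")

if __name__ == '__main__':
    app.run(port=5000)

In Twilio’s console you would then point the phone number’s incoming-message webhook at the /sms URL, and the replies come back to the user as ordinary text messages.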

 

And that’s it. Elegant and simple and accessible to any manager, e.g. field managers who spend their days traveling between job sites. Coupled with the Glass app and the Android app, we’ve covered all the bases not already covered by Taleo’s web app and mobile apps.

As always, the disclaimer applies. This is not product. It’s simply a concept demo, built to show people the type of R&D we, Oracle Applications User Experience and this team, do. Not product, only research.

Find the comments.

GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 3: Setup Journalizing

Rittman Mead Consulting - Wed, 2014-07-02 23:39

After a short vacation, some exciting news, and a busy few weeks (including KScope14 in Seattle, WA), it’s time to get the “GoldenGate and Oracle Data Integrator – A Perfect Match in 12c” blog series rolling again. Hopefully readers can find some time between World Cup matches to try integrating ODI and GoldenGate on their own!

To recap my previous two posts on this subject, I first started by showing the latest Information Management Reference Architecture at a high-level (described in further detail by Mark Rittman) and worked through the JAgent configuration, necessary for communication between ODI and GoldenGate. In the second post, I walked through the changes made to the GoldenGate JKM in ODI 12c and laid out the necessary edits for loading the Foundation layer at a high-level. Now, it’s time to make the edits to the JKM and set up the ODI metadata.

Before I jump into the JKM customization, let’s go through a brief review of the foundation layer and its purpose. The foundation schema contains tables that are essentially duplicates of the source table structure, but with the addition of the foundation audit columns, described below, that allow for the storage of all transactional history in the tables.

  • FND_SCN (System Change Number)
  • FND_COMMIT_DATE (when the change was committed)
  • FND_DML_TYPE (DML type for the transaction: insert, update, delete)

The GoldenGate replicat parameter file must be set up to map the source transactions into the foundation tables using the INSERTALLRECORDS option. This is the same option that the replicat uses to load the J$ tables, allowing only inserts and no updates or deletes. A few changes to the JKM will allow us to choose whether or not we want to load the Foundation schema tables via GoldenGate.

Edit the Journalizing Knowledge Module

To start, make a copy of the “JKM Oracle to Oracle Consistent (OGG Online)” so we don’t modify the original. Now we’re ready to make our changes.

Add New Options

A couple of new Options will need to be added to enable the additional feature of loading the foundation schema, while still maintaining the original JKM code. Option values are set during the configuration of the JKM on the Model, but can also have a default in the JKM.

APPLY_FOUNDATION

[Screenshot: the new APPLY_FOUNDATION option]

This option, when set to true, will enable the new task (added below) during the Start Journal process, allowing it to generate the source-to-foundation mapping statement in the Replicat (apply) parameter file.

FND_LSCHEMA

[Screenshot: the new FND_LSCHEMA option]

This option will be set to the Logical Schema name for the Foundation layer, and will be used to find the physical database schema name when it is written to the GoldenGate replicat parameter file.

Add a New Task

With the options created, we can now add the additional task to the JKM that will create the source-to-foundation table mappings in the GoldenGate replicat parameter file. The quickest way to add the task is to duplicate a current task. Open the JKM to the Tasks tab and scroll down to the “Create apply prm (3)” step. Right click the task and select Duplicate. A copy of the task will be created in the order that we want, just after the step we duplicated.

Rename the step to “Create apply prm (4) RM”, adding the additional RM tag so it’s easily identifiable as a custom step. From the properties, open the Edit Expression dialog for the Target Command. The map statement, just below the OdiOutFile line, will need to be modified. First, remove the IF statement code, as the execution of this step will be driven by the APPLY_FOUNDATION option being set to true.

Here’s a look at the final code after editing.

map <%= odiRef.getObjectName("L", odiRef.getJrnInfo("TABLE_NAME"), odiRef.getOggModelInfo("SRC_LSCHEMA"), "D") %>, TARGET <%= odiRef.getSchemaName("" + odiRef.getOption("FND_LSCHEMA") + "","D") %>.<%= odiRef.getJrnInfo("TABLE_NAME") %>, KEYCOLS (<%= odiRef.getColList("", "[COL_NAME]", ", ", "", "PK") %>, FND_SCN)<%if (!odiRef.getOption("NB_APPLY_PROCESS").equals("1")) {%>, FILTER (@RANGE(#ODI_APPLY_NUMBER,<%= nbApplyProcesses %>,<%= odiRef.getColList("", "[COL_NAME]", ", ", "", "PK") %>))<% } %> INSERTALLRECORDS,
COLMAP (
USEDEFAULTS,
FND_COMMIT_DATE = @GETENV('GGHEADER' , 'COMMITTIMESTAMP'),
FND_SCN = @GETENV('TRANSACTION' , 'CSN'),
FND_DML_TYPE = @GETENV('GGHEADER' , 'OPTYPE')
);

The output of this step is going to be a mapping for each source-to-foundation table in the GoldenGate replicat parameter file, similar to this:

map PM_SRC.SRC_CITY, TARGET EDW_FND.SRC_CITY, KEYCOLS (CITY_ID, FND_SCN) INSERTALLRECORDS,
COLMAP (
USEDEFAULTS,
FND_COMMIT_DATE = @GETENV('GGHEADER' , 'COMMITTIMESTAMP'),
FND_SCN = @GETENV('TRANSACTION' , 'CSN'),
FND_DML_TYPE = @GETENV('GGHEADER' , 'OPTYPE')
);

The column mappings (COLMAP clause) are hard-coded into the JKM, with the parameter USEDEFAULTS mapping each column one-to-one. We also hard-code each foundation audit column mapping to the appropriate environment variable from the GoldenGate trail file. Learn more about the GETENV GoldenGate function here.

The bulk of the editing on this step is done to the MAP statement. The out-of-the-box JKM is set up to apply transactional changes to both the J$, or change table, and the fully replicated table. Now we need to add the mapping to the foundation table. In order to do so, we first need to identify the foundation schema and table name for the target table using the ODI Substitution API.

map ... TARGET <%= odiRef.getSchemaName("" + odiRef.getOption("FND_LSCHEMA") + "", "D") %> ...

The nested Substitution API call allows us to get the physical database schema name based on the ODI Logical Schema that we will set in the option FND_LSCHEMA, during setup of the JKM on the ODI Model. Then, we concatenate the target table name with a dot (.) in between to get the fully qualified table name (e.g. EDW_FND.SRC_CITY).

... KEYCOLS (<%= odiRef.getColList("", "[COL_NAME]", ", ", "", "PK") %>, FND_SCN) ...

We also added the FND_SCN to the KEYCOLS clause, forcing the uniqueness of each row in the foundation tables. Because we only insert records into this table, the natural key will most likely be duplicated numerous times should a record be updated or deleted on the source.

Set Options

The previously created task,  “Create apply prm (4) RM”, should be set to execute only when the APPLY_FOUNDATION option is “true”. On this step, go to the Properties window and choose the Options tab. Deselect all options except APPLY_FOUNDATION, and when Start Journal is run, this step will be skipped unless APPLY_FOUNDATION is true.

[Screenshot: task options, with only APPLY_FOUNDATION selected]

Edit Task

Finally, we need to make a simple change to the “Execute apply commands online” task. First, add the custom step indicator (in my example, RM) to the end of the task name. In the target command expression, comment out the “start replicat …” command by using a double-dash.

--start replicat ...

This prevents GoldenGate from starting the replicat process automatically, as we’ll first need to complete an initial load of the source data to the target before we can begin replication of new transactions.

Additional Setup

The GoldenGate Manager and JAgent are ready to go, as is the customized “JKM Oracle to Oracle Consistent (OGG Online)” Journalizing Knowledge Module. Now we need to set up the Topology for both GoldenGate and the data sources.

Setup GoldenGate Topology - Data Servers

In order to properly use the “online” integration between GoldenGate and Oracle Data Integrator, a connection must be set up for the GoldenGate source and target. These will be created as ODI Data Servers, just as you would create an Oracle database connection. But, rather than provide a JDBC URL, we will enter connection information for the JAgent that we configured in the initial post in the series.

First, open up the Physical Architecture under the Topology navigator and find the Oracle GoldenGate technology. Right-click and create a new Data Server.

[Screenshot: creating the Oracle GoldenGate Data Server]

Fill out the information regarding the GoldenGate JAgent and Manager. To find the JAgent port, browse to the GG_HOME/cfg directory and open “Config.properties” in a text viewer. Down towards the bottom you will find “jagent.rmi.port”, which is used when OEM is enabled.

####################################################################
## jagent.rmi.port ###
## RMI Port which EM Agent will use to connect to JAgent ###
## RMI Port will only be used if agent.type.enabled=OEM ###
####################################################################
jagent.rmi.port=5572

The rest of the connection information can be recalled from the JAgent setup.

[Screenshot: GoldenGate Data Server connection settings]

Once completed, test the connection to ensure all of the parameters are correct. Be sure to set up a Data Server for both the source and target, as each will have its own JAgent connection information.

Setup GoldenGate Topology - Schemas

Now that the connection is set, the Physical Schema for both the GoldenGate source and target must be created. These schemas tie directly to the GoldenGate process groups and will be the name of the generated parameter files. Under the source Data Server, create a new Physical Schema. Choose the process type of “Capture”, provide a name (8 characters or less due to GoldenGate restrictions), and enter the trail file paths for the source and target trail files.

Create the Logical Schema just as you would with any other ODI Technology, and the extract process group schema is set.

For the target, or replicat, process group, perform the same actions on the GoldenGate target Data Server. This time, we just need to specify the target trail file directory, the discard directory (where GoldenGate reporting and discarded records will be stored), and the source definitions directory. The source definitions file is a GoldenGate representation of the source table structure, used when the source and target table structures do not match. The Online JKM will create and place this file in the source definitions directory.

Again, set up the Logical Schema as usual and the connections and process group schemas are ready to go!

The final piece of the puzzle is to set up the source and target data warehouse Data Servers, Physical Schemas, and Logical Schemas. Use the standard best practices for this setup, and then it’s time to create ODI Models and start journalizing. In the next post, Part 4 of the series, we’ll walk through applying the JKM to the source Model and start journalizing using the Online approach to GoldenGate and ODI integration.

Categories: BI & Warehousing

Fall 2012 US Distance Education Enrollment: Now viewable by each state

Michael Feldstein - Wed, 2014-07-02 23:15

Starting in late 2013, the National Center for Educational Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) started providing preliminary data for the Fall 2012 term that for the first time includes online education. Using Tableau (thanks to Justin Menard for prompting me to use this), we can now see a profile of online education in the US for degree-granting colleges and universities, broken out by sector and for each state.

Please note the following:

  • For the most part, the terms distance education and online education are interchangeable, but they are not equivalent, as DE can include courses delivered by a medium other than the Internet (e.g. correspondence courses).
  • There are three tabs below – the first shows totals for the US by sector and by level (grad, undergrad); the second also shows the data for each state (this is new); the third shows a map view.


The post Fall 2012 US Distance Education Enrollment: Now viewable by each state appeared first on e-Literate.

Coherence Adapter Configuration

Antony Reynolds - Wed, 2014-07-02 23:05
SOA Suite 12c Coherence Adapter

The release of SOA Suite 12c sees the addition of a Coherence Adapter to the list of Technology Adapters that are licensed with the SOA Suite.  In this entry I provide an introduction to configuring the adapter and using the different operations it supports.

The Coherence Adapter provides access to Oracle’s Coherence Data Grid.  The adapter provides access to the cache capabilities of the grid; it does not currently support the many other features of the grid such as entry processors – more on this at the end of the blog.

Previously if you wanted to use Coherence from within SOA Suite you either used the built-in caching capability of OSB or resorted to writing Java code wrapped as a Spring component.  The new adapter significantly simplifies simple cache access operations.

Configuration

When creating a SOA domain the Coherence Adapter is shipped with a very basic configuration that you will probably want to enhance to support real requirements.  In this section I look at the configuration required to use the Coherence Adapter in the real world.

Activate Adapter

The Coherence Adapter is not targeted at the SOA server by default, so this targeting needs to be performed from within the WebLogic console before the adapter can be used.

Create a cache configuration file

The Coherence Adapter provides a default connection factory to connect to an out-of-box Coherence cache and also a cache called adapter-local.  This is helpful as an example, but it is good practice to have only a single type of object within a Coherence cache, so we will need more than one.  Without multiple caches it is hard to clean out all the objects of a particular type.  Having multiple caches also allows us to specify different properties for each cache.  The following is a sample cache configuration file used in the example.

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>TestCache</cache-name>
      <scheme-name>transactional</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <transactional-scheme>
      <scheme-name>transactional</scheme-name>
      <service-name>DistributedCache</service-name>
      <autostart>true</autostart>
    </transactional-scheme>
  </caching-schemes>
</cache-config>

This defines a single cache called TestCache.  This is a distributed cache, meaning that the entries in the cache will distributed across the grid.  This enables you to scale the storage capacity of the grid by adding more servers.  Additional caches can be added to this configuration file by adding additional <cache-mapping> elements.

The cache configuration file is referenced by the adapter connection factory and so needs to be on a file system accessed by all servers running the Coherence Adapter.  It is not referenced from the composite.

Create a Coherence Adapter Connection Factory

We find the correct cache configuration by using a Coherence Adapter connection factory.  The adapter ships with a few sample connection factories but we will create new one.  To create a new connection factory we do the following:

  1. On the Outbound Connection Pools tab of the Coherence Adapter deployment we select New to create the connection factory.
  2. Choose the javax.resource.cci.ConnectionFactory group.
  3. Provide a JNDI name; although you can use any name, something along the lines of eis/Coherence/Test is good practice (EIS tells us this is an adapter JNDI, Coherence tells us it is the Coherence Adapter, and then we can identify which adapter configuration we are using).
  4. If requested to create a Plan.xml then make sure that you save it in a location available to all servers.
  5. From the outbound connection pool tab select your new connection factory so that you can configure it from the properties tab.
    • Set the CacheConfigLocation to point to the cache configuration file created in the previous section.
    • Set the ClassLoaderMode to CUSTOM.
    • Set the ServiceName to the name of the service used by your cache in the cache configuration file created in the previous section.
    • Set the WLSExtendProxy to false unless your cache configuration file is using an extend proxy.
    • If you plan on using POJOs (Plain Old Java Objects) with the adapter rather than XML then you need to point the PojoJarFile at the location of a jar file containing your POJOs.
    • Make sure to press enter in each field after entering your data.  Remember to save your changes when done.

You may need to stop and restart the adapter to get it to recognize the new connection factory.

Operations

To demonstrate the different operations I created a WSDL with the following operations:

  • put – put an object into the cache with a given key value.
  • get – retrieve an object from the cache by key value.
  • remove – delete an object from the cache by key value.
  • list – retrieve all the objects in the cache.
  • listKeys – retrieve all the keys of the objects in the cache.
  • removeAll – remove all the objects from the cache.

I created a composite based on this WSDL that calls a different adapter reference for each operation.  Details on configuring the adapter within a composite are provided in the Configuring the Coherence Adapter section of the documentation.

I used a Mediator to map the input WSDL operations to the individual adapter references.

Schema

The input schema is shown below.

This type of pattern is likely to be used in all XML types stored in a Coherence cache.  The XMLCacheKey element represents the cache key; in this schema it is a string, but it could be another primitive type.  The other fields in the cached object are represented by a single XMLCacheContent field, but in a real example you are likely to have multiple fields at this level.  Wrapper elements are provided for lists of elements (XMLCacheEntryList) and lists of cache keys (XMLCacheEntryKeyList).  XMLEmpty is used for operations that don’t require an input.

Put Operation

The put operation takes an XMLCacheEntry as input and passes this straight through to the adapter.  The XMLCacheKey element in the entry is also assigned to the jca.coherence.key property.  This sets the key for the cached entry.  The adapter also supports automatically generating a key, which is useful if you don’t have a convenient field in the cached entity.  The cache key is always returned as the output of this operation.

Get Operation

The get operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property. This sets the key for the entry to be retrieved.

Remove Operation

The remove operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property. This sets the key for the entry to be deleted.

RemoveAll Operation

This is similar to the remove operation but instead of using a key as input to the remove operation it uses a filter.  The filter could be overridden by using the jca.coherence.filter property but for this operation it was permanently set in the adapter wizard to be the following query:

key() != ""

This selects all objects whose key is not equal to the empty string.  All objects should have a key so this query should select all objects for deletion.

Note that there appears to be a bug in the return value.  The return value is empty rather than containing the expected RemoveResponse element with a Count child element.  Note that the documentation states:

When using a filter for a Remove operation, the Coherence Adapter does not report the count of entries affected by the remove operation, regardless of whether the remove operation is successful.

When using a key to remove a specific entry, the Coherence Adapter does report the count, which is always 1 if a Coherence Remove operation is successful.

Although this could be interpreted as meaning an empty part is returned, an empty part is a violation of the WSDL contract.

List Operation

The list operation takes no input and returns the result list returned by the adapter.  The adapter also supports querying using a filter.  This filter is essentially the where clause of a Coherence Query Language statement.  When using XML types as cached entities, only the key() field can be tested, for example using a clause such as:

key() LIKE "Key%1"

This filter would match all entries whose key starts with “Key” and ends with “1”.

ListKeys Operation

The listKeys operation is essentially the same as the list operation except that only the keys are returned rather than the whole object.

Testing

To test the composite I used the new 12c Test Suite wizard to create a number of test suites.  The test suites should be executed in the following order:

  1. CleanupTestSuite has a single test that removes all the entries from the cache used by this composite.
  2. InitTestSuite has 3 tests that insert a single record into the cache.  The returned key is validated against the expected value.
  3. MainTestSuite has 5 tests that list the elements and keys in the cache and retrieve individual inserted elements.  This tests that the items inserted in the previous test are actually in the cache.  It also tests the get, list and listKeys operations and makes sure they return the expected results.
  4. RemoveTestSuite has a single test that removes an element from the cache and tests that the count of removed elements is 1.
  5. ValidateRemoveTestSuite is similar to MainTestSuite but verifies that the element removed by the previous test suite has actually been removed.
Use Case

One example of using the Coherence Adapter is to create a shared memory region that allows SOA composites to share information.  An example of this is provided by Lucas Jellema in his blog entry First Steps with the Coherence Adapter to create cross instance state memory.

However there is a problem in creating global variables that can be updated by multiple instances at the same time.  In this case the get and put operations provided by the Coherence adapter support a last write wins model.  This can be avoided in Coherence by using an Entry Processor to update the entry in the cache, but currently entry processors are not supported by the Coherence Adapter.  In this case it is still necessary to use Java to invoke the entry processor.

Sample Code

The sample code I refer to above is available for download and consists of two JDeveloper projects, one with the cache config file and the other with the Coherence composite.

  • CoherenceConfig has the cache config file that must be referenced by the connection factory properties.
  • CoherenceSOA has a composite that supports the WSDL introduced at the start of this blog along with the test cases mentioned at the end of the blog.

The Coherence Adapter is a really exciting new addition to the SOA developer’s toolkit; hopefully this article will help you make use of it.

SQL Server 2014: Are DENY 'SELECT ALL USERS SECURABLES' permissions sufficient for DBAs?

Yann Neuhaus - Wed, 2014-07-02 20:09

SQL Server 2014 improves the segregation of duties by implementing new server permissions. The most important is the SELECT ALL USERS SECURABLES permission that will help to restrict database administrators from viewing data in all databases.

My article is a complement to David Barbarin's article 'SQL Server 2014: SELECT ALL USERS SECURABLES & DB admins'.

July - Oracle Product Support Advisor Webcasts

Chris Warticki - Wed, 2014-07-02 14:57

We are pleased to invite you to our Advisor Webcast series for July 2014. Subject matter experts prepare these presentations and deliver them through WebEx. Topics include information about Oracle support services and products. To learn more about the program or to access archived recordings, please follow the links.

There are currently two types of Advisor Webcasts: product-specific webcasts, which share best practices, troubleshooting guidance, and release information; and Support Tools and Processes webcasts, designed to help you better utilize Oracle's support tools and procedures.

If you prefer to read about current events, Oracle Premier Support News provides you information, technical content, and technical updates from the various Oracle Support teams. For a full list of Premier Support News, go to My Oracle Support and enter Document ID 222.1 in Knowledge Base search.

Are you My Oracle Support Accredited? Build on your existing My Oracle Support and product knowledge. If you have actively used My Oracle Support for 6-9 months, we encourage you to get accredited!

Sincerely,
Oracle Support

  • Application Technology Group
    • ATG: EBS Reports & Printing Troubleshooting – July 17 – Enroll
  • E-Business Suite
    • EBS Proactive Key Tools you need to know: Business Process Advisors – July 30 – Enroll
    • FIN: How To Use E-Business Tax Simulator – July 16 – Enroll
    • FIN: Using Commitments, Deposits & Guarantees In Oracle Receivables – July 31 – Enroll
    • MFG: How to Estimate LCM Shipments While Setting Agreements with Your Supplier – July 8 – Enroll
    • MFG: Demantra Cluster Factor, Out of Order Ratio Plus+! Features that Improve Stability – July 9 – Enroll
    • MFG: Oracle Database Features and Characteristics that Improve Stability and User Experience for VCP – July 10 – Enroll
    • MFG: Unleash the Power of Order Import – July 15 – Enroll
    • MFG: Process Manufacturing 11i to R12 Migration Issues – July 16 – Enroll
    • MFG: EAM: Mobile Maintenance – July 17 – Enroll
    • MFG: Discrete Job Definition and Transactions – July 23 – Enroll
    • RFID, MHE and Voice enabled warehousing with Oracle Warehouse Management – July 23 – Enroll
  • Exadata
    • Exadata Disk Management and Troubleshooting Tips – July 15 – Enroll
    • Exadata Hybrid Columnar Compression – Overview – July 16 – Enroll
    • Exadata Patching Overview Including Database and Storage Server Upgrade Demo (Mandarin only) – July 16 – Enroll
  • Fusion Applications
    • Customizing Roles in Fusion Applications – July 10 – Enroll
  • Hyperion
    • Want to Discover the New Features for Calculation Manager 11.1.2.3.00 and 11.1.2.3.500 for Essbase? – July 16 – Enroll
  • JD Edwards EnterpriseOne
    • As Of Processing in EnterpriseOne Accounts Payable and Accounts Receivable – July 23 – Enroll
  • JD Edwards World
    • JD Edwards World to EnterpriseOne Migration Preparation: Application Data – July 17 – Enroll
    • JD Edwards World to EnterpriseOne Migration Preparation: Technical Conversions – July 24 – Enroll
  • PeopleSoft
    • Finding Answers with My Oracle Support – July 3 – Enroll
    • Prevent and Troubleshoot Systems and Performance Issues with PeopleSoft – July 10 – Enroll
    • PeopleSoft Diagnostics Framework Overview and Demonstration – July 17 – Enroll
    • Using My Oracle Support Community – July 24 – Enroll
    • Logging High Quality Service Requests – July 31 – Enroll

How to find EBus Patches

Chris Warticki - Wed, 2014-07-02 14:49
Are you looking for E-Business Suite patches and E-Business Suite Technology Stack patches? Take a look at this document, as it can assist you in finding all the patches needed to maintain a healthy system. This one document includes Recommended patches, Technology Stack patches, Performance patches, Critical Patch Updates, and more. Find all the details in Doc ID 1633974.2 - How to Find E-Business Suite & E-Business Suite Technology Stack Patches

Taleo Interview Evaluations, Part 1

Oracle AppsLab - Wed, 2014-07-02 10:50

Time to share a new concept demo, built earlier this Spring by Raymond and Anthony (@anthonyslai), both of whom are ex-Taleo.

Back in April, I got my first exposure to Taleo during a sales call. I was there with the AUX contingent, talking about Oracle HCM Cloud Release 8, featuring Simplified UI, our overall design philosophies and approaches, i.e. simplicity-mobility-extensibility, glance-scan-commit, and our emerging technologies work and future cool stuff.

I left that meeting with an idea for a concept demo, streamlining the interview evaluation process with a Google Glass app.

The basic pain point here is that recruiters have trouble urging the hiring managers they support through the hiring process because these managers have other job responsibilities.

It’s the classic Catch-22 of hiring; you need more people to help do work, but you’re so busy doing the actual work, you don’t have time to do the hiring.

Anyway, Taleo Recruiting has the standard controls, approvals and gating tasks that any hiring process does. One of these gating tasks is completing the interview evaluation; after interviewing a candidate, the interviewer, typically the hiring manager and possibly others, completes an evaluation of the candidate that determines her/his future path in the process.

Good evaluation, the candidate moves on in the process. Poor evaluation, the candidate does not.

Both Taleo’s web app and mobile app provide the ability to complete these evaluations, and I thought it would be cool to build a Glass app just for interview evaluations.

Having a hands-free way to complete an evaluation would be useful for a hiring manager walking between meetings on a large corporate campus or driving to a meeting. The goal here is to bring the interview evaluation closer to the actual end of the interview, while the chat is still fresh in the manager’s mind.

Imagine you’re the hiring manager. Rather than delaying the evaluation until later in the day (or week), walk out of an interview, command Glass to start the evaluation, have the questions read directly into your ear, dictate your responses and submit.

Since the Glass GDK dropped last Winter, Anthony has been looking for a new Glass project, and I figured he and Raymond would run with a Taleo project. They did.

The resulting concept demo is a Glass app and an accompanying Android app that can also be used as a dedicated interview evaluation app. Raymond and Anthony created a clever way to transfer data using the Bluetooth connection between Glass and its parent device.

Here’s the flow, starting with the Glass app. The user can either say “OK Glass” and then say “Start Taleo Glass,” or tap the home card, swipe through the cards and choose the Start Taleo Glass card.

[Screenshot: the “Start Taleo Glass” command card]

The Glass app will then wait for its companion Android app to send the evaluation details.


Next, the user opens the Android app to see all the evaluations s/he needs to complete, and then selects the appropriate one.

[Screenshot: the evaluation list in the Android app]

Tapping Talk to Google Glass sends the first question to the Glass over the Bluetooth connection. The user sees the question in a card, and Glass also dictates the question through its speaker.


Tapping Glass’ touchpad turns on the microphone so the user can dictate a response, either choosing an option for a multiple choice question or dictating an answer for an open-ended question. As each answer is received by the Android app, the evaluation updates, which is pretty cool to watch.

[Screenshot: answers updating in the Android app]

The Glass app goes through each question, and once the evaluation is complete, the user can review her/his answers on the Android app and submit the evaluation.

The guys built this for me to show at a Taleo and HCM Cloud customer expo, similar to the one AMIS hosted in March. After showing it there, I decided to expand the concept demo to tell a broader story. If you want to read about that, stay tuned for Part 2.

Itching to sound off on this post? Find the comments.

Update: The standard disclaimer applies here. This is not product of any kind. It’s simply a concept demo, built to show people the type of R&D we, Oracle Applications User Experience and this team, do. Not product, only research.

Comparisons

Jonathan Lewis - Wed, 2014-07-02 10:09

Catching up (still) from the Trivadis CBO days, here’s a little detail which had never crossed my mind before.


where   (col1, col2) < (const1, const2)

This isn’t a legal construct in Oracle SQL, even though it’s legal in other dialects of SQL. The logic is simple (allowing for the usual anomaly with NULL): the predicate should evaluate to true if (col1 < const1), or if (col1 = const1 and col2 < const2). The thought that popped into my mind when Markus Winand showed a slide with this predicate on it – and then pointed out that equality was the only option that Oracle allowed for multi-column operators – was that, despite not enabling the syntax, Oracle does implement the mechanism.

If you’re struggling to think where, it’s in multi-column range partitioning: (value1, value2) belongs in the partition with high value (k1, k2) if (value1 < k1) or if (value1 = k1 and value2 < k2).
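
As a quick illustration of the logic (not of Oracle syntax): Python does allow this construct and compares tuples lexicographically, so it makes a handy scratchpad for convincing yourself that the two forms agree (NULLs aside):

# (col1, col2) < (const1, const2) expanded by hand
def tuple_less_than(col1, col2, const1, const2):
    return (col1 < const1) or (col1 == const1 and col2 < const2)

# Python's built-in tuple comparison uses the same lexicographic rule,
# so the two expressions agree for every combination tried here.
for col1 in range(3):
    for col2 in range(3):
        assert ((col1, col2) < (1, 1)) == tuple_less_than(col1, col2, 1, 1)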


Restore datafile from service: A cool #Oracle 12c Feature

The Oracle Instructor - Wed, 2014-07-02 09:02

You can restore a datafile directly from a physical standby database to the primary. Over the network. With compressed backupsets. How cool is that?

Here’s a demo from my present class Oracle Database 12c: Data Guard Administration. prima is the primary database on host01, physt is a physical standby database on host03. There is an Oracle Net configuration on both hosts that enables host01 to tnsping physt and host03 to tnsping prima.

 

[oracle@host01 ~]$ rman target sys/oracle@prima

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Jul 2 16:43:39 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2084081935)

RMAN> run
{
set newname for datafile 4 to '/home/oracle/stage/users01.dbf';
restore (datafile 4 from service physt) using compressed backupset;
catalog datafilecopy '/home/oracle/stage/users01.dbf';
}

executing command: SET NEWNAME

Starting restore at 02-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=47 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service physt
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /home/oracle/stage/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 02-JUL-14

cataloged datafile copy
datafile copy file name=/home/oracle/stage/users01.dbf RECID=8 STAMP=851877850

This does not require backups taken on the physical standby database.


Tagged: 12c New Features, Backup & Recovery, Data Guard
Categories: DBA Blogs

What’s New with Apps Password Change in R12.2 E-Business Suite ?

Pythian Group - Wed, 2014-07-02 08:39

The apps password change routine in Release 12.2 of E-Business Suite has changed a little. We now have extra options for changing the password, as well as some manual steps to perform after changing the password using FNDCPASS.

There is a new utility called AFPASSWD. Unlike FNDCPASS, this utility won’t require you to enter the apps and system user passwords, and it makes it possible to separate duties between the database administrator and the application administrator. In most cases both of these roles are filled by the same DBA, but in large organizations there may be different teams that manage the database and the application. You can read about the different options available in AFPASSWD in the EBS Maintenance guide.

Whether you use FNDCPASS or AFPASSWD to change the APPLSYS/APPS password, you must also perform some additional steps. This is because in R12.2 the old AOL/J connection pooling is replaced with a WebLogic connection pool (JDBC data source). Currently this procedure is not automated; it would be good if it could be scripted with WLST (a rough sketch follows the list of steps below).

  • Shut down the application tier services
  • Change the APPLSYS password, as described for the utility you are using.
  • Start AdminServer using the adadminsrvctl.sh script from your RUN filesystem
  • Do not start any other application tier services.
  • Update the “apps” password in WLS Datasource as follows:
    • Log in to WLS Administration Console.
    • Click Lock & Edit in Change Center.
    • In the Domain Structure tree, expand Services, then select Data Sources.
    • On the “Summary of JDBC Data Sources” page, select EBSDataSource.
    • On the “Settings for EBSDataSource” page, select the Connection Pool tab.
    • Enter the new password in the “Password” field.
    • Enter the new password in the “Confirm Password” field.
    • Click Save.
    • Click Activate Changes in Change Center.
  • Start all the application tier services using the adstrtal.sh script.
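
For what it’s worth, here is a rough, untested WLST sketch of what that automation might look like. The data source name EBSDataSource comes from the console steps above, but the MBean path, connection details and file name are assumptions to verify in a test environment; this is not an Oracle-documented procedure.

# update_ebs_ds_password.py -- run with wlst.sh (path varies by installation)
# All values below are placeholders for illustration.
admin_url    = 't3://host01:7001'
admin_user   = 'weblogic'
admin_pwd    = 'welcome1'
new_apps_pwd = 'new_apps_password'   # the new APPS password set via FNDCPASS/AFPASSWD

connect(admin_user, admin_pwd, admin_url)
edit()
startEdit()

# EBSDataSource is the data source edited by hand in the console steps above
cd('/JDBCSystemResources/EBSDataSource/JDBCResource/EBSDataSource/JDBCDriverParams/EBSDataSource')
cmo.setPassword(new_apps_pwd)

save()
activate()
disconnect()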

I will be posting more of these “What’s new with R12.2” articles in the future. Post your experiences changing passwords in Oracle EBS in the comments section. I will be happy to hear your stories and give my input.

Categories: DBA Blogs

Essential Hadoop Concepts for Systems Administrators

Pythian Group - Wed, 2014-07-02 08:38

Of course, everyone knows Hadoop as the solution to Big Data. What’s the problem with Big Data? Well, mostly it’s just that Big Data is too big to access and process in a timely fashion on a conventional enterprise system. Even a really large, optimally tuned, enterprise-class database system has conventional limits in terms of its maximum I/O, and there is a scale of data that outstrips this model and requires parallelism at a system level to make it accessible. While Hadoop is associated in many ways with advanced transaction processing pipelines, analytics and data sciences, these applications are sitting on top of a much simpler paradigm… that being that we can spread our data across a cluster and provision I/O and processor in a tunable ratio along with it. The tune-ability is directly related to the hardware specifications of the cluster nodes, since each node has processing, I/O and storage capabilities in a specific ratio. At this level, we don’t need Java software architects and data scientists to take advantage of Hadoop. We’re solving a fundamental infrastructure engineering issue, which is “how can we scale our I/O and processing capability along with our storage capacity”? In other words, how can we access our data?

The Hadoop ecosystem at its core is simply a set of RESTfully interacting Java processes communicating over a network. The base system services, such as the data node (HDFS) and task tracker (MapReduce), run on each node in the cluster, register with an associated service master and execute assigned tasks in parallel that would normally be localized on a single system (such as reading some data from disk and piping it to an application or script). The result of this approach is a loosely coupled system that scales in a very linear fashion. In real life, the service masters (famously, NameNode and JobTracker) are a single point of failure and potential performance bottleneck at very large scales, but much has been done to address these shortcomings. In principle, Hadoop uses the MapReduce algorithm to extend parallel execution from a single computer to an unlimited number of networked computers.

MapReduce is conceptually a very simple system. Here’s how it works. Given a large data set (usually serialized), broken into blocks (as for any filesystem) and spread among the HDFS cluster nodes, feed each record in a block to STDIN of a local script, command or application, and collect the records from STDOUT that are emitted. This is the “map” in MapReduce. Next, sort each record by key (usually just the first field in a tab-delimited record emitted by the mapper, but compound keys are easily specified). This is accomplished by fetching records matching each specific key over the network to a specific cluster node, and accounts for the majority of network I/O during a MapReduce job. Finally, process the sorted record sets by feeding the ordered records to STDIN of a second script, command or application, collecting the result from STDOUT and writing them back to HDFS. This is the “reduce” in MapReduce. The reduce phase is optional, and usually takes care of any aggregations such as sums, averages and record counts. We can just as easily pipe our sorted map output straight to HDFS.

Any Linux or Unix systems administrator will immediately recognize that using STDIO to pass data means that we can plug any piece of code into the stream that reads and writes to STDIO… which is pretty much everything! To be clear on this point, Java development experience is not required. We can take advantage of Linux pipelines to operate on very large amounts of data. We can use ‘grep’ as a mapper. We can use the same pipelines and commands that we would use on a single system to filter and process data that we’ve stored across the cluster. For example,

grep -i ${RECORD_FILTER} | cut -f2 | cut -d'=' -f2 | tr '[:upper:]' '[:lower:]'

We can use Python, Perl and any other languages with support configured on the task tracker systems in the cluster, as long as our scripts and applications read and write to STDIO. To execute these types of jobs, we use the Hadoop Streaming jar to wrap the script and submit it to the cluster for processing.
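
To make that concrete, here is what a minimal pair of streaming scripts might look like in Python. The record layout and field positions are invented for illustration, so treat this as a sketch to adapt rather than a drop-in job.

# ---- mapper.py : reads raw records from STDIN, emits "key<TAB>1" ----
import sys

for line in sys.stdin:
    fields = line.rstrip('\n').split('\t')
    if len(fields) < 2:
        continue                 # skip malformed records
    key = fields[1].lower()      # assume the second field is what we group by
    print('%s\t%d' % (key, 1))

# ---- reducer.py : receives records sorted by key, sums the counts ----
import sys

current_key, count = None, 0
for line in sys.stdin:
    key, value = line.rstrip('\n').split('\t', 1)
    if key != current_key:
        if current_key is not None:
            print('%s\t%d' % (current_key, count))
        current_key, count = key, 0
    count += int(value)
if current_key is not None:
    print('%s\t%d' % (current_key, count))

The pair would then be submitted with the Hadoop Streaming jar, along the lines of hadoop jar hadoop-streaming.jar -input /logs -output /logs-out -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py (the exact jar location and options vary by distribution).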

What does this mean for us enterprise sysadmins? Let’s look at a simple, high level example. I have centralized my company’s log data by writing it to a conventional enterprise storage system. There’s lots of data and lots of people want access to it, including operations teams, engineering teams, business intelligence and marketing analysts, developers and others. These folks need to search, filter, transform and remodel the data to shake out the information they’re looking for. Conventionally, I can scale up from here by copying my environment and managing two storage systems. Then 4. Then 8. We must share and mount the storage on each system that requires access, organize the data across multiple appliances and manage access controls on multiple systems. There are many business uses for the data, and so we have many people with simultaneous access requirements, and they’re probably using up each appliance’s limited I/O with read requests. In addition, we don’t have the processor available… we’re just serving the data at this point, and business departments are often providing their own processing platforms from a limited budget.

Hadoop solves this problem of scale above the limits of conventional enterprise storage beautifully. It’s a single system to manage that scales in a practical way to extraordinary capacities. But the real value is not the raw storage capacity or advanced algorithms and data analytics available for the platform… it’s about scaling our I/O and processing capabilities to provide accessibility to the data we’re storing, thereby increasing our ability to leverage it for the benefit of our business. The details of how we leverage our data is what we often leave for the data scientists to figure out, but every administrator should know that the basic framework and inherent advantages of Hadoop can be leveraged with the commands and scripting tools that we’re already familiar with.

Categories: DBA Blogs

Differences Between R12.1 and R12.2 Integration with OAM

Pythian Group - Wed, 2014-07-02 08:37

With the revamp of the technology stack in R12.2 of Oracle E-Business Suite (EBS), the way we integrate Oracle Access Manager (OAM) has changed. R12.2 is now built on the WebLogic tech stack, which drastically changes how it integrates with other Fusion Middleware products like OAM.

Here is an overview of the steps to configure OAM with EBS R12.1:

  • Install Oracle HTTP Server (OHS) 11g
  • Deploy & Configure Webgate on OHS 11g
  • Install Weblogic
  • Deploy & Configure Accessgate on Weblogic
  • Integrate Webgate, Accessgate with EBS and OAM/OID

R12.2 has both OHS and WebLogic built in, so we no longer have to install OHS and WebLogic for Webgate and Accessgate. All we have to do is deploy and configure Webgate and Accessgate.  Webgate is deployed on top of the R12.2 OHS 11g home. Accessgate is deployed as a separate managed server (oaea_server1) on top of R12.2 WebLogic.

Here is the pictorial version:

R12.1 and 11i EBS integration with OAM/OID

[Diagram: 11i and R12.1 reference architecture]

 

R12.2 Integration with OAM/OID

[Diagram: R12.2 reference architecture]

Basically, R12.2 reduces the number of moving parts in the OAM integration with EBS. It saves DBAs a lot of time, as it reduces the number of servers to manage.

References:

Integrating Oracle E-Business Suite Release 12.2 with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Doc ID 1576425.1)

Integrating Oracle E-Business Suite Release 12 with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Doc ID 1484024.1)

Images are courtesy of Oracle from note “Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Doc ID 1388152.1)”

 

Categories: DBA Blogs

How Oracle WebCenter Customers Build Digital Businesses: Contending with Digital Disruption

WebCenter Team - Wed, 2014-07-02 07:00
Guest Blog Post by: Geoffrey Bock, Principal, Bock & Company
Customer Conversations

What are Oracle WebCenter customers doing to exploit innovative digital technologies and develop new sources of value? How are they mobilizing their enterprise applications and leveraging opportunities of the digital business revolution?
To better understand the landscape for digitally powered businesses, I talked to several Oracle WebCenter customers and systems integrators across a range of industries -- including hospitality, manufacturing, life sciences, and the public sector. Through in depth conversations with IT and business leaders, I collected a set of stories about their mobile journeys -- how they are developing next-generation enterprise applications that weave digital technologies into their ongoing operations.
In this and two subsequent blogs, I will highlight several important points from my overall roadmap for developing digital businesses.
Beyond an Aging Infrastructure

As a first step, successful customers are contending with digital disruption, and leveraging their inherent strengths to transform operations. Today they are web-aware, if not already web-savvy. Most organizations launched their initial sites more than fifteen years ago. They have steadily added web-based applications to support targeted initiatives.
Yet the customers I interviewed are now at a crossroads. They realize that they need to refresh, modernize, and mobilize their enterprise application infrastructure to build successful digital businesses.
  • One IT leader describes how her firm implemented a cutting-edge enterprise portal ten years ago. Designed for order processing and resources management, the portal now runs outdated technologies and is unable to support needed employee-facing applications.
  • Another business leader has a similar story. The company still relies on a custom designed web-based application. The technology is obsolete and the people knowledgeable about maintaining the application are difficult to find.
  • A third IT leader describes how her organization collects information through several Cold Fusion sites, and needs to replace them in order to deliver more flexible self-service applications.
From my perspective, these leaders are recognizing the power of digital disruption. To create new value, they must deliver seamless customer-, partner-, and employee-facing experiences. They are confronting the limitations of their current application infrastructure and are turning to Oracle for long-term solutions.
Rather than simply enhance what they have, leaders are opting for modernization. They need to develop and deploy native digital experiences. Web-based applications that are bolted onto an aging infrastructure are no longer sufficient.
Change and Continuity

Yet there is also continuity around integrating the end-to-end experiences. Let’s take the case of a large manufacturing firm now mobilizing its digital business around Oracle WebCenter. The business leaders identified the multiple steps in the buying process – the information customers and partners need to have to assess alternatives and make purchasing decisions.
The firm had developed multiple web sites to publish product information, offer design advice, and schedule follow-up meetings. But the end result was a fragmented and disconnected set of activities, relying first on information from one system, then from another, and lacking an end-to-end view for measuring results.
The leaders realized that they needed to connect the dots and deliver a seamless experience. In the case of this manufacturing firm, a key step blends online with real-time – helping customers schedule appointments with designers who advise them about design alternatives and product options. (From the manufacturer’s perspective, designers are channel partners who sell the finished goods and deliver support services.)
The breakthrough that accelerates the buying process focuses on these customer/designer interactions -- assembling all of the necessary information into a seamless experience, and making it easy for customers to engage with designers to finalize designs and place orders. As a result, this manufacturing firm mitigates the threat of digital disruption by mobilizing resources to complete a high-value task.

The firm empowers its partner channel by reinventing a key business process for the digital age. This becomes a win-win opportunity that increases customer satisfaction while also improving sales opportunities.


Leveraging Collective Knowledge and Subject Matter Experts to Improve the Quality of Database Support

Chris Foot - Wed, 2014-07-02 06:10

The database engine plays a strategic role in the majority of organizations. It provides the mechanism to store physical data along with business rules and executable business logic. The database’s area of influence has expanded to a point where it has become the heart of the modern IT infrastructure. Because of its importance, enterprises expect their databases to be reliable, secure and available.

Rapid advances in database technology combined with relatively high database licensing and support costs compel IT executives to ensure that their organization fully utilizes the database product’s entire feature set. The more solutions the database inherently provides, the more cost effective it becomes. These integrated features allow technicians to solve business problems without the additional costs of writing custom code and/or integrating multiple vendor solutions.

The issue then becomes one of database complexity. As database vendors incorporate new features into the database, it becomes more complex to administer. Modern database administrators require a high level of training to be able to effectively administer the environments they support. Without adequate training, problems are commonplace, availability suffers and the database’s inherent features are not fully utilized.

The Benefits of Collective Knowledge

Successful database administration units understand that providing better support to their customers not only comes from advances in technology but also from organizational innovation. The selection of support-related technologies is important, but it is the effective implementation and administration of those technologies that is critical to organizational success.

Database team managers should constantly leverage the collective knowledge of their entire support staff to improve the quality of support the team provides and reduce the amount of time required to solve problems.

One strategy to build the team’s expertise is to motivate individual team members to become Subject Matter Experts in key database disciplines. This strategy is performed informally hundreds of times in IT daily. A support professional is required to perform a given task and “gets stuck”. They spin their wheels and then decide to run down the hall and find someone they feel can provide them with advice. They consult with one or more fellow team members to solve the problem at hand.

The recommendation is to have a more formal strategy in place so that each team member, in addition to their daily support responsibilities, becomes a deep-dive specialist in a given database discipline. Their fellow team members are then able to draw from that expertise.

Increasing the Efficiency of Support – Subject Matter Experts

The database environment has become so complex that it precludes database administrators from becoming true experts in all facets of database technology. RDX’s large administrative staff allows it to increase efficiency by creating specialists in key database disciplines. In addition to expertise in providing day-to-day support, each of RDX’s support staff members is required to become an expert in one or more database disciplines including backup and recovery, highly available architectures, SQL tuning, database performance, database monitoring, UNIX/Windows scripting and database security.

RDX allocates the support person with the strongest skill set for that particular task to provide the service requested by the customer. This methodology ensures that the customer gets the most experienced person available to perform complex tasks. Who do you want installing that five-node Oracle RAC cluster? A team member with limited knowledge, or one who has extensively studied Oracle’s high availability architecture and performs RAC installations on a daily basis?

Although your team may only consist of a half-dozen administrators, that doesn’t mean you can’t leverage the benefits that the Subject Matter Experts strategy provides. Identify personnel on the team who are interested in a particular database support discipline (e.g. security, database performance, SQL tuning, scripting) and encourage them to build their expertise in those areas. If they are interested in high availability, send them to classes, offer to reimburse them for books on that topic and/or allocate time for them to review HA-specific websites. Focus on the areas that are most critical to the needs of your shop. For instance, is your company having lots of SQL statement performance problems? A sound strategy is to have one of your team members focus on SQL tuning and support them throughout the entire educational process.

Also consider special skills during the DBA interview and selection process. At RDX, we always look for candidates who are able to provide deep-dive expertise in key database support disciplines. We have several DBAs on staff who have strong application development backgrounds, including SQL performance tuning, in addition to a strong background in database administration. We use the same strategy for HA architectures, and we look for candidates who have strong skills in any advanced database feature. We’re able to leverage that expertise for the customer’s benefit. The same strategy can be applied to any size team. Look for candidates who excel in database administration but are also strong in key areas that will improve your ability to support your internal customers.

In addition, you can draw expertise from other teams. For example, you may have access to an application developer who is strong in SQL coding and tuning or an operating system administrator who excels in scripting. Build relationships with those personnel and leverage their experience and skill sets when needed. Ask them to provide recommendations on training to your team or to assist when critical problems occur. Technicians are usually more than happy to be asked to help. Just make sure to be courteous when asking and thank them (and their manager) when they do help out.

Reducing Downtime Duration by Faster Problem Resolution

RDX’s large staff also reduces the amount of time spent on troubleshooting and problem solving. RDX is able to draw on the expertise of a very large staff of database, operating system and middle-tier administrators to resolve database performance issues and outages more quickly. Since the support staff works with many different companies, they have seen a number of different approaches to most situations.

Ninety-nine percent of our support technicians work at the same physical site. This allows RDX to create a “war room” strategy for brainstorming activities and problem solving. All technicians needed to create a solution or solve a problem are quickly brought to bear when the need arises. Support technicians come from varied backgrounds and have many different skill sets. RDX is able to leverage these skills without having to search for the right person or wait for a return call. Work can take place immediately.

This “war room” strategy works for any size team. When a significant issue occurs, leverage the entire team’s skill sets. Appoint yourself as the gatekeeper to ensure that the team remains focused on the goal of quick problem resolution and that the conversation remains productive. Brainpower counts, and the more collective knowledge you have at your disposal, the more effective your problem resolution activities become.

Conclusion

Corporate information technology executives understand that their success relies upon their ability to cut costs and improve efficiency. Decreasing profit margins and increased competition in their market segment force them to continuously search for creative new solutions to reduce the cost of the services they provide. They also realize that this reduction in cost must not come at the expense of the quality of services their organization delivers.

RDX invites you to compare the benefits of our organizational architecture and quality improvement initiatives to our competitors, your in-house personnel or your on-site consultants. We firmly believe that our Collective Knowledge Support Model allows us to provide world class support.

The post Leveraging Collective Knowledge and Subject Matter Experts to Improve the Quality of Database Support appeared first on Remote DBA Experts.

Oracle E-Business Suite Security, Java 7 and Auto-Update

Maintaining a secure Oracle E-Business Suite implementation requires constant vigilance. For the desktop clients accessing Oracle E-Business Suite, Integrigy recommends running the latest version of Java 7 SE. Java 7 is fully supported by Oracle with Public Updates through April 2015 and is patched with the latest security fixes. We anticipate that Oracle will release and certify Java 8 with the Oracle E-Business Suite sometime in late 2014.

Most corporate environments utilize a standardized version of Java, tested and certified for corporate and mission-critical applications. As such, the Java auto-update functionality cannot be used to automatically upgrade Java on all desktops. These environments require new versions of Java to be periodically pushed to all desktops. For more information on how to push Java updates through software distribution, see MOS Note 1439822.1. This note also describes how to download Java versions with the Java auto-update functionality disabled.
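If you want to verify that auto-update is actually disabled on a given Windows desktop, the short sketch below shows one way to read the relevant policy value from the registry with Python. This is not taken from the MOS note; the registry key path and value name are assumptions based on common Java desktop deployments, so verify them for your Java release before relying on the check.

  import winreg

  # Assumed policy location for the Java auto-update setting on Windows; verify
  # the exact key path and value name for your Java release. On 64-bit Windows
  # a 32-bit JRE may register under the Wow6432Node view instead.
  KEY_PATH = r"SOFTWARE\JavaSoft\Java Update\Policy"
  VALUE_NAME = "EnableJavaUpdate"

  def java_auto_update_enabled():
      """Return True/False if the policy value is set, or None if it is absent."""
      try:
          with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
              value, _ = winreg.QueryValueEx(key, VALUE_NAME)
              return bool(value)
      except FileNotFoundError:
          return None

  state = java_auto_update_enabled()
  if state is None:
      print("No auto-update policy value found on this desktop.")
  else:
      print("Java auto-update is", "enabled" if state else "disabled")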

Keep in mind, too, that the version of Java used with the E-Business Suite should be obtained from My Oracle Support; your desktop support teams may or may not have Oracle support accounts.

Other points to keep in mind:

  • To support Java 7, the Oracle E-Business Suite application servers must be updated per the instructions in MOS Note 393931.1
  • “Non-Static Versioning” should be used with the E-Business Suite to allow later versions of the JRE Plug-in to be installed on the desktop client. For example, with Non-Static Versioning, JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. With Non-Static Versioning, the web server’s version of Java is the minimum version that can be used on the desktop client (see the sketch after this list).
  • You will need to implement Enhanced JAR File Signing for the later versions of Java 7 (refer to the Integrigy blog posting for more information)
  • Remember to remove all versions of Java that are no longer needed – for example, JInitiator
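To make the Non-Static Versioning point above concrete, the sketch below checks whether a desktop JRE satisfies the rule that the web server’s configured version is the minimum and that any later JRE is acceptable. It is an illustration only; the version strings are hypothetical examples and the comparison logic is not part of any Oracle tooling.

  # Illustration of the non-static versioning rule: the desktop JRE must be at
  # or above the version configured on the E-Business Suite web server.
  # The example version strings below are hypothetical.

  def parse_jre_version(version):
      """Turn a JRE version string such as '1.7.0_55' into a comparable tuple."""
      base, _, update = version.partition("_")
      return tuple(int(part) for part in base.split(".")) + (int(update or 0),)

  def satisfies_non_static_versioning(desktop_jre, server_minimum):
      """Any desktop JRE at or above the server's configured version may be used."""
      return parse_jre_version(desktop_jre) >= parse_jre_version(server_minimum)

  print(satisfies_non_static_versioning("1.7.0_65", "1.7.0_55"))  # True
  print(satisfies_non_static_versioning("1.6.0_45", "1.7.0_55"))  # False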

You may continue using Java 6. As an Oracle E-Business Suite customer, you are entitled to Java 6 updates through Extended Support. The latest Java 6 update (6u75) may be downloaded from My Oracle Support. This version (6u75) is equivalent to 7u55 in terms of security fixes.

If you have questions, please contact us at info@integrigy.com.

Tags: Security Strategy and Standards, Oracle E-Business Suite, IT Security
Categories: APPS Blogs, Security Blogs

New Batch Configuration Wizard (BatchEdit) available

Anthony Shorten - Tue, 2014-07-01 16:36

A new configuration facility called Batch Edit is available as part of Oracle Utilities Application Framework V4.2.0.2.0. One of the concerns customers and partners asked us to address was making configuration of the batch architecture simpler and less error prone. A new command line utility has been introduced to allow customers to quickly and easily implement a robust technical architecture for batch. The feature provides the following:

  • A simple command interface to create and manage configurations for clusters, threadpools and submitters in batch.
  • A set of optimized templates to simplify configuration and also promote stability across configurations. Complex configurations can be error prone, which can cause instability. These templates, based on optimal configurations from customers, partners and Oracle's own performance engineering group, simplify the configuration process whilst supporting the flexibility needed to cover implementation scenarios.
  • The cluster interface supports multicast and unicast configurations and adds a new template optimized for single server clusters. The single server cluster is ideal for use in non-production situations such as development, testing, conversion and demonstrations. The cluster templates have been optimized to take advantage of advanced facilities in Oracle Coherence for high availability and to optimize network operations.
  • The threadpoolworker interface allows implementers to configure all the attributes from a single interface, including, for the first time, the ability to create cache threadpools. This special type of threadpoolworker does not run submitters but allows implementations to reduce the network overhead of individual components communicating across a cluster. Cache threadpools provide a mechanism for Coherence to store and relay the state of all the components in a concentrated format and also serve as a convenient conduit for the Global JMX capability.
  • The submitter interface allows customers and implementors to create global and job-specific properties files (a simple illustration of how such layered properties resolve appears after this list).
  • Tagging is supported in the utility to allow groups of threadpools and submitters to share attributes.
  • The utility provides context-sensitive help for all of its functions, parameters and configurations, with advanced help also available.
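As a conceptual illustration of the global and job-specific properties files mentioned above, the following sketch shows how a job-specific file typically overrides values from a global file. It is only an illustration, not the actual Batch Edit implementation, and the property names are hypothetical placeholders.

  # Conceptual sketch: a job-specific submitter configuration overriding global
  # defaults. Property names are hypothetical, not actual framework settings.

  def resolve_properties(global_props, job_props):
      """Merge properties so that job-specific values win over global defaults."""
      merged = dict(global_props)   # start from the global defaults
      merged.update(job_props)      # job-specific entries take precedence
      return merged

  global_props = {"threads": "1", "commit.interval": "200"}
  job_props = {"threads": "8"}      # this job overrides the thread count only

  print(resolve_properties(global_props, job_props))
  # {'threads': '8', 'commit.interval': '200'}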

Over the next few weeks I will be publishing articles highlighting features and functions of this new facility.

More information about Batch Edit is available from the Batch Server Administration Guide shipped with your product and the Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support.

Paper on PeopleSoft Search Technology

PeopleSoft Technology Blog - Tue, 2014-07-01 13:31

This paper has been around for a bit, but some people may not have seen it. As customers move to the latest releases of PeopleTools, they are adopting our new search technology, and this paper can help you understand all aspects of the new search. The paper covers the following subjects:

  1. Getting Started with PeopleSoft Search Technology
  2. Understanding PeopleSoft Search Framework
  3. Defining Search Definition Queries
  4. Creating Query and Connected Query Search Definitions
  5. Creating File Source Search Definitions
  6. Creating Web Source Search Definitions
  7. Creating Search Categories
  8. Administering PeopleSoft Search Framework
  9. Working with PeopleSoft Search Framework Utilities
  10. Working with PeopleSoft Search Framework Security Features
  11. Working with PeopleSoft Search (once it is deployed)

As you can see, this paper covers it all.  It is a valuable resource for anyone deploying and administering the new PeopleSoft search.

Contributions by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Tue, 2014-07-01 12:38
Contributions by Angela Golla, Infogram Deputy Editor

Advisor Webcast Recordings
Did you know that Advisor Webcasts are recorded and available for download?  Topics covered include many Oracle products as well as My Oracle Support.  Note 740966.1 has the details on how the program works.