
Feed aggregator

BusinessTown

Oracle AppsLab - Fri, 2015-01-23 12:53

Maybe you remember Busytown, Richard Scarry’s famous town, from your childhood or from reading it to your kids.

Tony Ruth has created the Silicon Valley equivalent, BusinessTown (h/t The Verge), populated by the archetypes we all know and sometimes love. What do the inhabitants of BusinessTown do? “What Value-Creating Winners Do All Day,” natch.

(Illustration: brogrammers)

Who’s up for a Silicon Valley marathon?

SQLCl - LDAP anyone?

Barry McGillin - Fri, 2015-01-23 09:02
Since we released our first preview of SDSQL, we've made a lot of changes and enhancements to make it more usable. One specific addition is support for LDAP, which some SQL Developer customers use as a standard in their organisations; our first release precluded them from working with it.

Well, to add this, we wanted a way to specify the LDAP strings and then use them in a connect statement. We introduced a command called SET LDAPCON for setting the LDAP connection. You can set it like this at the SQL> prompt:
 set LDAPCON jdbc:oracle:thin:@ldap://scl58261.us.oracle.com:389/#ENTRY#,cn=OracleContext,dc=ldapcdc,dc=lcom  

or set it as an environment variable
 (~/sql) $export LDAPCON=jdbc:oracle:thin:@ldap://scl58261.us.oracle.com:389/#ENTRY#,cn=OracleContext,dc=ldapcdc,dc=lcom  

Then, as long as you know your service name, we'll swap out the #ENTRY# delimiter in the LDAP connection string with your service. We're working on a more permanent way to allow these connections to be registered and used, so they are more seamless.

In the meantime, you can connect to your LDAP service like this:
 BARRY@ORCL>set LDAPCON jdbc:oracle:thin:@ldap://scl58261.us.oracle.com:389/#ENTRY#,cn=OracleContext,dc=ldapcdc,dc=lcom  
BARRY@ORCL>connect barry/oracle@orclservice_test(Emily's Desktop)
Connected
BARRY@PDBOH12>tables
Command=tables
TABLES
TEST

Here's a quick little video of it in action! You can then use the 'SHOW JDBC' command to show what you are connected to.


This is the latest release, which should be online soon; you can download it from here.

When to gather workload system statistics?

Yann Neuhaus - Fri, 2015-01-23 08:40

This month we started to deliver our Oracle Tuning Workshop. And with a new workshop come new questions. We advise giving the optimizer the most accurate statistics we can have. That suggests WORKLOAD statistics are better than NOWORKLOAD ones, because they gather the average number of blocks read per multiblock read rather than using default values. But then the question is: which time period do you choose to gather workload statistics, and with which interval duration?
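Whichever window you pick, the gathering itself is a DBMS_STATS call; a minimal sketch of both modes (the 60-minute interval is an arbitrary choice - pick a period representative of your workload):

-- gather over a fixed interval, in minutes
exec DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 60);

-- or bracket a representative window manually
exec DBMS_STATS.GATHER_SYSTEM_STATS('START');
-- ... let the representative workload run ...
exec DBMS_STATS.GATHER_SYSTEM_STATS('STOP');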

Oracle on GitHub

Marco Gralike - Fri, 2015-01-23 07:09
There has been a lot of activity on https://github.com/oracle lately. Apparently a place to keep…

Oracle Audit Vault - Remedy and ArcSight Integration

Remedy Ticket System Integration

Oracle Audit Vault 12c includes a standard interface for BMC Remedy ticketing systems.  You can configure the Oracle Audit Vault to connect to BMC Remedy Action Request (AR) System Server 7.x.  This connection enables the Oracle Audit Vault to raise trouble tickets in response to Audit Vault alerts. 

Only one Remedy server can be configured for each Oracle Audit Vault installation. After the interface has been configured, an Audit Vault auditor needs to create templates to map and handle the details of the alert. Refer to section 3.6 of the Oracle Audit Vault Administrator’s Guide Release 10.3 (E23571-08, Oracle Corporation, August 2014): http://docs.oracle.com/cd/E23574_01/admin.103/e23571.pdf.

HP ArcSight Integration

HP’s ArcSight Security Information Event Management (SIEM) system is a centralized system for logging, analyzing, and managing messages from different sources.  Oracle Audit Vault can forward messages to ArcSight SIEM.

No additional software is needed to integrate with ArcSight.  Integration is done through configurations in the Audit Vault Server console.

Messages sent to the ArcSight SIEM Server are independent of any other messages sent from the Audit Vault (e.g., other Syslog feeds). 

There are three categories of messages sent:

  • System - syslog messages from subcomponents of the Audit Vault Server
  • Info - specific change logging from the Database Firewall component of Oracle AVDF
  • Debug - a category that should only be used under the direction of Oracle Support

If you have questions, please contact us at info@integrigy.com.

Tags: Auditing, Security Strategy and Standards, Oracle Audit Vault
Categories: APPS Blogs, Security Blogs

Everybody Says “Hackathon”!

Tugdual Grall - Fri, 2015-01-23 04:23
TL;DR: MongoDB & Sage organized an internal hackathon. We used the new X3 platform, based on MongoDB, Node.js and HTML, to add cool features to the ERP. This shows that “any” enterprise can (and should) do it: look differently at software development, build strong team spirit, and have fun! Introduction: I have, like many of you, participated in multiple hackathons where developers, designers and…

A Personal Victory: Oracle Database Sample Schemas are on GitHub

Christopher Jones - Fri, 2015-01-23 00:15

For anyone who ever deleted a row from a table in Oracle's Sample HR schema and wanted it back, help is nearby. You no longer have to download the full "Oracle Database 12c Release 1 Examples" zip (499,228,127 bytes worth for the Linux bundle) and run the Oracle installer. Now you can clone our GitHub db-sample-schema repository and run the creation SQL scripts in SQL*Plus.
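A hypothetical session (the script name and its prompts are assumptions based on the repository layout at the time of writing; check the repo's README for the exact steps):

git clone https://github.com/oracle/db-sample-schemas.git
cd db-sample-schemas/human_resources
sqlplus system @hr_main.sql
# hr_main.sql prompts for the HR password, tablespaces and a log directory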

This new repository installs these six sample schemas:

  • HR: Human Resources
  • OE: Order Entry
  • PM: Product Media
  • IX: Information Exchange
  • SH: Sales History
  • BI: Business Intelligence

Because of the widespread use of these schemas, we made minimal changes to the bundle. The install, as given, installs all schemas and needs to be done on a database server, since file system access is needed from the database.

But now, if you want, you can fork the repo and modify it to install just the HR schema from a client machine. Or change your fork to install the HR schema into an arbitrary user name of your choice, so multiple people can test the same data set. And what about modifying the script to DROP TRIGGER SECURE_EMPLOYEES, getting rid of that annoying time-based trigger which yells 'You may only make changes during normal office hours' if you try to make changes after 6pm or on weekends? It may be a great teaching tool for triggers, but it's not useful when you are configuring demonstrations for big conferences late into the night!
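In a fork, that change can be a one-line addition to the HR creation script (assuming the default HR schema name):

DROP TRIGGER hr.secure_employees;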

And why is this a personal victory? Because, as a client tool person, finding these schema creation scripts has irked me in the past. The HR schema replaced SCOTT/TIGER in the Oracle documentation a long time ago but was not easily available to use. I've written a lot of examples using HR but never had a good way to explain how to install the schema. I'm glad to have helped (being partially modest here about the legal and administrative work it required) get this small set of scripts out on GitHub. If it makes it easier for someone to talk about features or issues by reference to a common data set, then my job is done. Having the scripts readily available is also a reminder to the Oracle community to share information and knowledge efficiently. Even as we head toward a world of cloneable databases and snapshots, sometimes it is just easier to run a SQL script.

This repo is one piece of a jigsaw, to be used where it fits. The schemas could now be considered "traditional". In the future, Oracle Database teams will continue to create fresh data sets to show off newer and upcoming database features, such as these analytical-sql-examples that you might be interested in.

JSON for APEX Developers (part 2)

Dimitri Gielis - Thu, 2015-01-22 17:30
In the previous post we created a service that exposes our data in JSON format.
Now let's focus on consuming that JSON. In this post I want to show how to use JSON data in the client (your browser), in a future post I'll show how to use JSON on the server (in the database).
If you want to play with JSON, open the console of your browser and create some text in JSON format - easy to do - and use the JSON.parse() function to create an object from it:
var text = '{"items": {"emp":[{"empno":7369, "ename":"SMITH"},{"empno":7499, "ename":"ALLEN"} ]}}';var obj = JSON.parse(text);  
As you will probably make a call somewhere to get JSON, let's move on with such an example: we will call the service we created with ORDS in the previous post and use that data in our APEX page.
So edit your APEX page, and in the JavaScript section "Execute when Page Loads" put the following (note: you can use your own url that generates the JSON):
$.getJSON("https://www.apexrnd.be/ords/training/emp_json/", function(json) {
  console.log(json);
});
We just called the url from JavaScript and output the result to the browser console, so we can see what we got back:

We see the JSON (JavaScript object/array) we got back from the url. Note that the array index starts at 0 (not 1).
We can now do anything we want with that data. Here we set some items on the page with the values of the first employee:
$('#P10_EMPNO').val(json.items[0].empno);
$('#P10_ENAME').val(json.items[0].ename);
A more interesting example is integrating Flickr photos into your web page. The concept is the same: call a url and, once the response is received, loop over the array (see .each) and create image tags on the fly on your page:
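A minimal sketch of that pattern (the feed url and its media.m thumbnail field come from Flickr's public JSON feed; #photos is a placeholder container on your page):

$.getJSON("https://api.flickr.com/services/feeds/photos_public.gne?format=json&jsoncallback=?",
  function(data) {
    $.each(data.items, function(i, item) {
      // create an image tag on the fly and append it to the container
      $("<img>").attr("src", item.media.m).appendTo("#photos");
      if (i === 3) return false; // only show the first few photos
    });
  });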

Another example would be when you want to include a visualisation in your page that needs its data in JSON format... You could do that with an AJAX call, for example (application process, plugin, ...), but that is for another post.

Hopefully this post showed how you can interact with JSON within your APEX page by using JavaScript. You can find the online example at https://www.apexrnd.be/ords/f?p=DGIELIS_BLOG:JSON
Categories: Development

JSON for APEX Developers (part 1)

Dimitri Gielis - Thu, 2015-01-22 15:30
After my post Generate nested JSON from SQL with ORDS and APEX 5, I got some requests to explain more about REST and JSON, so let me start with JSON. I'll go deeper into REST in some future posts.

JSON stands for JavaScript Object Notation; it's a text-based format to store and transport data.

It all comes down to exchanging data and finding a format that can easily be used by the "client" that needs to do something with the data. In the past, XML (and SOAP) was used a lot to fill that need: between tags you found your data. With JSON it's much the same, but because many "clients" are now web pages, it makes sense to use something that a browser can consume very easily.

Here's an example of how it was with XML:

<items>
  <emp>
    <empno>7369</empno>
    <ename>SMITH</ename>
  </emp>
  <emp>
    <empno>7499</empno>
    <ename>ALLEN</ename>
  </emp>
</items>
The above XML looks like this in JSON:
{"items": {  "emp":[    {"empno":7369, "ename":"SMITH"},    {"empno":7499, "ename":"ALLEN"}      ]}}
To generate the XML, Oracle built support straight into the database. Here's a SQL statement that does it for you:
SELECT XMLELEMENT("items",
         XMLAGG(
           XMLELEMENT("emp",
             XMLFOREST(
               e.empno AS "empno",
               e.ename AS "ename")
           )
         )) AS employees
FROM   emp e
Generating JSON from within the Oracle database takes a bit more effort. You can find some nice posts by Morton and Lucas with examples of generating JSON with SQL. If we use the listagg technique, our query looks like this:

select '{"items": { "emp":[' 
       || listagg( '{ '
       ||' "empno":"'||empno||'"'
       ||',"ename":'||ename
       ||'} ', ',') within group (order by 1) 
       || ']} }'as  json
  from   emp

Oracle Database 12.1.0.2 has JSON support, but that is more for consuming JSON than for generating it. As said in my previous post, APEX 5 has a nice package to generate JSON, or you can use ORDS to generate it.
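As an illustration of the APEX package route, here is a minimal APEX_JSON sketch that produces the same structure as the example above (a sketch against the APEX 5 APEX_JSON API; output goes to the HTP buffer by default):

begin
  apex_json.open_object;                -- {
  apex_json.open_object('items');       --   "items": {
  apex_json.open_array('emp');          --     "emp": [
  for r in (select empno, ename from emp) loop
    apex_json.open_object;              --       {
    apex_json.write('empno', r.empno);
    apex_json.write('ename', r.ename);
    apex_json.close_object;             --       }
  end loop;
  apex_json.close_array;                --     ]
  apex_json.close_object;               --   }
  apex_json.close_object;               -- }
end;
/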
Let's look step-by-step at how we can generate the JSON using ORDS.

In APEX, go to SQL Workshop > RESTful Services and hit the CREATE button and fill in the details as below:


Once you hit Create Module, a REST web service is generated; but more importantly for this post, you now have a url that you can give to somebody to get the data in JSON format:

There are many options for this service; if you don't want pagination, put a 0 in Pagination Size, and if you don't run HTTPS, set Require Secure Access to No.

By running the url https://www.apexrnd.be/ords/training/emp_json/ we now see our data in JSON.
In the next post we will consume that data in our web page.
Categories: Development

Oracle Priority Support Infogram for 22-JAN-2015

Oracle Infogram - Thu, 2015-01-22 14:59

RDBMS
Non-CDB architecture of Oracle databases is DEPRECATED since Oracle Database 12.1.0.2, from Upgrade your Database – NOW!
From Oracle related stuff: Video Tutorial: XPLAN_ASH Active Session History - Part 2.
From Martin’s Blog: How to resolve the text behind v$views?
Exalogic
Exalogic Elastic Cloud 12c Software and X5-2 Hardware Launch, from the Oracle Exalogic blog.
IoT
From the Oracle PartnerNetwork Strategy Blog: Top Five IoT Predictions for 2015 – Part I
OpenWorld
Looking back at OOW 2014: Oracle Open World, 2014 – Free Behind the Scenes Videos, from Oracle University.
OpsCenter
Enabling and Testing ASR, from the OpsCenter blog.
Fusion
Fusion Middleware Proactive Patches Released, from Oracle Fusion Middleware Proactive Support Delivery.
Developers
From Developing using Oracle technologies: Node.js and Oracle Database
Java
From The Java Source: Learn About Wearables and Java
Better CDI Alignment in JPA 2.1/Java EE 7, from The Aquarium.
BI
Oracle Business Intelligence Applications Version 11g Performance Recommendations, from the Business Analytics – Proactive Support blog.
BPM
BPM 10g-12c Migration: Handling Excel Files as Input by Mark Foster, from the SOA & BPM Partner Community Blog.
WebCenter
Oracle WebCenter Content (WCC) 11.1.1.8.9 Bundle Patch, from Proactive Support – WebCenter Content.
EBS
From Oracle E-Business Suite Technology:
Using SHA-2 Signed Certificates with EBS
JRE 1.8.0_31 Certified with Oracle E-Business Suite
JRE 1.7.0_75 and 1.7.0_76 Certified with Oracle E-Business Suite
Java JRE 1.6.0_91 Certified with Oracle E-Business Suite
Critical Patch Update for January 2015 Now Available
Internet Explorer 11 Certified with E-Business Suite 12.2 and 12.0
From Oracle E-Business Suite Support Blog:
Webcast: Mastering Component Pick Release Process in Work in Process

Webcast: OPM Financials (GMF) Period Close Process

Mash up Oracle Cloud Application Web Services with Web APIs and HTML5 APIs

Oracle AppsLab - Thu, 2015-01-22 13:48

No longer an “honorary” member but now a full-blown member of the AppsLab team, I gave a presentation at the Chicago & Dubai Oracle Usability Advisory Board in November on REST and Web APIs and how they can facilitate the transition from on-premise software to cloud-based solutions (the content of which can be fodder for a future post).

As we all transition from on-premise implementations to cloud-based solutions, there seems to be a growing fear among customers and partners (ISVs, OEMs) alike that they will lose the ability to extend these cloud-based applications. After all, they no longer have access to the server to deploy and run their own reports/forms/scripts.

I knocked up a very simple JavaScript client-side application as part of my presentation to prove my point: (well-designed) REST APIs and modern JavaScript frameworks make it trivial to create new applications on top of existing backend infrastructure and add functionality that is not present in the original application.

My example application is based on existing Oracle Sales Cloud Web Services. I added the capability to tweet, send text messages (SMS) and make phone calls straight from my application, and speech-enabled the UI. Although you can debate the usefulness of how I am using some of these features, that was obviously not the purpose of this exercise.

Instead, I wanted to show that, with just a few lines of code, you can easily add these extremely complex features to an existing application. When was the last time you wrote a bridge to the Public Switched Telephone Network or a Speech synthesizer that can speak 25 different languages?

Here’s a 40,000 foot view of the architecture:

High-level view of the demo app architecture

The application itself is written as a Single Page Application (SPA) in plain JavaScript. It relies heavily on open source JavaScript libraries, available for free, that add functionality like declarative DOM binding and templating (knockout.js), ES6-style Promises (es6-promise.js), AMD loading (require.js), etc. I didn't have to do anything to add all this functionality (other than including the libraries).

It makes use of the HTML5 Speech Synthesis API, which is now available in most modern browsers to add Text-to-Speech functionality to my application.  I didn’t have to do anything to add all this functionality.

I also used the Twitter APIs to be able to send tweets from my application and the Twilio APIs to be able to make phone calls and send SMS text messages from my application.  I didn’t have to do anything to add all this functionality.  Can you see a theme emerging here?

Finally I used the Oracle Sales Cloud Web Services to display all the Business Objects I wanted to be present in my application, Opportunities, Interactions and Customers.  As with the other pieces of functionality, I didn’t have to do anything to add this functionality!

You basically get access to all the functionality of your CRM system through these web services, where available; not every piece of functionality is exposed through web services.

Note that I am not accessing the Web Services directly from my JS; I go through a proxy server in order to adhere to the browser's same-origin policy restrictions. The proxy also decorates the Oracle Applications SOAP services as REST endpoints. If you are interested in how to do this, you can have a look at mine, it's freely available.

For looks, I am using some CSS that makes the application look like a regular ADF application. Of course you don't have to do this; you can use Bootstrap, for example, if you prefer. The point is that you can make this application look however you want. As I am presenting this as an extension to an Oracle Cloud Application, I would like it to look like any other Oracle Cloud Application.

With all these pieces in place, it is now relatively easy to create a new application that makes use of all this functionality.  I created a single index.html page that bootstraps the JS application on first load.  Depending on the menu item that is clicked, a list of Customers, Opportunities or Interactions is requested from Oracle Sales Cloud, and on return, those are laid out in a simple table.

For demonstration purposes, I provided switches to enable or disable each feature. Whenever a feature is enabled and the user clicks on something in the table, I trigger the phone call, SMS sending, speech or tweet, whichever is enabled. For example, here is the code to do Text-to-Speech using the HTML5 Speech Synthesis API, currently available in WebKit browsers, so use Safari or Chrome (mobile or desktop). And yes, I have feature detection in the original code; I just left it out to keep the code simple:

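A minimal equivalent sketch (the opportunity.name field is a placeholder for whatever text you want spoken):

// speak some text using the HTML5 Speech Synthesis API
var utterance = new SpeechSynthesisUtterance('Opportunity ' + opportunity.name);
utterance.lang = 'en-US'; // pick one of the available voices/languages
window.speechSynthesis.speak(utterance);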

Ditto for the SMS sending using the Twilio API:

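A sketch of the equivalent call routed through the proxy (the proxy route and phone numbers are placeholders; the Twilio credentials stay server-side in the proxy):

// send an SMS by POSTing to Twilio's Messages resource via the proxy
$.post('/proxy/twilio/Messages.json', {
  To:   '+15005550006',
  From: '+15005550001',
  Body: 'Update on opportunity: ' + opportunity.name
});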

And calling somebody, using the Phone Call API from Twilio, using the same user and twilio object from above:

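Again a sketch via the proxy; the Url parameter points to TwiML instructions that tell Twilio what to do once the call connects (all values are placeholders):

// place a phone call by POSTing to Twilio's Calls resource via the proxy
$.post('/proxy/twilio/Calls.json', {
  To:   '+15005550006',
  From: '+15005550001',
  Url:  'https://example.com/twiml/voice.xml'
});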

The tweeting is done by adding the tweet button to the HTML, dynamically filling in the tweet’s content with some text from the Opportunity or Interaction.

Here is a screencast of the application in action:

As I mentioned earlier, how I am using these APIs might not be particularly useful, but the point is to show how easy it is to integrate this functionality with Oracle Cloud Applications and extend them beyond what is delivered out of the box. It probably makes more sense to use Twilio to call or text a contact attached to the opportunity or interaction, rather than me. Or to tweet when an opportunity moves to a “win” status. The possibilities are literally endless, but I leave that up to you.

Happy Coding!

Mark.

Video Tutorial: XPLAN_ASH Active Session History - Part 2

Randolf Geist - Thu, 2015-01-22 13:45
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality has been published. In this part I begin the actual walk-through of the script output.

More parts to follow.


New Version Of XPLAN_ASH Utility - In-Memory Support

Randolf Geist - Thu, 2015-01-22 13:42
A new version 4.21 of the XPLAN_ASH utility is available for download. I publish this version because it will be used in the recent video tutorials explaining the Active Session History functionality of the script.

As usual the latest version can be downloaded here.

This is mainly a maintenance release that fixes some incompatibilities of the 4.2 version with less recent versions (10.2 and 11.2.0.1).

As an extra however, this version now differentiates between general CPU usage and in-memory CPU usage (similar to 12.1.0.2 Real-Time SQL Monitoring). This is not done in all possible sections of the output yet, but the most important ones are already covered.

So if you already use the 12.1.0.2 in-memory option this might be helpful to understand how much of your CPU time is spent on in-memory operations vs. non in-memory. Depending on your query profile you might be surprised by the results.

Here are the notes from the change log:

 - Forgot to address a minor issue where the SET_COUNT determined per DFO_TREE (either one or two slave sets) is incorrect in the special case of DFO trees having only S->P distributions (pre-12c style). Previous versions used a SET_COUNT of 2 in such a case which is incorrect, since there is only one slave set. 12c changes this behaviour with the new PX SELECTOR operator and requires again two sets.

- For RAC Cross Instance Parallel Execution specific output some formatting and readability was improved (more linebreaks etc.)

- Minor SQL issue fixed in "SQL statement execution ASH Summary" that prevented execution in 10.2 (ORA-32035)

- The NO_STATEMENT_QUEUING hint prevented the "OPTIMIZER_FEATURES_ENABLE" hint from being recognized, therefore some queries failed in 11.2.0.1 again with ORA-03113. Fixed

- "ON CPU" now distinguishes between "ON CPU INMEMORY" and "ON CPU" for in-memory scans

Oracle EBS SYS.DUAL PUBLIC Privileges Security Issue Analysis (CVE-2015-0393)

Oracle E-Business Suite environments may be vulnerable due to excessive privileges granted on the SYS.DUAL table to PUBLIC. This security issue has been resolved in the January 2015 Oracle Critical Patch Update (CPU) and has been assigned the CVE tracking identifier CVE-2015-0393. The problem may impact all Oracle E-Business Suite versions, including 11.5, 12.0, 12.1, and 12.2. Recent press reports have labeled this vulnerability a “major misconfiguration flaw.” The security issue is actually broader than just the INDEX privilege being reported in the press, and there may be at least four independent attack vectors depending on the granted privileges. Fortunately, this issue does not affect all Oracle E-Business Suite environments - Integrigy has identified it in only a small number of Oracle E-Business Suite environments in the last three years.

Integrigy has published information on how to validate whether this security flaw exists in your environment and how to remediate the issue. The remediation can be done without applying the January 2015 CPU.

For more information, see Integrigy’s in-depth security analysis "Oracle EBS SYS.DUAL PUBLIC Privileges Security Issue Analysis (CVE-2015-0393)".

 

Tags: Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

Now Available: On Demand Webcast - Next-Gen ECM in the Cloud

WebCenter Team - Thu, 2015-01-22 10:50

Earlier this week, we shared a recently published video showcasing why digital businesses are looking at next-generation Enterprise Content Management (ECM) solutions that bridge the gap between cloud and on-premise, providing a single source of record yet the flexibility to collaborate and engage both within and outside the firewall, securely and at any time. Details on what a next-gen ECM solution is were also shared in a webcast we did a few weeks back. In case you missed it or would like to catch it again, the webcast is now available on demand. Check it out and, well.....engage!

Key to Digital Business Transformation: Getting Content Management Right in the Cloud


Digital business is the catalyst for the next wave of revenue growth, business efficiency and service excellence. Business success depends on collaboration both inside and outside of the organization – with customers, partners, suppliers, remote employees – at anytime, anywhere, on any device.

At the forefront are organizations that are going beyond 1st generation Content Cloud solutions and adopting the Next-Gen of ECM in the Cloud to:
  • Speed decisions by quickly and easily liberating enterprise content and “mobilizing” information in the cloud
  • Deliver best-in-class customer experiences that are frictionless, innovative and secure
  • Avoid unnecessary security risks and proliferation of new information silos associated with first-gen content cloud offerings
Watch our on-demand webcast to discover the unique benefits Oracle Hybrid Enterprise Content Management provides for today’s digital business.


Memory management, OOM issues with SQL Server 2014 In-memory OLTP

Yann Neuhaus - Thu, 2015-01-22 08:47

Last week I gave a workshop about SQL Server 2014 and its new features. On the first day we worked on the new In-Memory OLTP feature and different topics such as the new internal storage, the new transaction processing behavior and the new checkpointing process. During this day, one of the attendees asked me about memory management with the In-Memory OLTP feature. It was a very interesting question, but unfortunately I didn't have the time to discuss it with him, so I decided to publish something on this topic. This subject can be extensive and time-consuming, so I will only try to give a good overview of how memory management works for memory-optimized objects, and of how important the monitoring aspect is in this particular context.

First of all, keep in mind that memory-optimized tables are a memory-resident feature: memory-optimized structures (indexes and data rows) reside exclusively in memory. This is by design, and this point is very important - let me explain why later in this blog post.

For the moment, let's focus on the memory aspects of this new feature. In-Memory OLTP is no different from other memory consumers in SQL Server. Indeed, In-Memory OLTP objects have their own memory clerk, MEMORYCLERK_XTP. Let's have a look at the sys.dm_os_memory_clerks DMV to show information concerning the memory allocated to In-Memory OLTP.


select
    [type],
    name,
    memory_node_id,
    page_size_in_bytes,
    pages_kb / 1024 as size_MB
from sys.dm_os_memory_clerks
where [type] like '%xtp%';
go

 

(Screenshot: XTP memory clerks from sys.dm_os_memory_clerks)


In my case we may notice that the database dbi_hk (DB_ID = 24) contains memory-optimized objects, with a dedicated memory clerk for it. The other XTP memory clerks are dedicated to system threads (first line) and the DAC (last line), but let's focus on my user database's memory clerk, which has 2336MB of page memory allocated.

In my lab environment, I have only one memory-optimized table, named bigTransactionHistory_xtp, inside the dbi_hk database. Let's have a look at the new DMV sys.dm_db_xtp_table_memory_stats to show memory information for this table:

 

select
    object_name(object_id) as table_name,
    memory_allocated_for_indexes_kb / 1024 as mem_alloc_index_mb,
    memory_allocated_for_table_kb / 1024 as mem_alloc_table_mb,
    memory_used_by_indexes_kb / 1024 as mem_used_index_mb,
    memory_used_by_table_kb / 1024 as mem_used_table_mb,
    (memory_allocated_for_table_kb + memory_allocated_for_indexes_kb) / 1024 as mem_alloc_total_mb,
    (memory_used_by_table_kb + memory_used_by_indexes_kb) / 1024 as mem_used_total_mb
from sys.dm_db_xtp_table_memory_stats
where object_id = object_id('bigTransactionHistory_xtp');
go

 

 

(Screenshot: sys.dm_db_xtp_table_memory_stats output for bigTransactionHistory_xtp)


We would expect to retrieve the same amount of allocated page memory here as in the dedicated memory clerk of the dbi_hk database. This is approximately the case. The difference probably corresponds to memory allocated for internal system structures. We could have a look at the related DMV sys.dm_db_xtp_memory_consumers, but I will focus on it in a future blog post.

At this point we know where to find information concerning the memory consumption of memory-optimized objects, but I still have one question in mind: how does the SQL Server memory manager deal with concurrent memory demands from memory-optimized tables and their disk-based counterparts? Like any other memory consumer, the In-Memory OLTP engine responds to memory pressure, but only to a limited degree, because the memory consumed by data and indexes can't be released even under memory pressure.

To deal correctly with the In-Memory OLTP engine and other consumers, we have to turn to the resource governor (RG) side. Indeed, by default all databases are mapped to the default resource pool, regardless of whether RG is enabled. In the same way, workloads issued from both disk-based tables and memory-optimized tables will run concurrently against the default resource pool if no special configuration is performed. In that case, RG uses an internal threshold for In-Memory OLTP to avoid conflicts over pool usage. The threshold depends on the memory configured for SQL Server, and especially on the target commit memory for the SQL Server instance. You can refer to the Microsoft documentation here for more details.

So, in my case the max server memory setting is configured to 6144MB and the target committed memory is as follows:

 

select
    committed_target_kb / 1024 as committed_target_mb
from sys.dm_os_sys_info;

 

(Screenshot: target committed memory)

 

According to the Microsoft documentation (cf. link above), the percentage available for in-memory tables will be 70%, or 0.7 * 4898 = 3429MB. I can retrieve this information by using the DMVs related to the RG. You can find an original version of this script on the MSSQLTips.com website.

 

;with cte as
(
    select
        RP.pool_id,
        RP.Name,
        RP.min_memory_percent,
        RP.max_memory_percent,
        cast(RP.max_memory_kb / 1024. as numeric(12, 2)) as max_memory_mb,
        cast(RP.used_memory_kb / 1024. as numeric(12, 2)) as used_memory_mb,
        cast(RP.target_memory_kb / 1024. as numeric(12, 2)) as target_memory_mb,
        cast(SI.committed_target_kb / 1024. as numeric(12, 2)) as committed_target_mb
    from sys.dm_resource_governor_resource_pools RP
    cross join sys.dm_os_sys_info SI
)
select
    c.pool_id,
    c.Name,
    c.min_memory_percent,
    c.max_memory_percent,
    c.max_memory_mb,
    c.used_memory_mb,
    c.target_memory_mb,
    c.committed_target_mb,
    cast(c.committed_target_mb * case when c.committed_target_mb <= 8192 then 0.7
                                      when c.committed_target_mb <= 16384 then 0.75
                                      when c.committed_target_mb <= 32768 then 0.8
                                      when c.committed_target_mb <= 98304 then 0.85
                                      when c.committed_target_mb > 98304 then 0.9
                                 end * c.max_memory_percent / 100 as numeric(12, 2)) as Max_for_InMemory_Objects_mb,
    cast(c.committed_target_mb * case when c.committed_target_mb <= 8192 then 0.7
                                      when c.committed_target_mb <= 16384 then 0.75
                                      when c.committed_target_mb <= 32768 then 0.8
                                      when c.committed_target_mb <= 98304 then 0.85
                                      when c.committed_target_mb > 98304 then 0.9
                                 end * c.max_memory_percent / 100 as numeric(12, 2)) - c.used_memory_mb as Free_for_InMemory_Objects_mb
from cte c;
go

 

(Screenshot: RG memory available for in-memory tables)


Ok, I retrieve (approximately) this value by looking at the Max_for_InMemory_Objects_mb column on the default pool row. Notice that 2008MB is already used in the default resource pool.

At this point, In-Memory OLTP and disk-based OLTP workloads run concurrently on the same resource pool, and of course this is not a recommended situation. Indeed, we may end up in a situation where In-Memory OLTP consumes all the available memory in this pool. In such a situation, SQL Server will be forced to flush data pages of disk-based tables, and you know the performance impact of that process.

Go ahead and let’s create an issue you can faced with In-Memory OLTP and misconfigured environments. First we decrease the max memory setting value to 4096MB and then we load another bunch of data into bigTransactionHistory_xtp table to consume an important part of the available memory dedicated to memory-optimized objects in the default resource pool. Finally let’s have again a look at the RG memory configuration by using the previous script. We have now a good picture of changes applied after our reconfiguration:

 

(Screenshot: RG memory available for in-memory tables after reconfiguration)

As expected, several values have changed: the target memory, the memory available for memory-optimized tables, and the memory used in the default resource pool. The new available memory value for the resource pool is now 1605MB (3891MB – 2286MB). I let you imagine a bad situation where your memory-optimized table consumes all the available memory inside the default resource pool... the consequences are obvious (even if they depend on the context): probably a lot of memory pressure between the buffer pool consumer and the In-Memory OLTP consumer, and in the worst case a potential OOM issue like the following:

 

(Screenshot: OOM condition on the default resource pool)

 

After loading data into bigTransactionHistory_xtp, we can see that we have consumed all the available memory for in-memory objects in the default resource pool. However, as said earlier, RG guarantees a certain amount of memory for disk-based tables.

 

(Screenshot: RG memory available, OOM condition)


Ok now let’s simulate a crash recovery scenario by restarting the SQL Server instance. In my case the SQL Server engine service didn’t restart correctly… ouch... What’s going on? Of course my first though was to take a look directly on the error log of my SQL Server instance. The first error message I encountered was as follows:

 

(Screenshot: error log - page allocation failure during dbi_hk recovery at instance start)

 

Ok... it seems there is an issue during the recovery process of the dbi_hk database. During recovery, one step consists of building the index structures and linking the data rows to them. You can see that this step fails with an OOM (Out Of Memory) issue.

 

(Screenshots: OOM memory status dump - “process / system counts” and “memory manager” sections)


In this second part, we have interesting information concerning our OOM issue. First of all, in the “process / system counts” section, we may notice that SQL Server had to deal with internal memory pressure (process physical memory low = 1), so we can exclude external memory pressure. Then, in the “memory manager” section, we have two additional entries: Last OOM Factor and Page Alloc Potential. The former confirms an OOM (out of memory) issue in the memory manager. The latter shows a negative value, which indicates that the buffer pool does not have any free memory, so our assumption of internal memory pressure is correct. As a reminder, Page Alloc Potential is similar to Stolen Potential in previous versions of SQL Server.

Let’s continue and point out the memory clerks which are responsible for the memory pressure. By investigating down into the log file, I found two relevant memory clerks with a lot of pages allocated as shown above:

 

(Screenshots: memory clerk status in the error log)


As expected, the first memory clerk concerns In-Memory OLTP (XTP stands for eXtreme Transaction Processing) and the second is related to the log pool manager, which is heavily used during recovery processing. The two memory clerks, at the time of the OOM issue, have a total size of 3.7GB. This does not leave much room for the remaining caches in the default resource pool. Finally, the end of the error log contains the following error messages, which confirm that SQL Server is missing memory for its default resource pool.

 

(Screenshot: default resource pool out-of-memory error messages)


According to the Microsoft documentation on resolving OOM issues in In-Memory OLTP scenarios, the number of solutions is very limited. In my case, I started the SQL Server engine with the -f parameter to load a minimal configuration, and then increased the amount of memory available to In-Memory OLTP by increasing the max server memory option on the server side. This fix avoids facing the same issue on the next restart of my SQL Server engine service.

Is it possible to fix this OOM condition for good? The answer is yes: we have to configure a resource pool with memory limits and bind our memory-optimized database to it. This is another story and I let you check the Microsoft documentation! My intention in this blog post is only to create awareness of the importance of good memory management with the new In-Memory OLTP feature.
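For the curious, the shape of that fix looks roughly like this (a sketch only, not a tuned configuration; the pool name and percentage are arbitrary):

create resource pool pool_xtp with (max_memory_percent = 50);
alter resource governor reconfigure;
-- bind the memory-optimized database to the dedicated pool;
-- the binding takes effect once the database is brought back online
exec sp_xtp_bind_db_resource_pool 'dbi_hk', 'pool_xtp';
alter database dbi_hk set offline;
alter database dbi_hk set online;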

Happy configuration!

EBS 12.2 Essential Bundle Fixes for AD Delta 5 and TXK Delta 5 (Doc ID 1934471.1)

Senthil Rajendran - Thu, 2015-01-22 07:38
EBS 12.2 Essential Bundle Fixes for AD Delta 5 and TXK Delta 5 (Doc ID 1934471.1)

If any of the features below are relevant to your deployment, please review the doc and apply the essential bundle patches on your 12.2.5 environment. Hopefully this helps to stabilize your environment.

Section 4: Features and Fixes in the Current Code level
The bundle fixes include implementation of the following AD and TXK features and fixes.

4.1: AD Features and Fixes

  • The database connection module has been enhanced such that the former multiple connections during adop execution have been reduced to only two connections for all embedded SQL actions.
  • Concurrency issues during multi-node configuration have been fixed.
  • Redundancy issues have been addressed:
    • When calling validation on all nodes.
    • Unnecessary calls to the TXK API have been removed from the cleanup phase.
    • Time-consuming database actions have been centralized, instead of being performed on all nodes.
  • Multinode logic has been changed to depend on a new table, adop_valid_nodes, instead of fnd_nodes.
  • An issue where AD Admin and AD Splice actions were not synchronized on shared slave nodes has been fixed.
  • Reporting capabilities have been improved for:
    • Abandon nodes and failed nodes.
    • Uncovered objects not being displayed after actualize_all in adopreports.
    • Out of sync nodes during fs_clone and abort.
  • Cutover improvements:
    • Restartability of cutover.
    • An obsoleted materialized view has been removed from processing during cutover.
  • xdfgen.pl has been enhanced to support execution against Oracle RAC databases where ipscan is enabled.
  • Support for valid comma-separated adop phases has been provided.
  • Several database-related performance issues have been fixed.
  • Improvements have been made in supporting hybrid, DMZ, non-shared, and shared configurations.
  • The adop utility has been enhanced to support host names that contain the domain name.

4.2: TXK New Features and Fixes

  • Enhancements have been made to the provisioning tools used in multi-tier environments to perform operations such as adding or deleting nodes and adding or deleting managed servers.
  • An enhancement has been made to allow customization of the s_webport and s_http_listen_parameter context variables when adding a new node.
  • Performance improvements have been made for cloning application tier nodes, particularly in the pre-clone and post-clone phases.
  • Fixes related to cloning support for Oracle 12c Database have been provided.
  • Performance improvements have been made for managing application tier services, including implementation of the Managed Server Independence Mode feature (-msimode parameter to adstrtal.sh) to allow application tier services to be started or stopped without the WebLogic Administration Server running.
  • On a multi-node application tier system configuration, remote connectivity is no longer required for packaging the Oracle E-Business Suite WebLogic Server domain.
  • JVM heap size (-Xms and -Xmx) has been increased to 1 GB for the WebLogic Administration Server and all managed servers.


Backdoor vulnerability puts Oracle database users at risk

Chris Foot - Thu, 2015-01-22 00:37

Companies using Oracle's database engine to support their enterprise application and information storage needs should consider consulting Oracle experts to help them patch a bug that could allow infiltrators to completely take over their systems. 

Researcher identifies misconfiguration
Forbes contributor Thomas Fox-Brewster noted that Australian security researcher and hacker David Litchfield discovered a vulnerability that would allow any user to receive privileges reserved for system administrators. This means a hacker could change user passwords, transfer financial information across the Web and perform a number of other actions.

"They have no record of the change, no documentation as to why one of their devs did it," said Litchfield in an email to Forbes.

It is likely Oracle is conducting an investigation into how this flaw managed to fall through the cracks. Apparently, this bug and 10 others were fixed on Jan. 21, 2015. For enterprises using Oracle's E-Business Suite, having an outside party conduct a thorough assessment of all user activity is a safe step to take. Any hints of malicious activity sanctioned by an index created on the DUAL table could indicate an instance in which a public user managed to manipulate the engine.

What defines a "backdoor" vulnerability? 
The flaw discovered by Litchfield is classified as a "backdoor" flaw. This particular kind of bug allows a malicious actor to bypass normal authentication protocols and obtain remote access to a machine or application while remaining undetected. Some backdoor vulnerabilities are relatively easy to exploit, which exacerbates their severity.

Software receives the brunt of attention from organizations in regard to backdoor flaws, but hardware isn't exempt either. The Next Web contributor Josh Ong noted that espionage agencies in the United Kingdom, Australia and the United States apparently banned the use of Lenovo PCs due to remote access bugs. However, this conclusion has been regarded as unsubstantiated.

Yet Ong cited a paper released by the Australian Financial Review that said intelligence entities banned the machines in the mid-2000s "after intensive laboratory testing of its equipment allegedly documented 'back-door' hardware and 'firmware' vulnerabilities in Lenovo chips." Specifics regarding these flaws or the alleged bans have not been disclosed to the public. 

Either real or perceived, it's important to have a team of analysts specializing in databases, operating systems and business applications sweep these assets for backdoor flaws. 

The post Backdoor vulnerability puts Oracle database users at risk appeared first on Remote DBA Experts.

Map OS Groups To Administrative Privileges After Installation

Oracle in Action - Wed, 2015-01-21 23:10

RSS content

When installing the database software, the user is prompted to enter the names of the operating system groups mapping to the various administrative privileges (SYSDBA, SYSOPER, SYSBACKUP, SYSKM, SYSDG). One might map a single operating system group to multiple administrative privileges if role separation is not desired. If the need for role separation arises later, the mapping can be changed by updating the $ORACLE_HOME/rdbms/lib/config.c file and then relinking the Oracle binaries. This post explains the various steps.

While installing the 12.1.0.2 database software on Linux, I had not created OS groups corresponding to the administrative privileges SYSBACKUP, SYSKM and SYSDG. Now I want the OS groups dgdba, backupdba and kmdba to map to the SYSDG, SYSBACKUP and SYSKM administrative privileges respectively.

-- Check that groups dgdba, backupdba and kmdba do not exist

[root@host01 etc]# cat /etc/group | grep dba
dba:x:501:oracle

– Create groups dgdba, backupdba and kmdba

# groupadd -g 54321 dgdba
# groupadd -g 54322 backupdba
# groupadd -g 54323 kmdba

– Check that groups dgdba, backupdba and kmdba have been created

[root@host01 etc]# cat /etc/group | grep dba
dba:x:501:oracle
dgdba:x:54321:
backupdba:x:54322:
kmdba:x:54323:

– Create a user test which is a member of dgdba group

[root@host01 /]# useradd test -g oinstall -G dgdba

[root@host01 /]# passwd test
Changing password for user test.
New UNIX password:

– Login as test user

[root@host01 /]# su - test

[test@host01 ~]$ . oraenv
ORACLE_SID = [test] ? orcl

– As test user, try to connect as SYSDG – this fails, because the dgdba group has not yet been mapped to the SYSDG administrative privilege

[test@host01 ~]$ dgmgrl

DGMGRL> connect sysdg/xx
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

– Verify in the configuration file that OS group dba currently maps to the administrative privileges SYSDBA, SYSKM, SYSDG and SYSBACKUP

[oracle@host01 ~]$ cat $ORACLE_HOME/rdbms/lib/config.c |grep define
/* SS_DBA_GRP defines the UNIX group ID for sqldba adminstrative access. */
#define SS_DBA_GRP "dba"
#define SS_OPER_GRP "oper"
#define SS_ASM_GRP ""
#define SS_BKP_GRP "dba"
#define SS_DGD_GRP "dba"
#define SS_KMT_GRP "dba"

– Edit the configuration file so that the OS groups dgdba, backupdba and kmdba map to the SYSDG, SYSBACKUP and SYSKM administrative privileges respectively.

[oracle@host01 ~]$ vi $ORACLE_HOME/rdbms/lib/config.c
#define SS_DBA_GRP "dba"
#define SS_OPER_GRP "oper"
#define SS_ASM_GRP ""
#define SS_BKP_GRP "backupdba"
#define SS_DGD_GRP "dgdba"
#define SS_KMT_GRP "kmdba"

– To relink the Oracle binaries, shut down all Oracle processes of all instances

a. Shut down the listener.

$ lsnrctl stop

b. Shut down all instances.

$ ps -ef |grep pmon |grep -v grep
oracle 11832 1 0 15:21 ? 00:00:00 ora_pmon_orcl

ORCL> shutdown immediate

— Relink binaries

[oracle@host01 ~]$ cd $ORACLE_HOME/bin; relink all

writing relink log to: /u01/app/oracle/product/12.1.0.2/dbhome_1/install/relink.log

– Now as test user connect as sysdg – succeeds

[test@host01 bin]$ dgmgrl

DGMGRL> connect sysdg/xx
Connected as SYSDG.

– Optionally modify existing OS user oracle to become part of new groups

#usermod -a -G dgdba,backupdba,kmdba oracle

[root@host01 /]# su - oracle

[oracle@host01 ~]$ id
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),502(oper),503(asmadmin),54321(dgdba),54322(backupdba),54323(kmdba)

Hope it helps!

Your comments and suggestions are always welcome.






The post Map OS Groups To Administrative Privileges After Installation appeared first on ORACLE IN ACTION.

Categories: DBA Blogs