
Feed aggregator

Cedar Wins Gold – PeopleSoft Partner of the Year

Duncan Davies - Tue, 2014-10-28 08:00

If you follow any number of those in the Partner community on LinkedIn you’ll have seen many of us asking that you vote for us in the UKOUG’s annual Partner of the Year competition. All of the partners are really grateful for your votes as winning an award selected by end-users carries significant prestige.

I’m delighted that the company which I now work for – Cedar Consulting – were awarded 1st place (Gold) for PeopleSoft Partner of the Year for 2014/2015.


Simon (right), collecting the award from David Warburton-Broadhurst – the UKOUG’s President

Cedar were also thrilled to win Silver in the Fusion Partner of the Year awards, further establishing our reputation as the go-to partner for Fusion/Taleo for all existing PeopleSoft customers.

Simon Wragg, Director at Cedar Consulting, said: “We are honoured to receive both the PeopleSoft and Fusion Partner of the Year awards amongst such a strong group of finalists. Cedar Consulting are delighted to be recognised as one of the leading partners within the UK Oracle User Group community. Winning these awards and knowing that so many votes were cast by Oracle customers is a real testament to the service we have provided over the last 12 months.”

We’d like to thank all of you who took the time to vote for us; we’re very grateful for your support.

 


Getting started with XQuery Update Facility 1.0

Marco Gralike - Tue, 2014-10-28 07:15
DeleteXML, InsertXML, UpdateXML, appendChildXML, insertChildXML, insertChildXMLafter, insertChildXMLbefore, insertXMLafter and insertXMLbefore are dead (& deprecated) from…

Demo User Community in BPM 12c Quickstart

Darwin IT - Tue, 2014-10-28 05:34
When you want to do demos or run the BPM12c workshops on a BPM12c QuickStart developer installation, you'll need the Oracle BPM Demo User community. You know: with Charles Dickens, John Steinbeck and friends.

Instructions can be found on the page behind this link. You'll need the demo-community scripts, which can be found via this link; download 'workflow-001-DemoCommunitySeedApp'.

However, besides adapting the build.properties file, there are a few changes to make in the build.xml.


First, find the property declaration for 'wls.home', and change it to:
<property name="wls.home" value="${bea.home}/wlserver"/>
This is needed since the WebLogic Server folder was renamed in the FMW12c home. Then, after the comment 'End Default values for params', add the following:

<!-- Import taskdef for ant-contrib functions such as if -->
<property name="ant-contrib.jar" value="${bea.home}/oracle_common/modules/net.sf.antcontrib_1.1.0.0_1-0b3/lib/ant-contrib.jar"/>
<taskdef resource="net/sf/antcontrib/antlib.xml">
  <classpath>
    <pathelement location="${ant-contrib.jar}"/>
  </classpath>
</taskdef>
This is because the script lacks a definition for the ant-contrib lib, which is needed, among other things, for the 'if' tasks. After this change, it worked for me.
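
For reference, here is a minimal sketch of the kind of ant-contrib 'if' construct that the taskdef above enables (the property name is purely illustrative):

<if>
  <equals arg1="${seed.demo.users}" arg2="true"/>
  <then>
    <echo message="Seeding the demo user community"/>
  </then>
  <else>
    <echo message="Skipping the demo user community seeding"/>
  </else>
</if>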

GitHub DMCA takedown notices on the rise!

Nilesh Jethwa - Tue, 2014-10-28 04:54

GitHub maintains a list of all DMCA takedown notices along with counteractions and retractions if any.

Analysing all the notices since 2011, it seems that takedown notices are on the rise.

Year View : Notice the sharp increase in 2014


Quarterly view: Looking at the quarterly breakdown, it seems the takedowns are cooling off in the later quarters.


So who is issuing these DMCA takedowns?

Here is the complete list of all companies who issued DMCA takedowns


NOTE: The names were extracted from the description text

And here are the counteractions and retractions


See the full list of companies with notice type

So the important question is: “Why have the DMCA takedown notices increased?”

One important thing to note is that sites like Stack Overflow encourage users to replicate the content of the web page from which the original idea, algorithm or source code was copied. To be honest, this is a good thing, because these referenced sites often become zombies and you don’t want to lose that knowledge. But could it be that such copied source code ends up on GitHub, causing the increase in takedown notices as companies start discovering it?


Oracle Grid Infrastructure: fixing the PRVG-10122 error during installation

Yann Neuhaus - Tue, 2014-10-28 02:22

Making errors is human, and when you configure a new Oracle Grid Infrastructure environment (especially one with a large number of nodes), mistakes can happen when configuring ASMLIB on all nodes. If you get an error looking like "PRVG-10122 : ASMLib configuration...does not match with cluster nodes", there is a simple solution to fix it.

When you are installing Grid Infrastructure, the following error can occur in the cluvfy output or directly in the pre-requisites check step of the OUI:

 

ERROR:
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_UID" on the node "srvora01" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_GID" on the node "srvora01" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_ENABLED" on the node "srvora01" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_UID" on the node "srvora02" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_GID" on the node "srvora02" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_ENABLED" on the node "srvora02" does not match with cluster nodes
Result: Check for ASMLib configuration failed.

 

The three parameters ORACLEASM_UID, ORACLEASM_GID and ORACLEASM_ENABLED displayed in the error should have been defined when configuring ASMLIB on the system (see Grid Infrastructure 12c pre-requisites for installation, configure ASM step). To check whether the configuration is consistent across the nodes named in the error above, run the following command as root on all concerned nodes. In my case, srvora01 and srvora02 are the servers involved:

 

On srvora01

[root@srvora01 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

 

On srvora02

[root@srvora02 ~]# oracleasm configure
ORACLEASM_ENABLED=false
ORACLEASM_UID=..
ORACLEASM_GID=..
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

 

As we can see, ASMLIB has not been configured on srvora02: ORACLEASM_ENABLED is false, and no UID or GID is provided. These are the default values! The parameters differ between the two nodes.
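
On clusters with many nodes, a quick loop like the following makes such mismatches easy to spot (a minimal sketch, assuming passwordless SSH as root to every node):

for node in srvora01 srvora02; do
    echo "### ${node}"
    ssh root@${node} /usr/sbin/oracleasm configure
done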

To solve the issue, simply reconfigure ASMLIB on the second node by running the following command with the right parameters:

 

On srvora02

 

[root@srvora02 ~]# oracleasm configure -i

 

Configuring the Oracle ASM library driver.  This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

 

Now, the parameters are the same between the two nodes:

 

[root@srvora02 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

 

The Grid Infrastructure installation can now continue. I know the error message was quite explicit, but after spending hours configuring all the nodes of my cluster, it took me some time to spot my big mistake. I hope this helps you.

First Rows

Jonathan Lewis - Tue, 2014-10-28 01:01

Following on from the short note I published about the first_rows optimizer mode yesterday here’s a note that I wrote on the topic more than 2 years ago but somehow forgot to publish.

I can get quite gloomy when I read some of the material that gets published about Oracle; not so much because it’s misleading or wrong, but because it’s clearly been written without any real effort being made to check whether it’s true. For example, a couple of days ago [ed: actually some time around May 2012] I came across an article about optimisation in 11g that seemed to be claiming that first_rows optimisation somehow “defaulted” to first_rows(1), or first_rows_1, optimisation if you didn’t supply a final integer value.

For at least 10 years the manuals have described first_rows (whether as a hint or as a parameter value) as being available for backwards compatibility; so if it’s really just a synonym for first_rows_1 (or first_rows(1)) you might think that the manuals would actually mention this. Even if the manuals didn’t mention it you might just consider a very simple little test before making such a contrary claim, and if you did make such a test and found that your claim was correct you might actually demonstrate (or describe) the test so that other people could check your results.

It’s rather important, of course, that people realise (should it ever happen) that first_rows has silently changed into first_rows_1 because any code that’s using it for backwards compatibility might suddenly change execution path when you did the critical upgrade where the optimizer changed from “backwards compatibility” mode to “completely different optimisation strategy” mode. So here’s a simple check (run from 11.2.0.4 – to make sure I haven’t missed the switch):

begin
        dbms_stats.set_system_stats('MBRC',16);
        dbms_stats.set_system_stats('MREADTIM',10);
        dbms_stats.set_system_stats('SREADTIM',5);
        dbms_stats.set_system_stats('CPUSPEED',1000);
end;
/

create table t2 as
select
        mod(rownum,200)         n1,
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from    all_objects
where rownum <= 3000
;

begin
        dbms_stats.gather_table_stats(
                user,
                't2',
                method_opt => 'for all columns size 1'
 
        );
end;
/

create index t2_i1 on t2(n1);

SQL> select /*+ all_rows */ n2 from t2 where n1 = 15;

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    15 |   120 |    12   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T2   |    15 |   120 |    12   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1"=15)

You’ll notice that I’ve created my data in a way that means I’ll have 15 rows with the value 15, scattered evenly through the table. As a result of the scattering the clustering_factor on my index is going to be similar to the number of rows in the table, and the cost of fetching all the rows by index is going to be relatively high. Using all_rows optimization Oracle has chosen a tablescan.

So what happens if I use the first_rows(1) hint, and how does this compare with using the first_rows hint ?

SQL> select /*+ first_rows(1) */ n2 from t2 where n1 = 15;

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     2 |    16 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T2    |     2 |    16 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T2_I1 |       |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1"=15)

SQL> select /*+ first_rows */ n2 from t2 where n1 = 15;

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    15 |   120 |    16   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T2    |    15 |   120 |    16   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T2_I1 |    15 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1"=15)

You might not find it too surprising to see that Oracle used the indexed access path in both cases. Does this mean that first_rows really means (or defaults to) first_rows(1) ?

Of course it doesn’t – you need only look at the estimated cost and cardinality to see this. The two mechanisms are clearly implemented through different code paths. The first_rows method uses some heuristics to restrict the options it examines, but still gives us the estimated cost and cardinality of fetching ALL the rows using the path it has chosen. The first_rows(1) method uses arithmetic to decide on the best path for getting the first row, and adjusts the cost accordingly to show how much work it thinks it will have to do to fetch just that one row.

Of course, no matter how inane a comment may seem to be, there’s always a chance that it might be based on some (unstated) insight. Is there any way in which first_rows(n) and first_rows are related? If so, could you possibly manage to claim that this establishes a “default value” link?

Funnily enough there is a special case: if you try hinting with first_rows(0) – that’s the number zero – Oracle will use the old first_rows optimisation method – you can infer this from the cost and cardinality figures, or you can check the 10053 trace file, or use a call to dbms_xplan() to report the outline.  It’s an interesting exercise (left to the reader) to decide whether this is the lexical analyzer deciding to treat the “(0)” as a new – and meaningless – token following the token “first_rows”, or whether it is the optimizer recognising the lexical analyzer allowing “first_rows(0)” as a token which the optimizer is then required to treat as first_rows.
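
If you want to check this yourself, it’s a one-line test against the same t2 table (the comments describe what you should expect to see, rather than verified output):

SQL> select /*+ first_rows(0) */ n2 from t2 where n1 = 15;

-- Expect the cost and cardinality to match the first_rows plan shown
-- earlier (cost 16, rows 15) rather than the first_rows(1) plan (cost 3,
-- rows 2); a 10053 trace or the dbms_xplan outline should confirm that
-- the old first_rows optimisation method was used.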

Mind you, if you only want the first zero rows of the result set there’s a much better trick you can use to optimise the query – don’t run the query.

 


SQL Server & memory leak: Are you sure?

Yann Neuhaus - Mon, 2014-10-27 22:02

I have recently come across an issue with one of my customers, who told me that their SQL Server 2008 R2 instance had a memory leak. These are strong words! The server has 64 GB of RAM and the SQL Server instance concerned is limited to 54 GB according to the max server memory (GB) server option. However, he noticed that SQL Server was using 60 GB of memory, which did not correspond to the configured max server memory option. What’s going on? Let’s investigate what is causing this memory overhead.

I admit I work mainly on SQL Server 2012 now, because most of my customers have upgraded to this version, so I had to revisit my knowledge of memory architecture in SQL Server 2008 R2.

Let’s start with the following question: is it possible for SQL Server to consume more memory than a configured memory limit? For SQL Server versions older than 2012 the answer is yes, and this is directly related to the memory architecture design. From this point on, we will talk exclusively about the SQL Server 2005 / 2008 versions. As you certainly know, SQL Server caps only the buffer pool memory area, which mainly concerns the single-page allocator mechanism.

Let’s take a little deep dive into SQL Server memory architecture here to understand where the single-page allocator comes from. My goal is not to provide a complete explanation of SQL Server memory management with SQLOS, but just enough of the concepts to understand how to troubleshoot the issue presented in this article.

First of all, threads in SQL Server cannot interface with memory directly. They must go through the memory allocator routines of a memory node, which knows which Windows APIs to use to honour a memory request. The memory node is a hidden component and provides locality of allocation. Page allocators are one type of memory allocator, and the most commonly used by the SQLOS memory manager, because they allocate memory in multiples of the SQLOS page (payload = 8KB). Because the memory node is a hidden component, threads cannot use it directly: they must create a memory object, which has its own memory clerk depending on its type.

A memory clerk is another component that provides data caching, memory control (in case of memory pressure, for instance) and memory usage tracking, and supports the same types of memory allocators as a memory node. There are several types of memory allocators, such as the single-page allocator, which can only provide one page at a time, and the multi-page allocator, which provides a set of pages at a time.

To summarize, when a thread requests memory to create a memory object, the request goes to the relevant memory clerk, which in turn calls the relevant memory allocator. Here is a simple representation of a memory allocation:

 

{Thread} --> {Memory object} --> {Memory Clerk} --> {Memory allocator in Memory Node}

 

The most interesting part here is that the buffer pool in SQL Server versions older than 2012 acts as both a memory clerk and a consumer. It means that it can provide single pages to other consumers and track its own memory consumption. You can read the detailed explanations in the excellent articles by Slava Oks here.

This is why we only control the buffer pool size by configuring the min / max server memory options: the buffer pool provides single-page allocations, unlike the other memory clerks with their multi-page allocations. Note that SQL Server 2012 memory management is completely different and, fortunately, gives us better control over SQL Server's memory limit.

So, now that we know exactly what is capped by the “max server memory” server-level option, let's go back to my problem. Unfortunately I cannot reveal the real context of my customer here, but no problem: I am able to simulate the same issue.

In my scenario, I have a SQL Server 2008 R2 instance capped to 2560 MB and the total memory on my server is 8192 MB. If I take a look at the task manager panel, like my customer did, I can see that SQL Server uses more than the configured value:

 


 

Quick math tells us that SQL Server is using 3.4 GB rather than 2.5 GB. So where does the remaining part come from? If we take a look at the DMV sys.dm_os_sys_info, we can confirm that the buffer pool is capped to 2560 MB as expected (single-page allocations):

 

SELECT physical_memory_in_bytes / 1024 / 1024 AS physical_memory_MB,
       bpool_committed / 128 AS bpool_MB,
       bpool_commit_target / 128 AS bpool_target_MB,
       bpool_visible / 128 AS bpool_visible_MB
FROM sys.dm_os_sys_info (nolock);

 

 


 

Now let's take a look at the multi-page allocator statistics for the memory clerks by using the DMV sys.dm_os_memory_clerks (remember that memory clerks have memory usage tracking capabilities):

 

SELECT name AS clerk_name,
       memory_node_id,
       SUM(single_pages_kb) / 1024 AS single_page_total_size_mb,
       SUM(multi_pages_kb) / 1024 AS multi_page_total_size_mb,
       SUM(awe_allocated_kb) / 1024 AS awe_allocated_size_MB
FROM sys.dm_os_memory_clerks (nolock)
WHERE memory_node_id <> 64
GROUP BY memory_node_id, name
HAVING SUM(multi_pages_kb) > 0
ORDER BY SUM(single_pages_kb) + SUM(multi_pages_kb) + SUM(awe_allocated_kb) DESC;

 


 

Note that in my case (the same scenario as my customer's, but with a different order of magnitude), we have one particular memory clerk with a multi-page allocator size far greater than the others. Quick math (2560 + 863 = 3423 MB) accounts for most of the memory overhead in this case. So, at this point we can say that SQL Server does not suffer from any memory leak. This behaviour is normal 'by design'.

Finally, let's go back to the root cause of my customer's issue: the memory clerk for the TokenAndPermUserStore cache, which grows over time (approximately 6 GB of memory overhead). What exactly is the TokenAndPermUserStore cache?

Well, this is a security cache that maintains the different security token types, like LoginToken, TokenPerm, UserToken, SecContextToken and TokenAccessResult, generated when a user executes a query. Problems can occur as this cache store grows: the time needed to search for existing security entries to reuse increases, potentially causing slower query times, because access to this cache is controlled by only one thread (please refer to Microsoft KB 927396). In my demo, we can also see a lot of different entries related to the TokenAndPermUserStore cache by using the following query:

 

WITH memory_cache_entries AS (
    SELECT name AS entry_name,
           [type],
           in_use_count,
           pages_allocated_count,
           CAST(entry_data AS XML) AS entry_data
    FROM sys.dm_os_memory_cache_entries (nolock)
    WHERE type = 'USERSTORE_TOKENPERM'
),
memory_cache_entries_details AS (
    SELECT entry_data.value('(/entry/@class)[1]', 'bigint') AS class,
           entry_data.value('(/entry/@subclass)[1]', 'int') AS subclass,
           entry_data.value('(/entry/@name)[1]', 'varchar(100)') AS token_name,
           pages_allocated_count,
           in_use_count
    FROM memory_cache_entries
)
SELECT class,
       subclass,
       token_name,
       COUNT(*) AS nb_entries
FROM memory_cache_entries_details
GROUP BY token_name, class, subclass
ORDER BY nb_entries DESC;

 

 


 

The situation above was very similar to my customer's issue, but we didn't find any related performance problems, only a large number of SQL Server logins and ad-hoc queries used by the application concerned.

The fix consisted of flushing entries from the TokenAndPermUserStore cache according to the workaround provided by Microsoft in KB 927396, as sketched below. I hope it is only a temporary solution until we have time to investigate from the application code side, but that is another story!
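
For reference, the flush itself boils down to a single documented command (one of the workarounds in KB 927396):

-- clears the TokenAndPermUserStore cache store; entries are rebuilt on
-- demand, so expect a brief security-check overhead just afterwards
DBCC FREESYSTEMCACHE ('TokenAndPermUserStore');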

However, here is the moral of my story: check carefully how SQL Server uses memory before saying that it suffers from a memory leak!

New Central Resource for Info on PeopleSoft Update Manager

PeopleSoft Technology Blog - Mon, 2014-10-27 18:39
You probably have already heard that PeopleSoft is moving to a new model for providing updates and maintenance to customers.  This new life cycle model enables customers to adopt changes incrementally and on their schedule.  It is much less disruptive, is cheaper and simpler, makes updates available sooner, and gives customers greater control.  An essential part of the new model is PeopleSoft Update Manager.  Now there is a new central resource for everything about PUM and PeopleSoft Update Images.  With this resource you can find information about the methodology and purpose of PUM, the update image home pages, and specific information about each image, troubleshooting links, best practices, and more.  Check it out!

How To Change The Priority Of Oracle Background Processes

Before you get in a huff, it can be done! You can change an Oracle Database background process priority through an instance parameter! I'm not saying it's a good idea, but it can be done.

In this post I explore how to make the change, just how far you can take it, and when you may want to consider changing an Oracle background process priority.

To get your adrenaline going, check out the instance parameter _highest_priority_processes on one of your production Oracle systems running version 11 or greater. Here is an example using my OSM tool, ipx.sql, on my Oracle Database version 12.1.0.2.0.
SQL> @ipx _highest_priority_processes
Database: prod40 27-OCT-14 02:22pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_highest_priority_processes = VKTM Highest Priority TRUE
Process Name Mask
Then at the Linux prompt, I did:
$ ps -eo pid,class,pri,nice,time,args | grep prod40
2879 TS 19 0 00:00:00 ora_pmon_prod40
2881 TS 19 0 00:00:01 ora_psp0_prod40
2883 RR 41 - 00:02:31 ora_vktm_prod40
2889 TS 19 0 00:00:01 ora_mman_prod40
2903 TS 19 0 00:00:00 ora_lgwr_prod40
2905 TS 19 0 00:00:01 ora_ckpt_prod40
2907 TS 19 0 00:00:00 ora_lg00_prod40
2911 TS 19 0 00:00:00 ora_lg01_prod40
...
Notice the "pri" for priority of the ora_vktm_prod40 process? It is set to 41 while all the rest of the Oracle background processes are set to the default of 19. Very cool, eh?

Surprised By What I Found
Surprised? Yes, surprised, because changing Oracle process priority is a Pandora's box. Just imagine if an Oracle server (i.e., foreground) process had its priority lowered just a little and then attempted to acquire a latch or a mutex. If it doesn't get the latch quickly, it might never get it!

From a user experience perspective, sometimes performance is really quick and other times the application just hangs.

This actually happened to a customer of mine years ago when the OS started reducing a process's priority after it consumed a certain amount of CPU. I learned that Oracle processes are programmed to expect an even process-priority playing field. If you try to "game" the situation, do so at your own risk... not Oracle's.

Then why did Oracle Corporation allow background process priority to be changed? And why did Oracle Corporation actually change a background process's priority?!

Doing A Little Exploration
It turns out there are a number of "priority"-related underscore instance parameters! On my 11.2.0.1.0 system there are 6 "priority" parameters. On my 12.1.0.1.0 system there are 8. On my 12.1.0.2.0 system there are 13! So clearly Oracle is making changes. In all cases, the parameter I'm focusing on, "_high_priority_processes", exists.

In this posting, I'm going to focus on my Oracle Database 12c version 12.1.0.2.0 system. While you may see something different in your environment, the theme will be the same.

While I'll be blogging about all four of the below parameters, in this posting my focus will be on the _high_priority_processes parameter. Below are the defaults on my system:
_high_priority_processes       LMS*
_highest_priority_processes    VKTM
_os_sched_high_priority        1
_os_sched_highest_priority     1
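
For reference, here is a minimal sketch of how such a hidden parameter is typically changed (assuming an spfile; note the double quotes required around underscore parameters, and remember: at your own risk):

SQL> alter system set "_high_priority_processes"='LMS*|LGWR' scope=spfile;
SQL> shutdown immediate
SQL> startup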

Messing With The LGWR Background Processes
I'm not testing this on a RAC system, so I don't have an LMS background process. When I saw the "LMS*" I immediately thought, "regular expression." Hmmm... I wonder if I can change the LGWR background process. So I made the instance parameter change and recycled the instance. Below shows the instance parameter change:
SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:36pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = LMS*|LGWR High Priority FALSE
Process Name Mask

Below is an operating system perspective using the ps command:

ps -eo pid,class,pri,nice,time,args | grep prod40
...
5521 RR 41 - 00:00:00 ora_vktm_prod40
5539 TS 19 0 00:00:00 ora_dbw0_prod40
5541 RR 41 - 00:00:00 ora_lgwr_prod40
5545 TS 19 0 00:00:00 ora_ckpt_prod40
5547 TS 19 0 00:00:00 ora_lg00_prod40
5551 TS 19 0 00:00:00 ora_lg01_prod40
...

How Far Can I Take This?
At this point in my journey, my mind was ablaze! The log file sync wait event can be really difficult to deal with, especially when there is a CPU bottleneck. Hmmm... perhaps I can increase the priority of all the log writer background processes?

So I made the instance parameter change and recycled the instance. Below shows the instance parameter change:
SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:44pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = LMS*|LG* High Priority FALSE
Process Name Mask

Below is an operating system perspective using the ps command:

ps -eo pid,class,pri,nice,time,args | grep prod40
...
5974 TS 19 0 00:00:00 ora_psp0_prod40
5976 RR 41 - 00:00:00 ora_vktm_prod40
5994 TS 19 0 00:00:00 ora_dbw0_prod40
5996 RR 41 - 00:00:00 ora_lgwr_prod40
6000 TS 19 0 00:00:00 ora_ckpt_prod40
6002 RR 41 - 00:00:00 ora_lg00_prod40
6008 RR 41 - 00:00:00 ora_lg01_prod40
6014 TS 19 0 00:00:00 ora_lreg_prod40
...

So now all the log writer background processes have a high priority. My hope would be that if there is an OS CPU bottleneck and the log writer background processes wanted more CPU, I now have the power to give that to them! Another tool in my performance tuning arsenal!

Security Hole?
At this point, my exuberance began to turn into paranoia. I thought, "Perhaps I can increase the priority of an Oracle server process or perhaps any process." If so, that would be a major Oracle Database security hole.

With fingers trembling, I changed the instance parameters to match an Oracle server process and recycled the instance. Below shows the instance parameter change:

SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:52pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = High Priority FALSE
LMS*|LG*|oracleprod40 Process Name Mask

Below is an operating system perspective using the ps command:

$ ps -eo pid,class,pri,nice,time,args | grep prod40
...
6360 TS 19 0 00:00:00 ora_psp0_prod40
6362 RR 41 - 00:00:00 ora_vktm_prod40
6366 TS 19 0 00:00:00 ora_gen0_prod40
6382 RR 41 - 00:00:00 ora_lgwr_prod40
6386 TS 19 0 00:00:00 ora_ckpt_prod40
6388 RR 41 - 00:00:00 ora_lg00_prod40
6394 RR 41 - 00:00:00 ora_lg01_prod40
6398 TS 19 0 00:00:00 ora_reco_prod40
...
6644 TS 19 0 00:00:00 oracleprod40...
...

OK, that didn't work so how about this?

SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:55pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = High Priority FALSE
LMS*|LG*|*oracle* Process Name Mask

Let's see what happened at the OS.

$ ps -eo pid,class,pri,nice,time,args | grep prod40
...
6701 RR 41 - 00:00:00 ora_vktm_prod40
6705 RR 41 - 00:00:00 ora_gen0_prod40
6709 RR 41 - 00:00:00 ora_mman_prod40
6717 RR 41 - 00:00:00 ora_diag_prod40
6721 RR 41 - 00:00:00 ora_dbrm_prod40
6725 RR 41 - 00:00:00 ora_vkrm_prod40
6729 RR 41 - 00:00:00 ora_dia0_prod40
6733 RR 41 - 00:00:00 ora_dbw0_prod40
...
6927 RR 41 - 00:00:00 ora_p00m_prod40
6931 RR 41 - 00:00:00 ora_p00n_prod40
7122 TS 19 0 00:00:00 oracleprod40 ...
7124 RR 41 - 00:00:00 ora_qm02_prod40
7128 RR 41 - 00:00:00 ora_qm03_prod40

Oh Oh... That's not good! Now EVERY Oracle background process has a higher priority and my Oracle server process does not.

So my "*" wildcard caused all the Oracle background processes to be included. If all the processes have a high priority, then the log writer processes have no advantage over the others. And to make matters even worse, my goal of increasing the server process priority was not achieved.

However, this is actually very good news, because it appears this is not an Oracle Database security hole! To me, it looks like the priority parameter is applied during instance startup to just the background processes. Since my server process was started after the instance, and is certainly not included in the list of background processes, its priority was not affected. Good news for security; not such good news for a performance-optimizing fanatic such as myself.

Should I Ever Increase A Background Process Priority?
Now that we know how to increase an Oracle Database background process priority, when would we ever want to do this? The short answer is: probably never. But the long answer is the classic "it depends."

Let me give you an example. Suppose there is an OS CPU bottleneck and the log writer background processes are consuming lots of CPU while handling all the associated memory management when server processes issue commits. In this situation, performance may benefit from making it easier for the log writer processes to get CPU cycles. But don't even think about doing this unless there is a CPU bottleneck. And even then, be very, very careful.

In my next blog posting, I'll detail an experiment where I changed the log writer background processes' priority.

Thanks for reading!

Craig.



Categories: DBA Blogs

Contributions by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Mon, 2014-10-27 13:22
Contributions by Angela Golla, Infogram Deputy Editor

Texas Delivers Citizen Services in Secure Cloud
The first US state to deliver a complete portfolio of citizen services in the Cloud, Texas is the leader in Open Government. With a "my government, my way" promise, Texas relies on Oracle to help 30+ agencies deliver services to nearly 30M citizens.  This is a brief video on how Texas accomplished this. 

MariaDB 10.0 Multi-source Replication at Percona Live UK 2014

Pythian Group - Mon, 2014-10-27 12:12

Percona Live UK is upon us and I have the privilege to present a tutorial on setting up multi-source replication in MariaDB 10.0 on Nov 3, 2014.

If you’re joining me at PLUK14, we will go over setting up two different topologies that incorporate the features in MariaDB. The first is a mirrored topology:


Replication Topologies – Mirrored

This basically makes use of an existing DR environment by setting it up so that either master can be written to. Please be advised that this is normally not recommended, due to the complexity of making your application able to resolve conflicts and handle the data sync issues that might arise from writing to multiple masters.

The second topology is a basic fan-in topology:


Replication Topologies – Fan-in

This use case is more common, especially for unrelated datasets that can be gathered onto a single machine for reporting purposes or as part of a backup strategy. It was also previously available in MySQL only through external tools such as Tungsten Replicator.
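
To make the fan-in setup concrete, here is a minimal sketch of the slave-side commands using MariaDB 10.0 named replication connections (host names and credentials are purely illustrative):

CHANGE MASTER 'master1' TO
    MASTER_HOST='db1.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_USE_GTID=current_pos;

CHANGE MASTER 'master2' TO
    MASTER_HOST='db2.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_USE_GTID=current_pos;

START ALL SLAVES;
SHOW ALL SLAVES STATUS;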

As promised in the description of the tutorial, I am providing a Vagrantfile for it. This can be downloaded/cloned from my PLUK14 repository.

The vagrant environment requires at least Vagrant 1.5 to make use of Vagrant Cloud.

I hope to see you next week!

Categories: DBA Blogs

Developing Your First Oracle Alta UI page with Oracle ADF Faces

Shay Shmeltzer - Mon, 2014-10-27 11:20

At Oracle OpenWorld this year Oracle announced the new Oracle Alta UI - a set of UI guidelines that will help you create better looking and functioning applications. We use these guidelines to build all our modern cloud based applications and products - and you can use it too today if you are on JDeveloper 12.1.3. 

The Alta UI site is at http://bit.ly/oraclealta

Take a look for example at one page from the master details pattern page:


You might be wondering: how do I go about creating such an Alta page layout?

Below is a quick video that shows you how to build such a page from scratch.


A few things you'll see during the demo:

  • Basic work with the panelGridLayout - for easier page structure
  • Working with the new tablet first page template 
  • Enabling selection on a listView component
  • Working with the circular status meter
  • The new AFAppNavbarButton style class
  •  Hot-swap usage to reduce page re-runs

One point to raise about this video is that it focuses more on getting the layout and look, rather than the Alta way of designing an application flow and content. With a more complete Alta mindset you'll also figure out things like fields that probably don't need to be shown (such as the employee_id), and you'll think more about "why is the user on this page and what will he want to do here?", which might mean you'll add things like a way to see all the employees in a department in a hierarchy viewer rather than a form that scrolls one record at a time. There are more things you can do to this page to get even better functionality, but more on those in future blog entries...

Categories: Development

Fastest growing and rapidly declining job industry

Nilesh Jethwa - Mon, 2014-10-27 09:17

Data source : http://www.bls.gov/emp/tables.htm#occtables

 

Fastest growing job industry


Original Visualization

Most rapidly declining job industry


Rapidly declining jobs link


The Power of the Oracle Database Proxy Authenticated Connections

Pythian Group - Mon, 2014-10-27 08:46

We recently received this inquiry from a client:

“Can an Oracle database account support two passwords at once so we can roll out updated credentials to application servers gradually, rather than having to change them all at the same time? Then once all of the application servers have been configured to use the new/second password we can change or remove the first one?”

The short answer is no. Like most computer technologies, an Oracle database user has only one password that is valid at any given time. However, a very powerful and under-appreciated feature of the Oracle database could be used in this case: It is called proxy authentication.

 

How proxy authentication works

Introduced with Oracle Database 10g, the basic premise of proxy authentication is that a user with the required permission can connect to the Oracle database using their own credentials, but proxy into another user in the database. To put it more plainly: connect as USER_A but using the password of USER_B!

The proxy permission is granted through the “CONNECT THROUGH” privilege. Interestingly, it is granted through an ALTER USER command, as it is really an “authorization” and a property of the user, not truly a privilege like the traditional privileges we’re used to:

SQL> connect / as sysdba
Connected.
SQL> alter user USER_A grant connect through USER_B;

User altered.

SQL>

 

Now USER_B who may not know the password for USER_A can connect as USER_A by specifying the proxy account in square brackets in the connection string:

SQL> connect USER_B[USER_A]/passw0rd
Connected.
SQL> show user
USER is "USER_A"
SQL>

 

The password specified was the one for USER_B, not USER_A.  Hence the credentials for USER_B were used but the end result is that the session is connected as USER_A!

Specifically when a proxy authenticated connection is made the USERENV namespace parameters are updated as follows:

  • The “SESSION_USER” becomes USER_A
  • The “SESSION_SCHEMA” also becomes USER_A
  • The “PROXY_USER” remains USER_B, who initiated the connection and whose credentials were used

Since sys_context(‘USERENV’,’PROXY_USER’) remains unchanged, the connection is properly audited and information on who made the initial connection can still be recorded in audit records.  However, for all other purposes, USER_B has effectively connected to the database as USER_A without having to know USER_A’s password.

So back to the original question: a possible approach to the problem would be to create a second account, USER_B, that has permission to proxy into the application user account APP_USER.  They could then gradually roll out the credential change (using the new USER_B to proxy into APP_USER) to all of their app servers.  Once all app servers have been updated, it would be safe to change the password on the base application account APP_USER, as sketched below.
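
Sketching that out (user names and passwords are illustrative):

SQL> create user USER_B identified by "n3w_passw0rd";
SQL> grant create session to USER_B;
SQL> alter user APP_USER grant connect through USER_B;

The app servers can then be migrated, at their own pace, to connect as USER_B[APP_USER]/n3w_passw0rd, and only once they have all been switched over does APP_USER's own password get changed.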

 

A similar feature is the ability to change the current session’s schema. For example as USER_B issuing:

alter session set current_schema = USER_A;

 

This is a very quick and simple approach, but it isn’t quite the same. Doing this only changes the “CURRENT_SCHEMA”, which is the currently active default schema; any queries issued without specifying a schema name will resolve against “CURRENT_SCHEMA”. But there are many cases when actually connecting as another user is required. For example, if the DBA needs to drop and re-create a database link, then the “current_schema” approach will not suffice, but the proxy authenticated connection alternative will work perfectly.

Another case where the “current_schema” approach may be an issue is if the application is user aware.  What I mean by this is that the application may have some logic such as “if user = USER_A then do stuff“.  If you connect as USER_B and simply change the current schema, then the boolean logic of this condition will evaluate to FALSE.  However, if you use a proxy authenticated connection as user USER_A, the condition will evaluate to TRUE.

Previously, if the DBA needed to connect to the database as a specific user (maybe to re-create a DB link, for example) they might have employed the old trick of temporarily changing the user’s password, quickly connecting, and then quickly changing it back using the extracted/saved password hash.  However, there are numerous serious problems with this approach:

  1. The schema may be locked
  2. The password may be controlled by a PROFILE that may also need to be adjusted.
  3. Account intrusion detection tools may detect the connection.
  4. The connection may not be properly audited via Oracle or external auditing tools.
  5. The application may unsuccessfully try to connect while the password is temporarily changed causing an application failure!

Hence that approach should never be used. The proxy authenticated connection alternative doesn’t have any of those issues and is perfectly safe.

 

A simple example

Putting it all together into a small example to show how the userenv properties are affected:

SQL> alter user USER_A grant connect through USER_B;

User altered.

SQL> connect USER_B[USER_A]/passw0rd
Connected.
SQL> alter session set current_schema = SCOTT;

Session altered.

SQL> select sys_context('USERENV','SESSION_USER') as session_user,
  2  sys_context('USERENV','SESSION_SCHEMA') as session_schema,
  3  sys_context('USERENV','CURRENT_SCHEMA') as current_schema,
  4  sys_context('USERENV','PROXY_USER') as proxy_id,
  5  user
  6  from dual;

SESSION_USER   SESSION_SCHEMA CURRENT_SCHEMA PROXY_ID       USER
-------------- -------------- -------------- -------------- ------------
USER_A         SCOTT          SCOTT          USER_B         USER_A

SQL>

 

As can be seen above, for all intents and purposes the connection has been made to USER_A, but using the credentials of USER_B.  USER_A’s password did not need to be known, nor was the USER_A account affected or adjusted in any way.

FAQs

What if USER_A’s password is locked or expired?

The answer is that the connection will still report the same error as it would have if a direct connection to USER_A was made:

SQL> connect / as sysdba
Connected.
SQL> alter user USER_A account lock;

User altered.

SQL> connect USER_B[USER_A]/passw0rd
ERROR:
ORA-28000: the account is locked


Warning: You are no longer connected to ORACLE.
SQL>

Can this be used with other tools such as Data Pump?

(Importing as the actual user instead of a DBA user was necessary with Oracle 10g under specific circumstances, such as importing JOBs and REFRESH GROUPs.)  The answer is yes, it works with Data Pump and other similar tools:

$ impdp dumpfile=temp.dmp nologfile=y include=JOB

Import: Release 11.2.0.4.0 - Production on Wed Oct 15 19:10:13 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: USER_B[USER_A]/passw0rd

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "USER_A"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "USER_A"."SYS_IMPORT_FULL_01":  USER_B[USER_A]/******** dumpfile=temp.dmp nologfile=y include=JOB
Processing object type SCHEMA_EXPORT/JOB
Job "USER_A"."SYS_IMPORT_FULL_01" successfully completed at Wed Oct 15 19:10:22 2014 elapsed 0 00:00:01

$

Notice that the Data Pump master table is created in the USER_A schema even though we connected using the USER_B credentials.

Is proxy authentication supported by JDBC/JDBC thin driver?

Yes, it works through almost any OCI connection including JDBC connections.
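
As a sketch, the thin driver exposes this through its proxy-session API (connection details below are illustrative):

import java.sql.DriverManager;
import java.util.Properties;
import oracle.jdbc.OracleConnection;

public class ProxyDemo {
    public static void main(String[] args) throws Exception {
        // connect with USER_B's credentials
        OracleConnection conn = (OracleConnection) DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "USER_B", "passw0rd");
        // ... then proxy into USER_A without knowing USER_A's password
        Properties props = new Properties();
        props.put(OracleConnection.PROXY_USER_NAME, "USER_A");
        conn.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, props);
        // the session now runs as USER_A; PROXY_USER remains USER_B
        conn.close();
    }
}

The familiar USER_B[USER_A] connect-string syntax also works wherever a full connect string is accepted.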

What about Oracle Wallets?

The answer again is yes, they can support it too! See below for an example using an Oracle Wallet:

$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential ORCL USER_A passw0rd
Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
Create credential oracle.security.client.connect_string1
$ mkstore -wrl "/u01/app/oracle/wallet" -listCredential
Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
1: ORCL USER_A
$

$ sqlplus /@ORCL

SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 15 13:45:04 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show user
USER is "USER_A"
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

$ sqlplus [app_user]/@ORCL

SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 15 13:45:14 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show user
USER is "APP_USER"
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
$

 

Reporting and Revoking

Finally, how can we report on the proxy authentication authorizations we’ve granted, in case we need to clean them up (revoke them)?  Or perhaps we just need to report on or audit what’s out there?  Fortunately, it’s as simple as querying a catalog view to see what’s been set, and we can remove/revoke through another simple ALTER USER command:

SQL> select * from PROXY_USERS;

PROXY        CLIENT       AUT FLAGS
------------ ------------ --- -----------------------------------
USER_B       USER_A       NO  PROXY MAY ACTIVATE ALL CLIENT ROLES

SQL> alter user USER_A revoke connect through USER_B;

User altered.

SQL> select * from PROXY_USERS;

no rows selected

SQL>

 

Conclusion

Since the introduction of the GRANT ANY OBJECT PRIVILEGE system privilege with Oracle9i, the number of times that the DBA actually needs to connect as another user has been reduced.  However, there are still some distinct situations, such as those mentioned in the examples above, when connecting as another user may be absolutely necessary.

Thanks to the proxy authenticated connection capabilities introduced with Oracle Database 10g, connecting as another user when you don’t know the other account’s password has become a breeze.  And even if you do know the password, connecting through proxy authentication can still add value with the additional audit information.

Have any other situations where connecting as another user is absolutely necessary? Share them in the comments section below.

 

References

http://docs.oracle.com/cd/E25054_01/network.1111/e16543/authentication.htm
http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions184.htm

Categories: DBA Blogs

Webinar: TekStream Offers Breakthrough in Contract Lifecycle Management for Oracle WebCenter

WebCenter Team - Mon, 2014-10-27 08:23

Contracts rule B2B relationships. Whether you’re a growing mid-market company or a large-scale global organization, you need an effective system to manage surges in contract volumes and ensure accuracy in reporting.


TekStream and Oracle would like to invite you to an introductory webinar on an exciting new engineered solution we are offering around Contract Lifecycle Management for Oracle WebCenter. This technology provides organizations with a consolidated and secure platform to logically ingest and organize contracts and supporting documents, with intuitive workflow processing for the total contract lifecycle. The webinar is scheduled for Thursday, October 30th at 2 PM EST.
Solution Summary
TekStream’s Contract Lifecycle Management (CLM) software is built on Oracle’s industry leading document management system, WebCenter Content, and is designed to seamlessly integrate with enterprise applications like JD Edwards, PeopleSoft and Oracle’s Enterprise Business Suite (EBS). Combining Oracle’s enterprise level applications with TekStream’s deep understanding of managing essential business information, delivers a contract management tool powerful enough to facilitate even the most complex processes. TekStream’s solution tracks and manages all aspects of your contract work streams from creation and approval to completion and expiration. Companies can rely on TekStream’s CLM to ensure compliance and close deals faster.
Join us to learn more about how our innovative new solution can address the cost and complexity of Contract Lifecycle Management and provide the following benefits:
Solution Benefits
  • Centralized repository for all in-process and executed contracts.
  • Increase efficiency through better control of the contract process.
  • Support for “Evergreen” contracts help to improve contract renewal rates.
  • Improve compliance to regulations and standards by providing clear and concise reporting of procedures and controls.

Log Buffer #394, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2014-10-27 08:22

This week’s log buffer edition collects some of the insightful blog posts from Oracle, SQL Server, and MySQL.

Oracle:

Oracle StorageTek T10000D Tape Drive achieves FIPS 140-2 Validation.

Oracle is a Leader in IDC MarketScape for Global Trade Management.

The Benefits of Integrating a Google Search Appliance with an Oracle WebCenter or Liferay Portal.

Maintenance Windows is too small? Autotask Jobs fail.

SOA Suite 12c: Querying LDAP directories using the LDAP Adapter.

SQL Server:

Regaining access to SQL server after changing the domain.

NuGet has transformed the ease of getting and installing the latest version of .NET packages, tools and frameworks.

The Mindset of the Enterprise DBA: Delegating Work.

Stairway to SQL PowerShell Level 8: SQL Server PowerShell Provider.

Finding a table name from a page ID.

MySQL:

A few weeks ago, Gharieb received an interesting Galera Cluster support case from one of our customers: the application was not working well and they faced a lot of trouble in their Galera Cluster setup.

Resources for Database Clusters: 9 DevOps Tips, ClusterControl 1.2.8 Release, HAProxy Webinar Replay & More.

Refactoring replication topology with Pseudo GTID.

Improvements to STRICT MODE in MySQL.

Monitoring progress and temporal memory usage of Online DDL in InnoDB.

Categories: DBA Blogs

OCP 12C – Oracle Data Redaction

DBA Scripts and Articles - Mon, 2014-10-27 07:00

What is Oracle Data Redaction? Oracle Data Redaction is meant to mask (redact) sensitive data returned by application queries. Oracle Data Redaction makes no change to the data on disk; the sensitive data is redacted on the fly before it is returned to the application. You can redact column data by using one of the following methods: Full [...]

The post OCP 12C – Oracle Data Redaction appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

PeopleTools 8.54 Feature: Application Engine Trace File Enhancements

Javier Delgado - Mon, 2014-10-27 03:01
In this blog, we have been reviewing the new features of PeopleTools 8.54. Today it is the turn of Application Engine, particularly its troubleshooting. This release of PeopleTools includes several enhancements to Application Engine tracing, which are outlined below:

  • The .AET trace file can now include the PeopleCode trace. This removes the need to check the .AET file for the non-PeopleCode steps and the .TRC file for the PeopleCode steps. Of course, .TRC files could also contain the SQL executed in non-PeopleCode steps if needed, but they were significantly more difficult to read, as the SQL statements were not formatted.


This new feature is enabled by setting the TraceAECombineOutput parameter in the server configuration file for Application Server or Process Scheduler Server.

TraceAECombineOutput=Y

  • You can set the maximum size of the Application Engine trace file. If the trace file exceeds the threshold, it is split into a separate file. For certain processes this can be quite handy, as trace sizes sometimes become unmanageable.

This new feature is enabled by setting the AETFileSize parameter in the server configuration file for Application Server or Process Scheduler Server. The size is measured in Megabytes.
AETFileSize=20

  • You can select which sections of an Application Engine program should be traced and which should not. This can help reduce unneeded trace information, focusing just on the potential error areas.


This new feature is enabled by setting the TraceAEEnableSection parameter in the server configuration file for Application Server or Process Scheduler Server.

TraceAEEnableSection=Y

Then, using Application Designer, you should mark the sections you want to trace. Keep in mind that by default all sections are unmarked.

In order to enable the flag in Application Designer, the Enable Section Trace(g) setting has to be enabled in Configuration Manager:


Note: As far as I can tell, you can only set this flag when you create a new section. If you need to modify an existing one, you would need to copy and paste, and then remove the original one. Have any of you found a more efficient way of setting the flag?

  • The Application Engine Trace file name now includes the Date/Time stamp.


These enhancements should simplify troubleshooting of Application Engine program issues, particularly those ones containing a significant amount of PeopleCode processing or generating very large trace files.

Oracle upgrades MySQL’s capabilities

Chris Foot - Mon, 2014-10-27 01:26

In the era of big data, database administration services are finding ways to work with NoSQL environments, Hadoop and other solutions. 

Yet, these professionals aren't so quick to write off the capabilities of open source relational databases. When Oracle purchased the rights to distribute MySQL, a few critics were a bit skeptical of the software giant's intentions. Evidently, the company intends to refine and improve MySQL as much as possible. 

MySQL Fabric 1.5
I Programmer noted one feature that allows DBAs to better manage collections of MySQL databases, dubbed MySQL Fabric 1.5. Through OpenStack, an open source cloud computing software platform used to support Infrastructure-as-a-Service solutions, Fabric 1.5 users can employ a wider range of sharding keys.

One of the most notable functions Fabric 1.5 offers is its ability to automatically detect failures and then fail over through MySQL Replication. Basically, if the server for the master database unexpectedly shuts down, Fabric 1.5 chooses a slave database to become the new master, as sketched below.
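
A hedged sketch of what that looks like with the mysqlfabric command-line client (group and host names are hypothetical, and the exact sub-commands should be checked against the Fabric 1.5 documentation):

mysqlfabric group create my_ha_group
mysqlfabric group add my_ha_group db1.example.com:3306
mysqlfabric group add my_ha_group db2.example.com:3306
mysqlfabric group promote my_ha_group     # elect a master for the group
mysqlfabric group activate my_ha_group    # enable automatic failure detection and failover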

Cloud provisions 
For various reasons, more organizations are hosting databases in cloud environments. Managing servers distributed across broad infrastructures requires tools that can quickly identify and assign tasks to particular machines, and Oracle has recognized this.

Christopher Tozzi, a contributor to The VAR Guy, noted that Oracle's Enterprise Manager can now support MySQL, allowing enterprises using the solution to better monitor and administer database functions for public and private cloud deployments. A statement released by the developer asserted that Enterprise Manager also allows users to migrate to MySQL technology.

Ulf Wendel's contribution 
Another feature DBAs are sure to benefit from is a plugin that enables Web and JavaScript developers to connect to MySQL using HTTP. Oracle Senior Software Engineer Ulf Wendel developed the add-on, which allows users to send SQL queries through URLs; the results are returned as a JSON document describing the columns, rows and sets involved.
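
As a rough illustration only (the URL shape below is an assumption based on the labs preview write-ups, not a documented stable API):

curl --user basic_auth_user:basic_auth_password \
    "http://127.0.0.1:8080/sql/mydb/SELECT+1"
# the response is a JSON document describing the result columns and rows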

The post Oracle upgrades MySQL’s capabilities appeared first on Remote DBA Experts.

First Rows

Jonathan Lewis - Mon, 2014-10-27 01:21

I received an email earlier this year asking my opinion of the first_rows option for the optimizer mode. My correspondent was looking at a database with the following settings:

optimizer_mode=first_rows
_sort_elimination_cost_ratio=4

He felt that first_rows was a very old optimizer instruction that might cause suboptimal execution plans in its attempt to avoid blocking operations. As for the cost ratio, no-one seemed to be able to explain why it was there.

He was correct; I’ve written about the first_rows option a few times in the past – it was left in for backwards compatibility, and has been reported as such from 9i onwards!

As for the _sort_elimination_cost_ratio – it’s (probably) there to work around the problems caused by first_rows optimisation when you have an ORDER BY clause that could be satisfied by walking an index, in competition with a WHERE clause that could be satisfied from another index. Under first_rows the optimizer is highly likely to choose the index for the ORDER BY to avoid sorting; but the _sort_elimination_cost_ratio says: “if the cost of using the index for the ORDER BY is more than N times the cost of using the other index for the WHERE clause then use the index on the WHERE clause and sort the result.”
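
To make the trade-off concrete, here is a hedged sketch (table, column and index names are hypothetical) of the kind of query the parameter arbitrates, and of how such a hidden parameter is set:

alter session set "_sort_elimination_cost_ratio" = 4;

select  *
from    orders
where   customer_id = 42     -- could use an index on (customer_id), then sort
order by
        order_date           -- or walk an index on (order_date) and avoid the sort
;

With the ratio set to 4, the optimizer gives up on the order_date index (and accepts the sort) as soon as its cost exceeds four times the cost of the customer_id index path.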

 The fact that the parameter has been set so low in this case suggests that the end-user:
  1. set first_rows because “it’s an OLTP system” – a myth that even the manuals promoted
  2. found lots of queries taking silly index-driven execution plans because they’d use (say) a primary key index to access and discard vast amounts of  data in the required order, instead of picking up a small amount of data using a better choice of index and then sorting it.
I’ve said it many times in the past: you probably don’t need any first_rows(n) or first_rows_N optimisation, but if you’re using first_rows (i.e. the old option) you really ought to get away from it. Depending on the time you have for testing and your aversion to risk, you might go straight to all_rows, or switch to first_rows_10. (First_rows_1 can be over-aggressive and introduce some of the same side effects as first_rows.)