Feed aggregator

list database monitoring users

Laurent Schneider - Wed, 2015-06-10 10:34

I am quite familiar with the SYSMAN tables, but this one required some googling beyond the Oracle documentation.

The list of targets in your Oracle Enterprise Manager is in SYSMAN.MGMT_TARGETS. Each database target is monitored by a database user, typically DBSNMP.

To retrieve this information, you need to somewhat compromise your repository security; read this post by Gökhan Atil.

  1. you copy your encryption key to your repository database, on the OMS server
    $ emctl config emkey -copy_to_repos
    Enter Enterprise Manager Root (SYSMAN) Password :

    Now anyone with SELECT ANY TABLE on your repository will see all the passwords. You don’t want to do this, but unfortunately you have to, because even the username is encrypted.

  2. you decrypt the credentials for db monitoring
    SELECT *
    FROM (
      SELECT target_name,
        sysman.em_crypto.decrypt (
          c.cred_attr_value,
          c.cred_salt) cred,
        cred_attr_name attr
      FROM sysman.EM_TARGET_CREDS -- assumption: the FROM line was lost in formatting; this repository table links targets to credential sets
      JOIN SYSMAN.mgmt_targets t USING (target_guid)
      JOIN sysman.EM_NC_CRED_COLUMNS c USING (cred_guid)
      WHERE c.target_type = 'oracle_database'
      AND c.set_name = 'DBCredsMonitoring'
    ) PIVOT (
      MAX (cred)
      FOR (attr) IN (
        'DBUserName' AS USERNAME,
        'DBRole' AS "ROLE")
    );

    TARGET_NAME USERNAME ROLE
    ----------- -------- ------
    DB01        dbsnmp   NORMAL
    DB02        dbsnmp   NORMAL
    DB03        sys      SYSDBA

  3. you remove the security leak
    $ emctl config emkey -remove_from_repos
    Enter Enterprise Manager Root (SYSMAN) Password :

Now em_crypto won’t work any more:

from dual
Error at line 2
ORA-28239: no key provided
ORA-06512: at "SYS.DBMS_CRYPTO_FFI", line 67
ORA-06512: at "SYS.DBMS_CRYPTO", line 44
ORA-06512: at "SYSMAN.EM_CRYPTO", line 250
ORA-06512: at line 1

This information could be used to change the password dynamically across all databases.

emcli login \
  -username=sysman
emcli update_db_password \
  -target_name=DB01 \
  -user_name=dbsnmp \
  -change_at_target=yes \
  -old_password=oldpw \
  -new_password=newpw

APEX Connect Presentation and Download of the sample application

Dietmar Aust - Wed, 2015-06-10 10:30
Hi guys,

I have just finished my presentation on the smaller new features of Oracle Application Express 5.0 here at the APEX Connect conference in Düsseldorf ... it was a blast :).

You can download the slides and the sample application here:

Cheers, ~Dietmar. 

Can you have pending system statistics?

Yann Neuhaus - Wed, 2015-06-10 09:08

Your system statistics seem to be wrong and you want to gather or set more relevant ones. But you don't want to see all your application execution plans changing between nested loops and hash joins. For object statistics, we can gather statistics in pending mode, test them in a few sessions, and publish them when we are happy with them. But can you do the same for system statistics? It can be risky to try, so I've done it for you in my lab.

Test case in 11g


SQL> select banner from v$version where rownum=1;

Oracle Database 11g Enterprise Edition Release - Production

SQL> create table DEMO as
           select rownum id , ora_hash(rownum,10) a , ora_hash(rownum,10) b , lpad('x',650,'x') c 
           from xmltable('1 to 100000');

Table created.

Here are my system statistics:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2
is not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2719
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 08:11
                     SYSSTATS_INFO    DSTOP                 06-10-2015 08:11
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

I check a full table scan cost:

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO1;

Execution Plan
Plan hash value: 4000794843

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      | 88550 |    30M|  2752   (1)| 00:00:34 |
|   1 |  TABLE ACCESS FULL| DEMO | 88550 |    30M|  2752   (1)| 00:00:34 |

No surprise here. I have 10000 blocks in my table, SREADTIM = IOSEEKTIM + db_block_size / IOTFRSPEED = 12 ms and MREADTIM = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED = 26 ms. Then the cost, based on an MBRC of 8, is ( 26 * 10000 / 8 ) / 12, which is roughly 2700.


Pending stats in 11g

I set 'PUBLISH' to false in order to have pending statistics:

SQL> exec dbms_stats.SET_GLOBAL_PREFS('PUBLISH', 'FALSE') ;

PL/SQL procedure successfully completed.

Then I set some system statistics manually to simulate a fast storage:

17:14:38 SQL> exec dbms_stats.set_system_stats('IOSEEKTIM',1);

PL/SQL procedure successfully completed.

17:14:38 SQL> exec dbms_stats.set_system_stats('IOTFRSPEED','204800');

PL/SQL procedure successfully completed.

and I run the same explain plan:

Execution Plan
Plan hash value: 4000794843

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      | 88550 |    30M|  1643   (2)| 00:00:02 |
|   1 |  TABLE ACCESS FULL| DEMO | 88550 |    30M|  1643   (2)| 00:00:02 |

The cost is better. But I'm not using pending statistics, which means that the published stats have been changed, despite the PUBLISH global preference set to FALSE:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 i
s not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2719
                     SYSSTATS_MAIN    IOSEEKTIM           1
                     SYSSTATS_MAIN    IOTFRSPEED     204800
                     SYSSTATS_INFO    DSTART                06-10-2015 08:14
                     SYSSTATS_INFO    DSTOP                 06-10-2015 08:14
                     SYSSTATS_INFO    FLAGS               1
                     SYSSTATS_INFO    STATUS                COMPLETED

As you see, SYS.AUX_STATS$ shows my modified values (note that the date/time did not change, by the way). So be careful: when you set, gather, or delete system statistics in 11g, you don't have the pending/publish mechanism. It's the kind of change that may have a wide impact, changing all your execution plans.


With the values I've set the SREADTIM is near 1 ms and MREADTIM is about 1.3 ms so the cost is ( 1.3 * 10000 / 8 ) / 1 = 1625 which is roughly what has been calculated by the CBO on my new not-so-pending statistics.
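The two manual computations above can be sanity-checked with a short script. This is only a sketch of the noworkload I/O cost formula (assuming an 8k block size, 10000 blocks and an MBRC of 8, as above); the CBO adds CPU cost and its own rounding on top, which is why its figures (2752 and 1643) come out a bit higher:

```python
# Noworkload system statistics: SREADTIM and MREADTIM are derived from
# IOSEEKTIM and IOTFRSPEED, and the full table scan cost is expressed
# in units of single-block reads.
def fts_cost(blocks, mbrc, ioseektim, iotfrspeed, block_size=8192):
    sreadtim = ioseektim + block_size / iotfrspeed         # single-block read time (ms)
    mreadtim = ioseektim + block_size * mbrc / iotfrspeed  # multi-block read time (ms)
    return (mreadtim * blocks / mbrc) / sreadtim

print(round(fts_cost(10000, 8, 10, 4096)))   # default stats -> 2708
print(round(fts_cost(10000, 8, 1, 204800)))  # "fast storage" stats -> 1587
```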


If you look at 12c you will see new procedures in dbms_stats which suggest that you can have pending system statistics:

SQL> select banner from v$version where rownum=1;

Oracle Database 12c Enterprise Edition Release - 64bit Production

SQL> select procedure_name from dba_procedures where object_name='DBMS_STATS' and procedure_name like '%PENDING%';


but be careful, they are not documented. Let's try it anyway. I start as I did above, with a demo table and default statistics:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not nul
l order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2725
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 17:25
                     SYSSTATS_INFO    DSTOP                 06-10-2015 17:25
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO1;

Execution Plan
Plan hash value: 4000794843

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  2752   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  2752   (1)| 00:00:01 |

I set PUBLISH to false and set manual system stats:

SQL> exec dbms_stats.SET_GLOBAL_PREFS('PUBLISH', 'FALSE') ;

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_system_stats('IOSEEKTIM',1);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_system_stats('IOTFRSPEED','204800');

PL/SQL procedure successfully completed.

and I check the SYS.AUX_STATS$ table:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not nul
l order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2725
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 17:25
                     SYSSTATS_INFO    DSTOP                 06-10-2015 17:25
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

Good ! I still have the previous values here. The new stats have not been published.


The pending stats are stored in the history table, with a date in the future:

SQL> select savtime,sname,pname,pval1,pval2 from sys.wri$_optstat_aux_history where pval1 is not null or pval2
 is not null and savtime>sysdate-30/24/60/60 order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
01-dec-3000 01:00:00 SYSSTATS_MAIN    CPUSPEEDNW       2725
01-dec-3000 01:00:00 SYSSTATS_MAIN    IOSEEKTIM          10
01-dec-3000 01:00:00 SYSSTATS_MAIN    IOTFRSPEED     204800
01-dec-3000 01:00:00 SYSSTATS_INFO    DSTART                06-10-2015 17:29
01-dec-3000 01:00:00 SYSSTATS_INFO    DSTOP                 06-10-2015 17:29
01-dec-3000 01:00:00 SYSSTATS_INFO    FLAGS               1
01-dec-3000 01:00:00 SYSSTATS_INFO    STATUS                COMPLETED

That's perfect. It seems that I can gather system statistics without publishing them. And I don't care about the Y3K bug yet.


12c use pending stats = true

First, I'll check that a session can use the pending stats if chosen explicitly:

SQL> alter session set optimizer_use_pending_statistics=true;

Session altered.

then I run the query:

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO2;

Execution Plan
Plan hash value: 4000794843

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  1308   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  1308   (1)| 00:00:01 |

The cost is lower. This is exactly what I expected with my new, unpublished, statistics. Good. I don't know why it's lower than in 11g; maybe the formula has changed. This is another place for comments ;)


12c use pending stats = false

Ok I checked that the published statistics are the same as before, but let's try to use them:

SQL> alter session set optimizer_use_pending_statistics=false;

Session altered.

and once again run the same query:

SQL> set autotrace trace explain

SQL> select * from DEMO DEMO3;

Execution Plan
Plan hash value: 4000794843

| Id  | Operation         | Name | Rows  | Bytes | Cost  |
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  1541 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  1541 |

   - cpu costing is off (consider enabling it)

Oh. There is a problem here. 'cpu costing is off' means that there are no system statistics. The cost has been calculated as if we were in an old version without system statistics. This is bad: I have gathered pending statistics, not published them, but all sessions now have their costing changed.



Just a look at the 10053 trace shows that I have a problem:

System Stats are INVALID.
  Table: DEMO  Alias: DEMO3
    Card: Original: 80500.000000  Rounded: 80500  Computed: 80500.000000  Non Adjusted: 80500.000000
  Scan IO  Cost (Disk) =   1541.000000
  Scan CPU Cost (Disk) =   0.000000
  Total Scan IO  Cost  =   1541.000000 (scan (Disk))
                       =   1541.000000
  Total Scan CPU  Cost =   0.000000 (scan (Disk))
                       =   0.000000
  Access Path: TableScan
    Cost:  1541.000000  Resp: 1541.000000  Degree: 0
      Cost_io: 1541.000000  Cost_cpu: 0
      Resp_io: 1541.000000  Resp_cpu: 0
  Best:: AccessPath: TableScan
         Cost: 1541.000000  Degree: 1  Resp: 1541.000000  Card: 80500.000000  Bytes: 0.000000

It seems that with pending statistics the optimizer can't simply get the published values, and falls back as if there were no system statistics at all. This is obviously a bug. I've not used the undocumented new procedures; they were used in the background, but it's totally supported to set PUBLISH to FALSE and then gather system statistics. The behavior should be either the same as in 11g (publishing the gathered stats), or gathering into pending stats only, with sessions continuing to use the published ones by default.



In 11g, be careful, system statistic changes are always published.

In 12c, don't gather system statistics when PUBLISH is set to false. We can expect pending system statistics as a nice new feature in a future version, but for the moment it messes up everything. I'll not open an SR yet, but I hope it'll be fixed.


Further investigation was done by Stefan Koehler in this Twitter conversation:

@FranckPachot IOSEEKTIM=1 is not accepted/set. Reason for cost drop to 1308 in case of pending SYS stats … 1/2

— Stefan Koehler (@OracleSK) June 11, 2015

Pillars of PowerShell: Windows OS

Pythian Group - Wed, 2015-06-10 06:42

This is the fifth blog post continuing the series on the Pillars of PowerShell. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling

The Windows Operating System (OS) is something a DBA should know and be familiar with since SQL Server has to run on top of it. I would say that on average most DBAs interact with the OS for troubleshooting purposes. In this post I just want to point out a few snippets of how PowerShell can help you do this type of work.

 Services Console Manager

In the SQL Server 2000 days DBAs became very familiar with typing “services.msc” into the Run prompt, then scrolling through the list of services to find out what state a service is in, or what login a particular service is configured with. Now, if you are performing administrative tasks against SQL Server services it is always advised that you use SQL Server Configuration Manager. However, if you just want to check the status of a service, or restart it, PowerShell can help out.


This cmdlet has a few discrepancies that it helps to understand upfront when you start using PowerShell instead of the Services Console. In the Services Console you find a service by its “Name”; this is the “DisplayName” in the Get-Service cmdlet. The “Name” in Get-Service is actually the “Service Name” in the Services Console; do you follow? OK. So with SQL Server the DisplayName for a default instance would be “SQL Server (MSSQLSERVER)”, and the “Name” would be “mssqlserver”. This cmdlet allows you to filter by either field, so the two commands below return the same thing:

Get-Service 'SQL Server (MSSQLSERVER)'
Get-Service mssqlserver

You can obviously see which one is easier to type right off. With SQL Server you will likely know that a default instance’s service name is “mssqlserver”, and a named instance’s would be “mssql$myinstance”. So if you wanted to find all of the instances running on a server you could use this one-liner:

Get-Service mssql*

The restart counterpart, Restart-Service, does exactly what you think it will, so you have to be careful. You can call this cmdlet by itself and restart a service by referencing the “Name” just as you did with Get-Service, but I want to show you how the pipeline can work for you in this situation. You will find some cmdlets in PowerShell that have a few “special” features; the service cmdlets are included in this category: they accept an array as the input object, either for the parameter or via the pipeline.

So, let’s use the example that I have a server with multiple instances of SQL Server, and all the additional components like SSRS and SSIS. I only want to work with the named instance “SQL12”. I can get the status of all component services with this command:

Get-Service -Name 'MSSQL$SQL12','ReportServer$SQL12','SQLAgent$SQL12','MsDtsServer110'

Now if I need to do a controlled restart of all of those services I can just do this command:

Get-Service -Name 'MSSQL$SQL12','ReportServer$SQL12','SQLAgent$SQL12','MsDtsServer110' |
Restart-Service -Force -WhatIf

The added “-WhatIf” will not actually perform the operation but tell you what it would end up doing. Once I remove that the restart would actually occur. All of this would look something like this in the console:

Win32_Service

Some of you may recognize this one as a WMI class, and it is. Using WMI offers you a bit more information than the Get-Service cmdlet. You can see that by just running this code:

Get-Service mssqlserver
Get-WmiObject win32_service | where {$_.Name -eq 'mssqlserver'}

The two commands above equate to the same referenced service but return slightly different bits of information by default:


However, if you run the command below, you will see how gathering service info with WMI offers much more potential:

Get-WmiObject win32_service | where {$_.Name -eq 'mssqlserver'} | select *

Get-Service will not actually give you the service account. So here is one function I use often (saved in my profile):

function Get-SQLServiceStatus ([string[]]$server)
{
 foreach ($s in $server) {
 Get-WmiObject win32_service -ComputerName $s |
	where {$_.DisplayName -match "SQL "} |
	select @{Label="ServerName";Expression={$s}},
	DisplayName, Name, State, Status, StartMode, StartName
 }
}

One specific thing I did in this function is declare the type of the parameter you pass into it. Using “[string[]]” means the parameter accepts an array or multiple objects. You also have to ensure the function is written in a manner that can process the array, which I did simply by wrapping the commands in a “foreach” loop. So an example use of this against a single server would be:

Get-SQLServiceStatus -server 'MyServer'

If you wanted to run this against multiple servers it would go something like this:

Get-SQLServiceStatus -server 'MyServer','MyServer2','MyServer3' | Out-GridView
#another option
$serverList = 'MyServer','MyServer2','MyServer3'
Get-SQLServiceStatus -server $serverList | Out-GridView
Disk Manager

Every DBA should be very familiar with this management console and can probably get to it blindfolded. You might use this or “My Computer” when you need to see how much free space there is on a drive. If you happen to be working in an environment that only has Windows Server 2012 and Windows 8 or higher, I wish I was there with you. PowerShell 4.0 and higher offers storage cmdlets that let you get information about your disks and volumes much more easily, and cleanly. They actually use CIM (Common Information Model), which is what WMI is built upon; I read somewhere that basically “WMI is just Microsoft’s way of implementing CIM”. Microsoft is obviously going back to the standard, as it has done in other areas. It is worth learning more about, and it actually allows you to connect to a PowerShell 2.0 machine to get the same information.

Anyway, back to the task at hand. If you are working on PowerShell 3.0 or lower, you can use Get-WmiObject with win32_volume to get information similar to what the storage cmdlet Get-Volume returns in 4.0:

Get-WmiObject win32_volume | select DriveLetter, Label, FileSystem,
@{Label="SizeRemaining";Expression={"{0:N2}" -f($_.FreeSpace/1GB)}},
@{Label="Size";Expression={"{0:N2}" -f($_.Capacity/1GB)}} | Format-Table
Windows Event Viewer

Almost everyone is familiar with and knows their way around the Windows Event Viewer. I actually left this for last for a reason: I want to walk you through an example that I think will help “put it all together” on what PowerShell can do for you. Our scenario is dealing with a server that had an unexpected restart. There are times when I get paged by our Avail Monitoring product for a customer’s site, and I need to find out who or why the server restarted. The most common place to go for this is the Event Log.


If you just want to go through Event Viewer and manually find events, and it is a remote server, I find this to be the quickest method:

Show-EventLog -ComputerName Server1

This command will open Event Viewer and go through the process of connecting you to “Server1”. No more right-clicking and selecting “Connect to another computer”!


I prefer to just dig into searching for events, and this is where Get-EventLog comes in handy. You can call this cmdlet and provide:

  1. The specific log to look in (System, Application, or Security most commonly)
  2. A time range
  3. A specific entry type (error, information, warning, etc.)

In Windows Server 2003 Microsoft added a group policy, “Shutdown Event Tracker”, which if enabled writes particular events to the System Log when a server restarts, whether planned or unplanned. After an unplanned restart, the first user to log into the server gets a prompt about the unexpected shutdown. With a planned restart, a similar prompt has to be filled in before the restart will occur. What you can do with this cmdlet is search for those messages in the System Log.

To find the planned you would use:

Get-EventLog -LogName System -Message "*restart*" -ComputerName Server1 |
select * -First 1

Then to find the unplanned simply change “*restart*” to “*shutdown*”:


In this instance I found that SSIS and SSRS failed to start back up. I found this by checking the status of the SQL Server services using my custom function, Get-SQLServiceStatus:


To search for events after the shutdown I need to find the first event written to the Event Log when a server starts up, which comes from the EventLog source. I can then use its time stamp as a starting point to search for messages about the SQL Server services that did not start correctly. I just take the TimeGenerated of that event and pass it into the Get-EventLog cmdlet to pull up error events, with this bit of code:

$t = Get-EventLog -LogName System -Source EventLog -Message "*shutdown*" | select * -First 1
Get-EventLog -LogName System -Before $t.TimeGenerated -Newest 5 -EntryType Error |
select TimeGenerated, Source, Message | Format-Table -Wrap
Summary

I hope you found this post useful and it gets you excited about digging deeper into PowerShell. In the next post I will close out the series by digging into SQL Server and a few areas where PowerShell can help.


Learn more about our expertise in SQL Server.

Categories: DBA Blogs

Hadoop generalities

DBMS2 - Wed, 2015-06-10 06:33

Occasionally I talk with an astute reporter — there are still a few left :) — and get led toward angles I hadn’t considered before, or at least hadn’t written up. A blog post may then ensue. This is one such post.

There is a group of questions going around that includes:

  • Is Hadoop overhyped?
  • Has Hadoop adoption stalled?
  • Is Hadoop adoption being delayed by skills shortages?
  • What is Hadoop really good for anyway?
  • Which adoption curves for previous technologies are the best analogies for Hadoop?

To a first approximation, my responses are: 

  • The Hadoop hype is generally justified, but …
  • … what exactly constitutes “Hadoop” is trickier than one might think, in at least two ways:
    • Hadoop is much more than just a few core projects.
    • Even the core of Hadoop is repeatedly re-imagined.
  • RDBMS are a good analogy for Hadoop.
  • As a general rule, Hadoop adoption is happening earlier for new applications, rather than in replacement or rehosting of old ones. That kind of thing is standard for any comparable technology, both because enabling new applications can be valuable and because migration is a pain.
  • Data transformation, as pre-processing for analytic RDBMS use, is an exception to that general rule. That said …
  • … it’s been adopted quickly because it saves costs. But of course a business that’s only about cost savings may not generate a lot of revenue.
  • Dumping data into a Hadoop-centric “data lake” is a smart decision, even if you haven’t figured out yet what to do with it. But of course, …
  • … even if zero-application adoption makes sense, it isn’t exactly a high-value proposition.
  • I’m generally a skeptic about market numbers. Specific to Hadoop, I note that:
    • The most reliable numbers about Hadoop adoption come from Hortonworks, since it is the only pure-play public company in the market. (Compare, for example, the negligible amounts of information put out by MapR.) But Hortonworks’ experiences are not necessarily identical to those of other vendors, who may compete more on the basis of value-added service and technology rather than on open source purity or price.
    • Hadoop (and the same is true of NoSQL) is most widely adopted at digital companies rather than at traditional enterprises.
    • That said, while all traditional enterprises have some kind of digital presence, not all have ones of the scope that would mandate a heavy investment in internet technologies. Large consumer-oriented companies probably do, but companies with more limited customer bases might not be there yet.
  • Concerns about skill shortages are exaggerated.
    • The point of distributed processing frameworks such as Spark or MapReduce is to make distributed analytic or application programming not much harder than any other kind.
    • If a new programming language or framework needs to be adopted — well, programmers nowadays love learning that kind of stuff.
    • The industry is moving quickly to make distributed systems easier to administer. Any skill shortages in operations should prove quite temporary.
Categories: Other

Flame Graph for quick identification of Oracle bug

Yann Neuhaus - Wed, 2015-06-10 04:28

Most of my performance stories start with a screenshot of Orachrome Lighty, my preferred tool for a graphical view of database performance, in Standard and Enterprise Edition without any options:


Bluemix - Adding a Spring Boot application to IBM Bluemix DevOps project

Pas Apicella - Wed, 2015-06-10 00:15
I have a few Spring Boot applications which I would like to add to my IBM DevOps Jazzhub projects. The following shows how to do this.

Note: It's assumed you have the following:
  • Jazzhub DevOps account.
  • Existing Spring Boot application project
  • Git client installed
1. Log into Jazz Hub using the URL below.

2. Create a new project using the "+ Create Project" button

3. Call it BluemixSpringBootJPA, of course you can call your project whatever you like.

4. Click the "Create a New Repository"

5. Select "Create a Git Repo on Bluemix"

Now go to the file system where your project exists, add it to Git locally, and finally push it to the remote Git URL created above.

pas@pass-mbp:~/ibm/DemoProjects/spring-starter/jazzhub/BluemixSpringBootJPA$ git init
Initialized empty Git repository in /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/BluemixSpringBootJPA/.git/

pas@pass-mbp:~/ibm/DemoProjects/spring-starter/jazzhub/BluemixSpringBootJPA$ git add .

pas@pass-mbp:~/ibm/DemoProjects/spring-starter/jazzhub/BluemixSpringBootJPA$ git commit -m 'First commit'
[master (root-commit) 332755e] First commit
 35 files changed, 866 insertions(+)
 create mode 100644
 create mode 100644 manifest.yml
 create mode 100644 pom.xml
 create mode 100644 src/main/java/pas/cloud/webapp/
 create mode 100644 src/main/java/pas/cloud/webapp/
 create mode 100644 src/main/java/pas/cloud/webapp/controllers/
 create mode 100644 src/main/java/pas/cloud/webapp/controllers/
 create mode 100644 src/main/java/pas/cloud/webapp/domain/
 create mode 100644 src/main/java/pas/cloud/webapp/domain/
 create mode 100644 src/main/java/pas/cloud/webapp/domain/
 create mode 100644 src/main/java/pas/cloud/webapp/repositories/
 create mode 100644 src/main/resources/
 create mode 100644 src/main/resources/data.sql
 create mode 100644 src/main/resources/
 create mode 100644 src/main/resources/static/images/Create.png
 create mode 100755 src/main/resources/static/images/Execute.png
 create mode 100644 src/main/resources/static/images/PoweredByPivotal1.png
 create mode 100755 src/main/resources/static/images/Search.png
 create mode 100755 src/main/resources/static/images/add16.gif
 create mode 100755 src/main/resources/static/images/b_drop.png
 create mode 100644 src/main/resources/static/images/b_home.png
 create mode 100644 src/main/resources/static/images/b_props.png
 create mode 100755 src/main/resources/static/images/key.png
 create mode 100755 src/main/resources/static/images/s_error.png
 create mode 100755 src/main/resources/static/images/s_info.png
 create mode 100755 src/main/resources/static/images/s_notice.png
 create mode 100755 src/main/resources/static/images/s_success.png
 create mode 100644 src/main/resources/static/images/s_tbl.png
 create mode 100644 src/main/resources/templates/albums.html
 create mode 100644 src/main/resources/templates/editalbum.html
 create mode 100644 src/main/resources/templates/error.html
 create mode 100644 src/main/resources/templates/footer.html
 create mode 100644 src/main/resources/templates/newalbum.html
 create mode 100644 src/main/resources/templates/welcome.html
 create mode 100644 src/test/java/pas/cloud/webapp/

pas@pass-mbp:~/ibm/DemoProjects/spring-starter/jazzhub/BluemixSpringBootJPA$ git remote add origin

pas@pass-mbp:~/ibm/DemoProjects/spring-starter/jazzhub/BluemixSpringBootJPA$ git remote -v
origin (fetch)
origin (push)

pas@pass-mbp:~/ibm/DemoProjects/spring-starter/jazzhub/BluemixSpringBootJPA$ git commit -m 'Update'
[master 5c32ea7] Update
pas@pass-mbp:~/ibm/DemoProjects/spring-starter/jazzhub/BluemixSpringBootJPA$ git push origin master
Username for '': pasapples
Password for '':
Counting objects: 58, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (49/49), done.
Writing objects: 100% (58/58), 32.88 KiB | 0 bytes/s, done.
Total 58 (delta 6), reused 0 (delta 0)
remote: Resolving deltas: 100% (6/6)
remote: Processing changes: refs: 1, done
   8bcea42..5c32ea7  master -> master

Finally the project exists in Jazzhub and can be forked as required.

So if you wanted to fork this project here is the URL to it.

More Information

For more information on the IBM dev ops service use the link below.
Categories: Fusion Middleware

quickly exchange code or text between workstations or teams

Yann Neuhaus - Tue, 2015-06-09 17:20
In a recent project I faced the following situation: on the one hand I had to execute scripts on a customer's workstation, while on the other hand I had to integrate the results of these scripts into a report on my own workstation. The question was how to do this efficiently without sending dozens of mails to myself.

Bug 13914613 Example Shared Pool Latch Waits

Bobby Durrett's DBA Blog - Tue, 2015-06-09 17:18

Oracle support says we have hit bug 13914613.  Here is what our wait events looked like in an AWR report:

Top 5 Timed Foreground Events

Event                     Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
latch: shared pool        3,497      17,482   4999           38.83      Concurrency
latch: row cache objects  885        12,834   14502          28.51      Concurrency
db file sequential read   1,517,968  8,206    5              18.23      User I/O
DB CPU                               4,443                   9.87
library cache: mutex X    7,124      2,639    370            5.86       Concurrency

What really struck me about these latch waits were that the average wait time was several thousand milliseconds which means several seconds.  That’s a long time to wait for a latch.
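Those per-wait averages follow directly from the other two columns: Avg wait (ms) = Time(s) / Waits * 1000. A quick throwaway check, using only the numbers from the AWR table above:

```python
# Sanity-check the AWR "Avg wait (ms)" column: Time(s) / Waits * 1000.
# Figures taken from the Top 5 Timed Foreground Events table above.
events = {
    "latch: shared pool":       (3497, 17482),   # (waits, time in seconds)
    "latch: row cache objects": (885, 12834),
}
for name, (waits, time_s) in events.items():
    avg_ms = time_s / waits * 1000
    print(f"{name}: {avg_ms:.0f} ms per wait")
# latch: shared pool: 4999 ms per wait
# latch: row cache objects: 14502 ms per wait
```

So each shared pool latch wait really did average about five full seconds, which is why the database appeared hung.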

Oracle pointed to the Latch Miss Sources section of the AWR.  This is all gibberish to me.  I guess these are the names of internal kernel functions.

Latch Miss Sources

Latch Name   Where                         NoWait Misses  Sleeps  Waiter Sleeps
shared pool  kghfrunp: clatch: wait        0              1,987   1,956
shared pool  kghfrunp: alloc: session dur  0              1,704   1,364

Bug description says “Excessive time holding shared pool latch in kghfrunp with auto memory management” so I guess the “kghfrunp” latch miss sources told Oracle support that this was my issue.

I did this query to look for resize operations:

  2  to_char(start_time,'dd-mon hh24:mi:ss') Started,
  3  to_char(end_time,'dd-mon hh24:mi:ss') Ended

COMPONENT                 OPER_TYPE               FINAL STARTED                   ENDED
------------------------- ------------- --------------- ------------------------- -------------------------
DEFAULT 2K buffer cache   STATIC                      0 12-may 04:33:01           12-may 04:33:01
streams pool              STATIC            134,217,728 12-may 04:33:01           12-may 04:33:01
ASM Buffer Cache          STATIC                      0 12-may 04:33:01           12-may 04:33:01
DEFAULT buffer cache      INITIALIZING   10,401,873,920 12-may 04:33:01           12-may 04:33:08
DEFAULT 32K buffer cache  STATIC                      0 12-may 04:33:01           12-may 04:33:01
KEEP buffer cache         STATIC          2,147,483,648 12-may 04:33:01           12-may 04:33:01
shared pool               STATIC         13,958,643,712 12-may 04:33:01           12-may 04:33:01
large pool                STATIC          2,147,483,648 12-may 04:33:01           12-may 04:33:01
java pool                 STATIC          1,073,741,824 12-may 04:33:01           12-may 04:33:01
DEFAULT buffer cache      STATIC         10,401,873,920 12-may 04:33:01           12-may 04:33:01
DEFAULT 16K buffer cache  STATIC                      0 12-may 04:33:01           12-may 04:33:01
DEFAULT 8K buffer cache   STATIC                      0 12-may 04:33:01           12-may 04:33:01
DEFAULT 4K buffer cache   STATIC                      0 12-may 04:33:01           12-may 04:33:01
RECYCLE buffer cache      STATIC                      0 12-may 04:33:01           12-may 04:33:01
KEEP buffer cache         INITIALIZING    2,147,483,648 12-may 04:33:02           12-may 04:33:04
DEFAULT buffer cache      SHRINK         10,334,765,056 20-may 21:00:12           20-may 21:00:12
shared pool               GROW           14,025,752,576 20-may 21:00:12           20-may 21:00:12
shared pool               GROW           14,092,861,440 27-may 18:06:12           27-may 18:06:12
DEFAULT buffer cache      SHRINK         10,267,656,192 27-may 18:06:12           27-may 18:06:12
shared pool               GROW           14,159,970,304 01-jun 09:07:35           01-jun 09:07:36
DEFAULT buffer cache      SHRINK         10,200,547,328 01-jun 09:07:35           01-jun 09:07:36
DEFAULT buffer cache      SHRINK         10,133,438,464 05-jun 03:00:33           05-jun 03:00:33
shared pool               GROW           14,227,079,168 05-jun 03:00:33           05-jun 03:00:33
DEFAULT buffer cache      SHRINK         10,066,329,600 08-jun 11:06:06           08-jun 11:06:07
shared pool               GROW           14,294,188,032 08-jun 11:06:06           08-jun 11:06:07

The interesting thing is that our problem ended right about the time the last shared pool expansion supposedly started.  The latch waits hosed up our database for several minutes and it ended right about 11:06.  I suspect that the system was hung up with the bug and then once the bug finished then the normal expansion work started.  Or, at least, the time didn’t get recorded until after the bug finished slowing us down.

So, I guess it’s just a bug.  This is on HP-UX Itanium.  I believe there is a patch set with the fix for this bug.

Maybe it will be helpful for someone to see an example.

– Bobby

Categories: DBA Blogs

Did your organization recently purchase Oracle WebCenter Content? Are you the new admin? Consider these 4 tools to successfully manage and administer the system

Congratulations! Your organization has made an investment in a leading enterprise content management and portal system, and even better, you get to manage and administer the system. Lucky you, right? As long as users can access the system, find what they need, and receive important, system-generated notifications that are relevant to them, they will generally be happy and leave you alone, right?

Unfortunately, a complete state of end-user bliss doesn’t exist – if it did, there might not be a need for system administrators. The reason for this is there will be many different personas that will access and use the system, and each will have their own set of likes, dislikes and complaints. For example, some users won’t like the interface (most popular complaint). Others will complain about the (perceived) lack of features and functionality. Regardless, as the system administrator you will not be able to satisfy all user requests, but with Fishbowl’s Administration Suite  you will be able to ensure WebCenter users get the most out of inherent features and system functionality.

Administration (Admin) Suite brings together Fishbowl’s most popular components for WebCenter automation. The reason for their popularity is that they truly help administrators be more efficient when performing the most common and repetitive tasks within Oracle WebCenter Content, and they provide additional functionality that delivers more business value to organizations. These include rule-based security mappings to provide users with the right level of access (read, read/write, read/write/delete, etc.), custom email notifications for content subscriptions, scheduling and off-loading the process of bulk loading content into WebCenter, and several workflow features to aid in workflow creation, review and auditing. Let’s take a closer look at each of these components.

Advanced User Security Mapping (AUSM)

  • Overview:
    • AUSM provides rule-based configuration to integrate external user sources (LDAP or Active Directory) with Oracle WebCenter. Rules can be created to assign aliases to users based on their directory information, and this information can be directly imported into WebCenter. AUSM also provides reporting capabilities to quickly audit user access and troubleshoot permission issues.
  • Business Problems it solves:
    • Decreases the time it takes for administrators to integrate an enterprise security model with Oracle WebCenter. No more creating multiple (sometimes hundreds of…) mappings between LDAP groups and roles in WebCenter
    • Enables administrators to quickly troubleshoot user access issues
    • Accelerates new user access to content in the system by not having to wait until users log in
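AUSM's actual rule syntax is proprietary, but the general idea of rule-based group-to-role mapping can be sketched as follows; the rule table, group names, and roles here are all hypothetical, invented purely for illustration:

```python
import fnmatch

# Hypothetical rule table: LDAP group patterns -> (WebCenter role, permission).
# A single wildcard rule can replace many one-to-one group mappings.
RULES = [
    ("cn=finance-*", ("FinanceRole", "RW")),
    ("cn=hr-admins", ("HRRole", "RWD")),
    ("cn=*",         ("GuestRole", "R")),   # catch-all: read-only access
]

def map_groups(ldap_groups):
    """Return the set of (role, permission) grants produced by the rules."""
    grants = set()
    for group in ldap_groups:
        for pattern, grant in RULES:
            if fnmatch.fnmatch(group, pattern):
                grants.add(grant)
                break                        # first matching rule wins
    return grants

print(map_groups(["cn=finance-emea", "cn=hr-admins"]))
```

The point of the sketch is the leverage: one pattern rule covers every group it matches, instead of one mapping per group.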

Subscription Notifier

  • Overview:
    • Subscription Notifier provides an intuitive interface for administrators to create subscriptions for tracking and managing business content. This is done through queries that can be scheduled to run at various intervals. For example, you can create queries to notify a contract manager at 90, 60 and 30 days before a vendor contract expires. You can also create queries that notify content owners that content is X days old and should be reviewed.
  • Business problem it solves:
    • Ensures internal and external stakeholders have visibility, and have enough time to respond, to when high-value content is set to expire (contracts, etc.)
    • Helps avoid duplication of effort by alerting teams when new content is available – sales teams get notified when new marketing content is checked in, for example
    • Provides owners of web content with triggers to update, create new or delete – this can help keep content on the site fresh and new which is important for SEO
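The 90/60/30-day contract-expiry example above boils down to a scheduled query over expiration dates. A minimal sketch of that trigger logic, with made-up record and field names (Subscription Notifier itself expresses this as scheduled metadata queries, not Python):

```python
from datetime import date

NOTIFY_AT = {90, 60, 30}  # days before expiration that should trigger an email

def due_for_notification(items, today=None):
    """Return the ids of items sitting exactly at a notification mark."""
    today = today or date.today()
    return [c["id"] for c in items if (c["expires"] - today).days in NOTIFY_AT]

# Hypothetical content items with an expiration-date metadata field.
contracts = [
    {"id": "VENDOR-001", "expires": date(2030, 4, 1)},
    {"id": "VENDOR-002", "expires": date(2030, 2, 15)},
    {"id": "VENDOR-003", "expires": date(2030, 1, 31)},
]
print(due_for_notification(contracts, today=date(2030, 1, 1)))
# ['VENDOR-001', 'VENDOR-003']
```

Run daily, a check like this fires exactly once at each mark, which is what the scheduled-query model delivers.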

Enterprise Batch Loader

  • Overview:
    • Enterprise Batch Loader provides a robust, standalone component for WebCenter administrators to quickly and efficiently load content into the system. This content can come from ERP, CRM, CAD and other business systems as Enterprise Batch Loader can be configured to “watch” folders where such data is output and then create a batch load file for loading into WebCenter. Metadata from these systems can also be mapped to fields existing in WebCenter.
  • Business problems it solves:
    • Helps organizations reduce content repositories and file shares by automating the process of checking content into Oracle WebCenter
    • Ensures high-value data from ERP, CRM, CAD and other business systems gets loaded into WebCenter, providing a single location for users to access and find information
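The "watch folder" pattern described above can be sketched as: scan a drop directory and emit one record per file. The name=value records terminated by <<EOD>> follow the WebCenter Content Batch Loader file convention, but the metadata fields and values below are invented for illustration:

```python
import os

def build_batch_file(watch_dir, doc_type="Document", security_group="Public"):
    """Scan a drop folder and build one Batch Loader record per file."""
    records = []
    for name in sorted(os.listdir(watch_dir)):
        path = os.path.join(watch_dir, name)
        if not os.path.isfile(path):
            continue  # skip subdirectories
        stem = os.path.splitext(name)[0]
        records.append(
            "Action=insert\n"
            f"dDocName={stem.upper()}\n"
            f"dDocTitle={stem}\n"
            f"dDocType={doc_type}\n"
            f"dSecurityGroup={security_group}\n"
            f"primaryFile={path}\n"
            "<<EOD>>"
        )
    return "\n".join(records)
```

In practice the metadata values would be mapped from the exporting ERP/CRM system rather than derived from filenames, but the scan-and-generate loop is the essence of the component.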

Workflow Solution Set

  • Overview:
    • Packed with 9 powerful features, Workflow Solution Set complements the out-of-the-box Oracle WebCenter workflows through detailed auditing, the ability to search for content in workflow, and the ability to customize email notifications and the workflow review pane. Workflow Solution Set makes it easier for users to interact with and fully leverage WebCenter Content workflows.
  • Business problem it solves:
    • Helps remove confusion from the workflow process by enabling explicit instructions to be included in the workflow review pane or email notifications
    • Ensures the history of a workflow is fully retained – including rejection comments
    • Provides visibility into content that is in a workflow; natively in WebCenter, items in a workflow are not included in search results
    • Improves performance of review process by separating out workflow items into pages instead of one long list

I will be covering more of the capabilities of these components that make up Fishbowl’s Administration Suite during a one-hour webinar on Thursday, June 11th. Come hear more about why, together, the components of Fishbowl’s Admin Suite provide the perfect tools for WebCenter admins.

You can register for the event here. I hope you will be able to join us.

The post Did your organization recently purchase Oracle WebCenter Content? Are you the new admin? Consider these 4 tools to successfully manage and administer the system appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Cloud Security: Eight-Point Checklist for Peace of Mind

Linda Fishman Hoyle - Tue, 2015-06-09 14:56

A Guest Post by Vice President Dennis Jolluck (pictured left), Applications Development & Product Management for LAD

As Product Manager for Oracle’s Cloud Applications in Latin America, I have the opportunity to meet with many customers who are very interested in understanding how they can migrate to the cloud. At the same time, I can almost anticipate their next question and visualize a “partially cloudy” bubble above their head: “How does Oracle address security in the cloud?”

All you have to do is mention Sony Pictures and we all feel vulnerable. According to Richard Bejtlich, the Chief Security Strategist for FireEye, Inc., “the amount of time a ‘bad guy’ (e.g. hacker) spends in your enterprise before somebody notices is a median of 209 days. And two-thirds of the time, somebody else notifies you of the breach … which is usually the FBI!” One analyst estimates that more than 50 percent of Fortune 1000 firms experience an annual breach of 1,000 to 100,000 confidential records, including records of their employees.

Did you know that during 2014, a third of the new hacking tools discovered by security researchers at Hewlett-Packard Co. relied on exploiting a flaw in MS Windows that was discovered in 2010? Many organizations do not update their software as quickly as needed to protect themselves. Many internal technology services today lack the resources, rigor, or efficiencies to monitor their security on a 24/7 basis.

Over the years, personally I have never invested a lot of time in the aspect of security. I just took it for granted. But that has changed, both personally (my credit cards have been hacked, which changed the way I manage my passwords) and professionally with the move to the cloud. Now I am seeing that every CIO and every CEO has to have a certain level of knowledge about security and how it affects their business. This is no longer a function for IT to worry about. This topic has moved to the Board Room level. Enterprises must have a plan. And that includes alignment of all departments―the Board, Senior Management, the Security staff, IT, Legal, Public Relations and Communications―operating from the same agenda.

Please keep in mind that not all cloud providers are equal in the security services they provide. As a first step, an enterprise needs to know its requirements and map them to a provider’s capabilities to minimize the risk, as well as address any regulatory needs. In some cases, business will take into account 24/7 global support, data jurisdiction, cross-border data transfer, data location, and the privacy regulations where they are doing business. Privacy and security are absolutely linked!

So when evaluating cloud providers in your move to the cloud, there are a number of criteria to consider.

1. Transparency of the Cloud Provider

The customer must have a clear understanding of the provider’s commitment to security, as well as what responsibilities the customer retains. The vendor must be clear about what controls are in place, where the data resides, and who is managing the underlying technologies. Also, are there any third parties involved? Is the provider outsourcing any responsibilities? Some other important questions are:

  • Does the provider have an accountable security officer? Can you directly engage this individual?
  • Are independent audits done of the provider’s security controls?
  • Does the provider offer service options in addressing regulated data?

These questions will measure the maturity and experience of the cloud provider.

2. Data Center Operations

A global cloud vendor should have state-of-the-art physical data center protection. That should include logical data security and data privacy protection policies in place. Also, look for proactive security engagement and monitoring, as well as leading-edge disaster recovery plans.

Check if your cloud provider operates embassy-grade cloud data centers (i.e., exceeding the most stringent international embassy and military-grade security and force-protection requirements) with highly redundant infrastructures and at least 99.5 percent availability.

3. Risk Mitigation/Unified Access Controls

The enterprise must understand what steps the cloud provider is taking to mitigate risk surrounding its service offerings. A key risk to consider is managing end-user access. Typically this means provisioning user permissions to view and change data. However, complexity arises when an employee leaves the company and access must be revoked. This is easily done in an on-premises environment through the company’s internal directory server. However, if this directory is not integrated with your cloud services, the employee may still have access, which could be detrimental to your organization. Single sign-on (SSO) is one solution to this problem, so that you can revoke access from a centralized database. But there is a downside here. Some companies prefer not to pass their credentials to third parties when leveraging SSO.

Check if your cloud provider can solve this problem through Federated Identity Technology, which provides all of the benefits with no downside. This is just one example of reducing risk―the cloud may offer a more robust solution than could be built in-house.

Role-based access control (RBAC) is another control to prevent unauthorized access to confidential information. Users see only data that’s related to their specific job duties. Note: RBAC is an approach to restricting system access to authorized users. It is used by the majority of enterprises with more than 500 employees, and can implement either mandatory access control (MAC) or discretionary access control (DAC).
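The RBAC model is simple to express in code: users map to roles, roles map to permissions, and access checks consult only the role. A minimal sketch with invented roles and permissions:

```python
# Minimal RBAC: users -> roles -> permissions. Access checks consult the
# role's grants only; users are never given permissions directly.
ROLE_PERMS = {
    "sales_rep":   {"read:opportunities"},
    "sales_admin": {"read:opportunities", "write:opportunities", "read:reports"},
}
USER_ROLES = {"alice": "sales_rep", "bob": "sales_admin"}

def can(user, permission):
    """True if the user's role grants the permission."""
    return permission in ROLE_PERMS.get(USER_ROLES.get(user, ""), set())

print(can("alice", "read:reports"))  # False: not granted to sales_rep
print(can("bob", "read:reports"))    # True
```

Because grants attach to roles rather than individuals, revoking or changing a person's access is a single role reassignment.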

4. Compliance

For some enterprises, compliance is difficult to achieve on its own. So it becomes even more critical to choose a cloud provider that can demonstrate and deliver the service. Security certifications are one way to do this―it’s an easy and objective way to compare providers.

A few customers have asked me about industry certifications, such as the SSAE16 where the compliance details should be made transparent and available. Another critical certification is ISO 27001/2 framework (Best Practices for Information Security Management). Why? This demonstrates that the cloud provider continually vets its solution by conducting network detection intrusion tests and other penetration tests to ensure it is always providing a secure solution to customers. (Note: Audit reports should also be available for customers.)

5. Data Privacy/Local Data Residency

Because of the increasing number of countries that specify where data can and cannot be stored, cloud providers must be in compliance with both industry and country data standards. This is especially true for the government and the financial services industries, where data must be stored within its border for backup and recovery purposes.

Cloud providers can address data privacy twofold: (1) Establishing data centers in targeted countries so data is processed and stored within the given country. And to take it one step further, (2) a handful of providers have the capability to offer a hybrid solution to address customer data privacy. For example, the most sensitive customer data would reside in an on-premises environment inside the customer data center (or the provider’s local data center) and the non-sensitive customer data can be stored on a cloud solution. As a result, the sensitive data would always reside within a country’s borders. At the same time, a sales cloud or service cloud solution would process the data in one of the global data centers offered by the vendor.

6. Secure Data Isolation

It’s obvious when you move to the cloud, you want to leverage shared resources across all of your cloud assets at the lowest possible cost, but security is still a critical element. With the Oracle Cloud, you share the hardware, you share the middleware, and you share the application. BUT YOU DO NOT SHARE YOUR DATABASE WITH OTHER CUSTOMERS.

Oracle’s secure data isolation ensures privacy and performance. More importantly, by having your own private database, the customer chooses the right time for upgrading and ensuring the “noisy neighbors’ syndrome” (refers to a rogue virtual machine that periodically monopolizes storage I/O resources to the performance detriment of all the other VM “tenants” in the environment) can’t affect performance. With Oracle Cloud, you are not forced to upgrade over a single designated weekend!

7. Data Loss Prevention (Advanced Data Security)

Does your cloud provider also offer advanced security services, such as full data encryption whether the data is at rest, on the wire, at work, or in processing mode? A virtual private network (VPN) is also an option for remote access. You may also want to consider stronger controls over data and administration access to prevent unauthorized viewing or sharing of customer information. For example, inquire about database vaults, VPN capabilities, and federated SSO.

8. Breadth of Experience & Viability

Is the cloud provider viable? How long is the vendor’s track record with security in the cloud? What does its balance sheet look like? Can it demonstrate experience supporting very demanding industries, such as: retail, government, healthcare and financial services?

To summarize, when choosing a cloud provider, security capabilities are more than likely going to be your key criteria. For comparison purposes, Oracle has more than 35 years of experience securing and managing data and more than 15 years running enterprise cloud services. Today, we process billions of transactions each day. We have demonstrated secure data management and engineered security at every level in the technology stack from hardware and database to middleware and applications. Oracle has complete control of the entire cloud service, which is a unique offering among the many cloud providers.

So when it’s time to make that transition to the cloud, do your due diligence of your cloud providers and make sure your security concerns are addressed for today’s fast-paced digital world.

Robo Silicon Joins Ranks of Live and Successful Oracle Sales Cloud Customers

Linda Fishman Hoyle - Tue, 2015-06-09 14:53
Robo Silicon is a leader in the manufacture of engineering and construction materials. It’s also one in the latest wave of Oracle Sales Cloud customers to go live and tell its modern sales transformation story.

In this new video, Balaji Jayashankar (pictured left), Robo Silicon’s Head of Sales and Marketing, describes how Oracle Sales Cloud, integrated with SAP on the back end, is helping reps and management gain and sustain higher levels of sales performance. The themes covered by this business leader help reinforce the benefits of mobility, pipeline visibility (“This visibility has really improved our performance and has helped us in sustaining it.”), flexibility and integration, and territory management.

Creating Engaging Customer Experiences Across Multiple Channels

WebCenter Team - Tue, 2015-06-09 12:22

By Srinivasan Sankaran, Principal Product Manager, Oracle WebCenter Sites

Delivering seamless and consistent visitor experiences across channels is a significant challenge for marketers. In part, this is hindered by silos of disparate data and content in marketing departments and across organizations: All too often the data needed to effectively engage and personalize customer and anonymous visitor interactions is stored in silos -- inaccessible to the tools and people that need it most... the marketers. The same is typically true of the content used on the website and in emails: it is often in separate repositories, or worse, file systems, and is not shared between the different channels, creating an inconsistent user experience across email, web and mobile.

To help marketers deliver more personalized and effective campaigns and engaging customer experiences across digital channels, content and data need to be easily shared and utilized by all the channels. The latest integration of WebCenter Sites and Eloqua enables marketers and business users to easily deliver experiences for the web and email channels respectively. Rather than considering the email and web channels independently, this integration provides the much-needed continuity between email campaigns and the web channel by sharing content, theme and branding. This allows marketers to ensure a seamless and consistent visitor experience when crossing between campaign channels and the main website. In addition to content and branding, by sharing visitor profile information across channels, marketers can retain the context of visitor interaction and serve targeted content to the visitor. This is achieved in the following ways:
  • Simplified lead generation: Eloqua can now share lead generation forms directly with WebCenter Sites. Marketers simply drag and drop the Eloqua forms directly into WebCenter Sites web pages and visitor data is passed back to Eloqua. Now, when a prospect comes to the website through organic search, provides their email address or other information to access some gated content, Eloqua is made instantly aware of the visitor journey during the website session. Sharing both forms and visitor responses provides insight into the visitor’s digital body language, and creates a unified engagement experience. 
  • Content and experience sharing: Using the new WebCenter Sites Cloud Connector, marketers creating campaigns in Eloqua can utilize the same content in WebCenter Sites across web pages, landing pages, and emails. By sharing and reusing the content from WebCenter Sites to unify the customer experience and consistency across channels, marketers can deliver higher conversion rates. 

  • Personalizing and targeting across channels: Both WebCenter Sites and Eloqua provide personalization. Now visitor profile data can be shared between the two, enabling a highly personalized and consistent experience across email, landing pages, and the visitor’s entire web experience. When a visitor lands on a WebCenter Sites page, WebCenter Sites can utilize Eloqua visitor profile data to target specific content and information to the visitor by personalizing not only their visit to a landing page, but any page wherever they navigate around the web site. Now, by sharing both content and visitor profile data across Eloqua and WebCenter Sites, the visitor is guaranteed a consistent personalized experience, one that leads to higher conversions. 
Bringing together WebCenter Sites with the Eloqua marketing automation platform provides:
  • Marketers with a unified, enterprise ready, engagement platform that gives them a robust suite for increased customer acquisition
  • Enhanced customer journey management to create a consistent and unified visitor experience that leads to increased conversions
  • Faster time-to-market with the ability to quickly and easily find, utilize and publish content, across emails and websites, without IT involvement

DOAG Middleware Day: WebLogic examined from all sides

Yann Neuhaus - Tue, 2015-06-09 09:48


This year, I had the opportunity to participate in the Middleware Day organized by the “Deutsche Oracle Anwendergruppe” (DOAG) in Düsseldorf. As you would expect, all sessions were given in a foreign language - “deutschsprachig”. I was surprised to find that the German courses provided by dbi services to its employees to improve their language knowledge were not a waste of time. I understood all the sessions, I suppose ;) On the other hand, speaking and communicating with the other participants was more challenging.


Let’s get back to our concern, the DOAG Middleware Day. In this blog post, I will quickly describe the sessions from my point of view without going into detail. For more detail, just participate in the event.


At the beginning, the “Begrüssung und Neues von der DOAG” (welcome and news from the DOAG) directly caught my attention, as there are two other interesting events in Q3 this year.


  • The 23rd of September: Usability and UX-Design mit ADF, Apex und Forms! Verträgt sich das mit Performance? - detail
  • The 24th of September: DOAG OpenStack Day 2015 - detail


And the technical and interesting session began.


The first one gave us some tips and tricks for “Tuning ADF – Web Application im WLS 12c”. Some interesting information was provided. The speaker really knew all the possibilities of ADF customization and optimization that can be applied declaratively in JDeveloper.


The second session gave us some information regarding the Oracle Cloud offering with the Java Cloud Service. The speaker described the Oracle Cloud offering as well as its pricing. He also demonstrated the usability and the features provided to an administrator for easily provisioning his cloud space, and gave a live demo of how simple it is to deploy a JEE application from his NetBeans IDE to the Java Cloud Service. He also gave an interesting demo of an HTML5 application on a WebLogic server set up in a cluster: WebLogic directly managed the synchronization of each node of the cluster for a WebSockets application hosted on it.


The third session covered a practical SSO use case, demonstrating the architecture and the integration of Forms applications with a new CRM system. This covered the following components: Oracle Forms, Reports and Discoverer, Oracle Access Management, Oracle Internet Directory, the Kerberos authentication mechanism, and Microsoft Active Directory ASO.


The next session also covered a practical use case, but also the drawbacks of implementing WebLogic on an ODA at a customer site. The speaker covered the physical architecture and showed us some performance test results.


The last session concentrated on WebLogic cluster features and capabilities. Unfortunately, the main part was a demo which had a problem. The presenter remained calm and was able to manage it quite well.


It was quite a good day in Germany with interesting presentations. 

Instructor Replacement vs. Instructor Role Change

Michael Feldstein - Tue, 2015-06-09 07:53

By Phil Hill

Two weeks ago I wrote a post about faculty members’ perspective on student-centered pacing within a course. What about the changing role of faculty members – how do their lives change with some of the personalized learning approaches?

In the video below, I spoke with Sue McClure, who teaches a redesigned remedial math course at Arizona State University (ASU) that is based on the use of Khan Academy videos. There are plenty of questions about whether this approach works and is sustainable, but for now let’s just get a first-hand view of how Sue’s role changed in this specific course. You’ll see that it took some prodding to get her to talk about her personal experience, and I did have to reflect back what I was hearing. Note that the “coaches” she described are teaching assistants.

Phil Hill: Let’s get more of a first-hand experience as the instructor for the course. What is a typical week for you as the course is running? What do you do? Who do you interact with?

Sue McClure: I interact by e-mail, and sometimes Google Hangouts, with the coaches and with some of the students. Now, not all of the students are going to contact me about a problem they might have because many of them don’t have any problems, and that’s wonderful. But quite a few of them do have problems either with understanding what they’re supposed to be doing or how to do what they’re supposed to be doing or how to contact somebody about something, and then they’ll send me an e-mail.

Phil Hill: So, as you go through this, it sounds like there’s quite a change in the role of the faculty member from a traditional course, and since you just got involved several months ago in the design and in instructing it, describe for me the difference in that role. What’s changed, and how does it affect you as a professor?

Sue McClure: Before I did this course, the way it’s being done now, I had taught [Math 110] online a few other semesters, and the main difference between those experiences and this experience is that with this experience our students have far more help, far more assistance, far more people willing to step up when they need help with anything to try to make them be successful.

Phil Hill: What about the changes for you personally?

Sue McClure: Partly because I think ASU is growing so much, my class sizes are getting bigger and bigger. That probably would have happened even if we were teaching these the way that we taught them before. That’s one big change—more and more students. So, having these coaches that we have working with us and for us has just been priceless. We couldn’t do it without them.

Phil Hill: It seems your role comes into more of an overseeing the coaches for their direct support of the students. Plus it sounds like you step in to directly talk to students where needed as well.

Sue McClure: Right. I think that explains it very well.

From what Michael and I have seen in the e-Literate TV case studies as well as other on-campus consulting experiences, the debate over adaptive software or personalized learning being used to replace faculty members is a red herring. Faculty replacement does happen in some cases, but that debate masks a more profound issue – how faculty members have to change roles to adapt to a student-centered personalized learning course design. [updated to clarify language]

For this remedial math course, the faculty member changes from one of content delivery to one of oversight, intervention, and coaching. This change is not the same for all disciplines, as we’ll see in upcoming case studies, but it is quite consistent with the experience at Essex County College.

As mentioned by Sue, however, these instructional changes do not just impact faculty members – they also affect teaching assistants. Below is a discussion with some TAs from the same course.

Phil Hill: Beyond the changes to the role of faculty, there are also changes to the role of teaching assistants.

Namitha Ganapa: Basically, in a traditional course there’s one instructor, maybe two TAs, and a class of maybe 175 students. So, it’s pretty hard for the instructor to go to each and every student. Now, we are 11 coaches for Session C. Each coach is having a particular set of students, so it’s much easier to focus on the set of students, and that helps for the progress.

We should stop here and note the investment being made by ASU – moving from 2 TAs to 11 for this course. There are two sides to this coin, however. On one side, not all schools can afford this investment in a new course design and teaching style. On the other side, it is notable that instructional staffing is increasing (same number of faculty members, more TAs).

Jacob Cluff: I think, as a coach, it’s a little more involved with the students on a day-to-day basis. Every day I keep track of all the students, their progress, and if they’re struggling on a skill I make a video, send it to them, ask them if they need help understanding it—that sort of thing.

Phil Hill: So, Jacob, it sounds like this is almost an intervention model—that your role is looking at where students are and figuring out where to intervene and prompt them. Is that an accurate statement?

Jacob Cluff: I think that’s a pretty fair statement because most of the students (a lot of students)—they’re fine on their own and don’t really need help at all. They kind of just get off and run. So, I spend most of my time helping the students that actually need help, and I also spend time and encourage students that are doing well at the same time.

Phil Hill: So, Namitha, describe what is the typical week for you, and is it different? Any differences in how you approach the coaching role than from what we’ve heard from Jacob?

Namitha Ganapa: It’s pretty much the same, but my style of teaching is I make notes. I use different colors to highlight the concept, the formula, and how does the matter go. Many of my students prefer notes, so that is how I do it.

Phil Hill: So, there’s sort of a personal style to coaches that’s involved.

This aspect of the changing role of both faculty members and TAs is too often overlooked, and it’s helpful to hear from them first-hand.

The post Instructor Replacement vs. Instructor Role Change appeared first on e-Literate.

SQL Server 2016 CTP2: Stretch database feature - Part 1

Yann Neuhaus - Tue, 2015-06-09 03:51

SQL Server 2016 CTP 2 has introduced some interesting new features such as Always Encrypted, Stretch Database, the configuration of the tempdb database during SQL Server installation, and so on.

Regarding the configuration of the tempdb database in SQL Server 2016 CTP 2, I recommend the article SQL Server 2016 CTP2: first thoughts about tempdb database by David Barbarin.


In this article, I will focus on the Stretch database feature!


What is the Stretch Database feature?

This new feature allows you to extend on-premises databases to Microsoft Azure. In other words, you can use the Microsoft cloud as additional storage for your infrastructure.

This can be useful if you have some issues with your local storage, such as available space.



First, you need to enable the option at the server level by running the 'sp_configure' stored procedure. This requires at least serveradmin or sysadmin permissions.
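As a sketch, the server-level switch uses the 'remote data archive' configuration option (that is the option name documented for SQL Server 2016; run this on the CTP2 instance itself):

```sql
-- Enable the Stretch Database feature at the instance level
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;
GO

-- Verify the current setting
EXEC sp_configure 'remote data archive';
```

The second call should show both config_value and run_value set to 1 once the change has taken effect.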





Of course, you also need a valid Microsoft Azure subscription and your credentials. Be careful: this feature uses an Azure SQL Database server and Azure storage behind the scenes.


Enable Stretch for a database

After enabling this feature at the server level, you need to enable it for the desired database.

It requires at least db_owner and CONTROL DATABASE permissions.

By default, it will create a SQL Database server with the Standard service tier and the S3 performance level. To fit your needs, you can change the level of the service afterwards.

Everything is done using a wizard in Management Studio. To open the wizard, proceed as follows:



Skip the 'Introduction' step to reach the 'Microsoft Sign-In' step:



You need your Microsoft Azure credentials to access your subscription. Once this is done, click 'Next'.




You have to select an Azure location. Of course, for better performance, you should select the location closest to your on-premises server.
You also need to provide credentials for the Azure SQL Database server which will be created by the wizard.
The last step is to configure the SQL Database firewall in Azure to allow connections from your on-premises server. To do this, you must specify a custom IP range or use the current IP of your instance.

Then click the 'Next' button. A summary of your configuration is displayed. Click 'Next' again.



The configuration is now completed! The feature is enabled for your database.
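As a quick sanity check from T-SQL, sys.databases exposes a flag for Stretch-enabled databases in SQL Server 2016 (column name as documented; this query is not part of the wizard):

```sql
-- List databases with the Stretch feature enabled
SELECT name, is_remote_data_archive_enabled
FROM sys.databases
WHERE is_remote_data_archive_enabled = 1;
```

Your database should appear in the result set once the wizard has finished.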

With Visual Studio, you can connect to the SQL Database server in Azure and see the server that was just created:




At the moment, no table is stored in Azure, because we have not yet enabled the feature for any table. In my next blog, I will show you how to do this!


Where is my attachment??

OracleApps Epicenter - Mon, 2015-06-08 23:46
Where are attachments stored? Whenever a user uploads a file from, say, the transaction workbench, where will this file be stored in the database? Is there a specific physical file directory? We need to know where Oracle stores an attachment: in one of the FND application tables or in a file system? How to […]
Categories: APPS Blogs

Accounting Flexfield Change: Can we change the value set of a segment(same size)

OracleApps Epicenter - Mon, 2015-06-08 23:46
We need to change the value set of one of the segments of our Accounting Flexfield. The new value set is of the same size. I need to know if that is allowed by Oracle Support. I read some articles which say the value set should not be changed if the max size is different. […]
Categories: APPS Blogs