
Feed aggregator

VIDEO: The Evolution of IT Outsourcing

Pythian Group - Fri, 2014-04-04 07:46

“For us to understand what’s going on in the industry at large, it helps for us to have a deeper understanding of the history of the outsourcing industry.” Pythian founder Paul Vallée shares his insights on the history and evolution of IT outsourcing.

Pythian has developed 5 criteria for choosing a data management outsourcing partner. Download the white paper here.

Categories: DBA Blogs

Log Buffer #366, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-04-04 07:43

While the Oracle blogosphere is buzzing with Collaborate news and views, SQL Server and MySQL bloggers are also getting upbeat about their respective fields and producing gems of blog posts. This Log Buffer Edition covers all that and more.

Oracle:

Run Virtual Machines with Oracle VM.

Last Call to Submit to the JavaOne Java EE Track.

UX – No Time To Kill.

Updated: Price Comparison for Big Data Appliance and Hadoop.

PLSQL_OPTIMIZE_LEVEL parameter and optimization of PL/SQL code.

SQL Server:

SQL Server 2012 AlwaysOn Groups and FCIs.

How to recover a suspect msdb database in SQL Server.

Data Science Laboratory System – Distributed File Databases.

Stairway to T-SQL: Beyond The Basics Level 5: Storing Data in Temporary Tables.

The Girl with the Backup Tattoo.

MySQL:

pt-online-schema change and row based replication.

MariaDB Client Library for C 2.0 and Maria JDBC Client 1.1.7 released.

Help MariaDB gather some statistics!

Uninitialized data in the TokuDB recovery log.

MySQL at LOADays Conference in Antwerp, Belgium (Apr 5-6, 2014)

Categories: DBA Blogs

Collaborate 14: Taking the WebCenter Portal User Experience to the Next Level!

Come join me and Ted Zwieg at Collaborate 14 for our presentation on taking UX and development to the next level.
Fri, Apr 11, 2014 (09:45 AM – 10:45 AM): Level 3, San Polo 3501A.

Here is our session overview:

Taking the WebCenter Portal User Experience to the Next Level!

Abstract

Learn techniques to create unique, award-winning portals that not only support today’s need for mobile-responsive and adaptive content, but also take the next steps towards innovative design – enhancing both the user journey and the user experience of today’s modern portal, and the ways in which developers can expand the reach and potential of the portal with these new, modern techniques.
Attendees will not only learn about new approaches, but will also be shown live portals using these techniques today to create a modern experience. Learn how to develop your portal for the future and enable marketing/design teams to react and generate interactive content quickly with no ADF knowledge.

 

Target Audience

Designed for users wanting to learn the art of the possible and discover what is achievable with WebCenter Portal and ADF – creating compelling user experiences and keeping up to date with modern techniques and design approaches that can be combined to create faster, more interactive ways of navigating through portlets and the Portal.

 

Executive Summary

This session will demonstrate a couple of award-winning examples from live clients who have taken their ADF WebCenter Portal environments to the next level – showing how combining HTML5 techniques, third-party libraries, and responsive/adaptive design with ADF, when used in the correct way, can improve not only performance but also the way in which users and developers interact with the portal using modern web design techniques.

 

Learners will be able to:

  • Identify the art of the possible with ADF (everything is achievable…)
  • Discuss achievable concepts and methods for enhancing the ways in which users can interact with the Portal
  • Improve their understanding of responsive and adaptive techniques – not only those targeted at mobile devices
  • Understand how to structure the portal for faster response times with new front-end techniques
  • Integrate non-ADF third-party components for a more dynamic experience
  • Learn new methods to manage and maintain key core components

The post Collaborate 14: Taking the WebCenter Portal User Experience to the Next Level! appeared first on C4 Blog by Fishbowl Solutions.

Categories: Fusion Middleware, Other

Details on Two New Upcoming Courses in Brighton – Including ODI12c

Rittman Mead Consulting - Fri, 2014-04-04 05:09

Just a quick note to mention two courses that are being run from our Brighton office soon, which might interest readers of the blog.

Our OBIEE 11g Bootcamp course, newly updated for OBIEE 11.1.1.7, is a start-to-finish introduction to Oracle Business Intelligence Enterprise Edition 11g aimed at developers and admins new to the platform. Starting with an overview of the platform, then taking you through RPD development, creating reports and dashboards and through to security, configuring for high-availability and systems management, this is our most popular course and includes a free copy of my book, “Oracle Business Intelligence Developers Guide”.

This 5-day course covers the following topics:

  • OBIEE 11g Overview & Product Architecture
  • Installation, Configuration & Upgrades
  • Creating Repositories from Relational Sources
  • Advanced Repository Modelling from Relational Sources
  • Systems Management using Oracle Enterprise Manager
  • Creating Analyses and Dashboards
  • Actionable Intelligence
  • KPIs, Scorecards & Strategy Management
  • Creating Published Reports (BI Publisher)
  • OBIEE 11g Security
  • High-Availability, Scaleout & Clustering

Details on the full course agenda are available here, and the next run of the course is in Brighton on April 28th – May 2nd, 2014 – register using that link.

We’re also very pleased to announce the availability of our new Data Integrator 12c 3-day course, aimed at developers new to ODI as well as those upgrading their knowledge from the ODI11g release. Written entirely by our ODI development team and also used by us to train our own staff, this is a great opportunity to learn ODI12c based on Rittman Mead’s own, practical field experience.

The topics we’ll be covering in this course are:

  • Getting Started with ODI 12c
  • ODI 12c Topology
  • ODI 12c Projects
  • Models & Datastores
  • Data Quality in a Model
  • Introduction to ODI Mappings
  • ODI Procedures, Variables, Sequences, & User Functions
  • Advanced ODI Mappings
  • ODI Packages
  • Scenarios in ODI
  • The ODI Debugger
  • ODI Load Plans

We’re running this course for the first time, in Brighton on May 12th – 14th 2014, and you can register using this link.

Full details of all our training courses, including public scheduled courses and private, standard or customised courses, can be found on our Training web page or for more information, contact the Rittman Mead Training Team.

Categories: BI & Warehousing

OBIEE Security: Repositories and RPD File Security

The OBIEE repository database, known as an RPD file because of its file extension, defines the entire OBIEE application. It contains all the metadata, security rules, database connection information, and SQL used by an OBIEE application. The RPD file is password-protected and the whole file is encrypted. Only the Oracle BI Administration tool can create or open RPD files, and the BI Administration tool runs only on Windows. To deploy an OBIEE application, the RPD file must be uploaded to Oracle Enterprise Manager. After uploading the RPD, the RPD password must then be entered into Enterprise Manager.

From a security assessment perspective, who has physical access to the RPD file and the RPD password is critical. If multiple OBIEE applications are being used, the RPD passwords should all be different. It is also recommended that the RPD password be rotated according to whatever policy governs critical database accounts, and that production RPD passwords be different from non-production RPD passwords.

Once deployed through WebLogic, the RPD file (version 11g) is located here:

ORACLE_INSTANCE/bifoundation/OracleBIServerComponent/coreapplication_obisn/

 

Figure 1: The Repository (RPD) File Defines OBIEE Solutions

 

Figure 2: Windows-based OBIEE BI Admin Tool

 

If you have questions, please contact us at info@integrigy.com

 -Michael Miller, CISSP-ISSMP

Tags: Reference, Oracle Business Intelligence (OBIEE), Security Resource
Categories: APPS Blogs, Security Blogs

Oracle Priority Service Newsletter for 03-APR-2014

Oracle Infogram - Thu, 2014-04-03 18:16

RDBMS
From the Insights from an Oracle University Classroom blog: Using the Container Database in Oracle Database 12c.
Never miss a chance to read Tom writing about Oracle if you can: Tom Kyte writes about Oracle Database 12c Multitenant Option in Oracle Magazine.
From the DBA survival BLOG: Removing passwords from Oracle scripts: Wallets and Proxy Users
also on 12c: Oracle Database 12c Learning Library...
From INTERMEDIATE SQL: How to track SQL performance. Part 1: is it good to be “mean” ?
Jonathan Lewis over at Oracle Scratchpad is posing a challenge. Are you up for it? Diagnostics.
SQL Developer
From that JEFF SMITH: Oracle SQL Developer: Query Builder Video Demonstration.
ASM
From the ASM Support Guy: ASM in Exadata.
SOA
From ArchBeat: Video: SOA Principles, SOA Applications - Chris Ostrowski
Exadata
Over at Richard Foote's Oracle Blog: Exadata Storage Indexes – Part I (Beginning To See The Light).
Indexing
Also from the Oracle index guru Richard Foote: Indexing Foreign Keys (Helden)
MySQL
From MySQL on Windows: MySQL for Excel 1.2.1 has been released.
ADF
From ADF Practice: Binding Container Viewable Attribute.
and from Oracle ADF Stuff: Avoid double download of PDF in popup + inlineframe.
Java
From ...and they shall know me by my speling errors.: 8 Cool Things About Java Streams.
APEX
From the iAdvice blog: APEX 5.0: Modal dialogs have never been so easy!.
EBS
Over at the Oracle E-Business Suite Support Blog:
New Log Analyzer for Depreciation Error Logs
Experience a Smoother Period Close with the Inventory Analyzer
New Contracts RUP Patches Released
Workflow Analyzer v5.06 Released & Workflow Search Assistant
Webcast: Understanding Event Based Revenue Management & Revenue Contingencies
Intercompany Program Passing Incorrect CREATED_BY Details to Interfaces
What's NEW? 12.1.3+ E-business Suite Recommended Patch Collection 1!
New Feature! Balance Intercompany Journals Using Clearing Balancing Segment Value!
Using Additional Currencies for Reporting Purposes in General Ledger

Webcast: E-Business Tax (EBTax) Webcast Series April 22 - May 7, 2014

UX - No Time To Kill

Floyd Teter - Thu, 2014-04-03 15:51
There’s no time to kill between the cradle and the grave
Father Time still takes a toll on every minute that you save
Legal tender’s never gonna change the number of your days
The highest cost of livin’s dyin’, that’s one everybody pays
So have it spent before you get the bill, there’s no time to kill
                                        —From Clint Black’s “There’s No Time To Kill”
Ha! A classic country music pull…who’d have thunk it, huh?
There’s no time to kill is an appropriate phrase for the past few weeks in the Oracle UX world.  Lots of cool stuff happening.  To wit:
-  The brilliant and versatile Ultan O’Broin (@ultan) and the Oracle Applications UX team have released a free ebook on Simplified User Experience Design Patterns for the Oracle Applications Cloud Service.  Yup, it’s written for the recent R8 release.
-  In conjunction with the ebook release, AMIS (@amis_services) and Oracle recently partnered in putting on a great Next Generation UX showcase event in the Netherlands (you may need a little help from Google Translate to get the gist of this summary).
-  If you’re a member of the Oracle Partner Network, there is a new Guided Learning Path and Specialization: Oracle Fusion Applications User Experience Specialist.  Yes, of course I jumped right in and earned mine.  And Steve Bentz (@smb1650) was “johnny-on-the-spot” as well.  It’s a challenging cert, but well worth earning…I learned a few things in the process. And I’m grateful that the test was offered online – no trudging down to a Pearson VUE test center.  Hello, 21st Century!
- In working with Higher Education customers, I’ve found a very cool (and free) prototyping tool for PeopleSoft, including Campus Solutions.  And it’s built in PowerPoint.  You can get a copy yourself here.  Very cool for those of you working with PeopleTools.
Busy, busy, busy time in UX…gotta keep up.  No time to kill.  

Panduit - Leading the Way for Exceptional Digital Experiences

WebCenter Team - Thu, 2014-04-03 13:15

When we notice great work being done by our Oracle WebCenter customers, we want to strut our stuff like a peacock and show them off to the world. Last year, we received a nomination for the Oracle Excellence Awards in Fusion Middleware Innovation for our customer, Panduit. The competition was fierce, with only two award spots, and although Panduit ultimately did not win one of the awards, they certainly deserve recognition for their solution.

Panduit is a world-class developer and provider of leading-edge data center, connected building, and industrial automation solutions that help customers optimize their physical infrastructure and mitigate risk through simplification, increased agility, and operational efficiency. Panduit has been an independent leader since 1955, with a global presence and local focus: operations in 112 countries and more than 4,000 employees.

Like many companies, their previous website and portals no longer met modern user expectations:

  • They needed to foster, support, engage, and enable their growing global partner ecosystem with self-service applications and content
  • They had 300 system integrators, 1,451 distributor partners, 5,026 distribution locations, and 1,900 contractors/installers to communicate with in a consistent, regular manner
  • They needed to modernize, globalize, and re-brand the Panduit.com website and partner portals to prevent loss of business due to partner drop-offs
  • They needed to create an integrated high-availability solution for all customers and partners, with a single address (www.panduit.com) and mobile and social media channels
  • They needed to develop a catalog of self-service functions that can be assigned based on differentiated role-based services by partner and customer segments and/or tiers

It’s always better to hear it directly from the happy customer.

The solution for Panduit was the Oracle WebCenter Digital Experience Platform and Oracle Fusion Middleware AppAdvantage, including the following:

  • Oracle WebCenter (Sites, Portal, and Content), Oracle SOA, Oracle BPM, OPSS, Weblogic Suite, and ADF
  • Single Platform – Supports portal, website, database based services, application-to-application service integration, EDI, etc.
  • Integration – Adapters were available out-of-the box to integrate with Panduit’s Oracle E-Business Suite with protected support for future releases
  • Reusability – Component and visual template reuse between applications
  • Social Community – Out-of-the-box social media and collaboration
  • Analytics –  Built in analytical capabilities
  • Mobility – Can be extended to mobile devices

Their new integrated website and self-service partner portal was completed in 12 months, starting from ground zero to launch.

Now they are able to:

  • Support a growing global partner ecosystem with secure, multilingual online experience
  • Provide integrated role-based experiences for all customers and partners within a single address (www.panduit.com).
  • Improve number and quality of sales leads through increased web and mobile customer interactions and registrations
  • Experience the benefits: activity up 57% from the previous year, with the portal site handling 42,632 self-serve transactions per month


 Delivering Moments of Engagement Across the Enterprise


Announcement: PostgreSQL plugin for EM12c

Jean-Philippe Pinte - Thu, 2014-04-03 11:20
Next May, Blue Medora will release a plugin for PostgreSQL.

The plugin will be available free of charge, with support via the forum.

A Simple Way to Monitor Java in Linux

Pythian Group - Thu, 2014-04-03 07:49

A quick and easy way to see what, inside a Java process, is using your CPU – using just Linux command-line tools and the command-line utilities supplied with the JDK.

Introduction

Here are a few things you need to know before starting. Following the links is not necessary; they are provided for reference.

  • There are different vendors of Java Virtual Machines. This post is about Oracle’s JVM, which is called HotSpot, with Linux x86-64 as the OS platform. Most of what is said about HotSpot applies to other vendors too, with slight changes; OSes other than Linux may add some more complications.
  • It’s called a Virtual Machine because it virtualizes the runtime environment for a Java application. So to know where to look, you need to know a few things about how a specific VM is organized. For a detailed overview of HotSpot, please refer to this article.
  • On Linux, a thread inside the HotSpot VM is mapped directly to an OS-level thread. This may not be true on all OSes, but for modern Linux kernels it is. So every thread at the OS level is a thread inside the Java application.
  • There are generally two types of threads inside a HotSpot VM: native threads and application threads. Application threads are those that run Java code, and are usually what applications use to run their own logic. Native threads run code that is not written in Java, usually C/C++, and are typically special utility threads started by the VM itself.
Identifying Threads

Since a Java program may start many threads, each executing some program code, it is necessary to understand which threads are using CPUs. On Linux, top -H will show you CPU usage on a per-thread basis. Here is an example. First, a process which consumes CPU:

top - 16:32:29 up 10:29,  3 users,  load average: 1.08, 0.64, 0.56
Tasks: 172 total,   1 running, 171 sleeping,   0 stopped,   0 zombie
Cpu(s): 48.7%us, 51.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Mem:   1500048k total,  1476708k used,    23340k free,    62012k buffers
Swap:  4128764k total,    75356k used,  4053408k free,   919836k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7599 oracle    20   0 1151m  28m 9100 S 85.5  1.9   0:12.67 java
 2575 oracle    -2   0  709m  10m 8336 S 10.6  0.7  47:34.05 oracle
 2151 root      20   0  207m  44m 6420 S  1.7  3.0   0:27.18 Xorg

If we check the details of CPU usage for PID=7599 with "top -H -p 7599", then we will see something like this:

top - 16:40:39 up 10:37,  3 users,  load average: 1.47, 1.25, 0.90
Tasks:  10 total,   1 running,   9 sleeping,   0 stopped,   0 zombie
Cpu(s): 49.3%us, 50.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Mem:   1500048k total,  1460468k used,    39580k free,    50140k buffers
Swap:  4128764k total,    76208k used,  4052556k free,   912644k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7601 oracle    20   0 1151m  28m 9100 R 85.9  1.9   7:19.98 java
 7602 oracle    20   0 1151m  28m 9100 S  1.0  1.9   0:02.95 java
 7599 oracle    20   0 1151m  28m 9100 S  0.0  1.9   0:00.01 java

So there is one execution thread inside the Java process that stays constantly at the top of the list, utilizing around 85% of a single core.

Now the next thing to know is: what is this thread doing? To answer that question we need two things: thread stacks from the Java process and a way to map an OS-level thread to a Java thread. As mentioned previously, there is a one-to-one mapping between OS-level and Java-level threads in HotSpot running on Linux.

To get a thread dump we need to use a JDK utility called jstack:

[oracle@oel6u4-2 test]$ jstack 7599
2014-02-28 16:57:23
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.12-b01 mixed mode):

"Attach Listener" daemon prio=10 tid=0x00007f05a0001000 nid=0x1e66 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Low Memory Detector" daemon prio=10 tid=0x00007f05c4088000 nid=0x1db8 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"C2 CompilerThread1" daemon prio=10 tid=0x00007f05c4085800 nid=0x1db7 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"C2 CompilerThread0" daemon prio=10 tid=0x00007f05c4083000 nid=0x1db6 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=10 tid=0x00007f05c4081000 nid=0x1db5 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=10 tid=0x00007f05c4064800 nid=0x1db4 in Object.wait() [0x00007f05c0631000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x00000000eb8a02e0> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
	- locked <0x00000000eb8a02e0> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
	at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

"Reference Handler" daemon prio=10 tid=0x00007f05c4062800 nid=0x1db3 in Object.wait() [0x00007f05c0732000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x00000000eb8a0380> (a java.lang.ref.Reference$Lock)
	at java.lang.Object.wait(Object.java:485)
	at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
	- locked <0x00000000eb8a0380> (a java.lang.ref.Reference$Lock)

"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
	at java.security.SecureRandom.getInstance(SecureRandom.java:254)
	at java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:176)
	at java.security.SecureRandom.<init>(SecureRandom.java:133)
	at RandomUser.main(RandomUser.java:9)

"VM Thread" prio=10 tid=0x00007f05c405c000 nid=0x1db2 runnable

"VM Periodic Task Thread" prio=10 tid=0x00007f05c408b000 nid=0x1db9 waiting on condition

JNI global references: 975

To map an OS-level thread to a Java thread in a thread dump, we need to convert the native thread ID from Linux to base 16, and search for "nid=$ID" in the stack trace. In our case the thread ID is 7601, which is 0x1db1, and the Java thread had the following stack trace at the time jstack was run:

"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
	at java.security.SecureRandom.getInstance(SecureRandom.java:254)
	at java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:176)
	at java.security.SecureRandom.<init>(SecureRandom.java:133)
	at RandomUser.main(RandomUser.java:9)
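The base-16 conversion is easy to do in the shell. A minimal sketch, using the thread ID from this example:

```shell
# Convert a decimal Linux thread ID (as reported by top -H) into the
# hexadecimal "nid" value that jstack prints for each thread.
TID=7601
NID=$(printf '%#x' "$TID")
echo "nid=$NID"   # prints "nid=0x1db1"
```

The `%#x` format produces the `0x` prefix, matching the `nid=0x…` form used in jstack output.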
A Way to Monitor

Here is a way to get the stack trace of a single thread inside a Java process with command-line tools (PID and TID are the process ID of the Java process and the OS-level thread ID of the interesting thread):

[oracle@oel6u4-2 test]$ jstack $PID | awk '/ nid='"$(printf '%#x' $TID)"' /,/^$/'
"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:220)
	at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:185)
	at sun.security.provider.NativePRNG$RandomIO.ensureBufferValid(NativePRNG.java:247)
	at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:261)
	- locked <0x00000000eb8a3370> (a java.lang.Object)
	at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
	at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
	at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
	- locked <0x00000000e43adc90> (a java.security.SecureRandom)
	at java.security.SecureRandom.next(SecureRandom.java:455)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

As you can see here, the thread is executing the main method of the RandomUser class – at least at the time the thread dump was taken. If you would like to see how this changes over time, a simple watch command can help you get an idea of whether this thread’s stack changes frequently:

watch -n .5 "jstack $PID | awk '/ nid='"$(printf '%#x' $TID)"' /,/^$/'"

Every 0.5s: jstack 7599 | awk '/ nid='0x1db1' /,/^$/'                                                                                Fri Mar 14 16:29:37 2014

"main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]
   java.lang.Thread.State: RUNNABLE
        at java.util.LinkedHashMap$LinkedHashIterator.<init>(LinkedHashMap.java:345)
        at java.util.LinkedHashMap$KeyIterator.<init>(LinkedHashMap.java:383)
        at java.util.LinkedHashMap$KeyIterator.<init>(LinkedHashMap.java:383)
        at java.util.LinkedHashMap.newKeyIterator(LinkedHashMap.java:396)
        at java.util.HashMap$KeySet.iterator(HashMap.java:874)
        at java.util.HashSet.iterator(HashSet.java:153)
        at java.util.Collections$UnmodifiableCollection$1.<init>(Collections.java:1005)
        at java.util.Collections$UnmodifiableCollection.iterator(Collections.java:1004)
        at java.security.SecureRandom.getPrngAlgorithm(SecureRandom.java:523)
        at java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:165)
        at java.security.SecureRandom.<init>(SecureRandom.java:133)
        at RandomUser.main(RandomUser.java:9)

So this way you can see what the application thread is doing right now. Since it could be doing quite a lot of different kinds of work, the next reasonable step is to add aggregation.
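As an aside, the awk range pattern used in these one-liners (/start/,/end/) can be exercised on its own. A small sketch with fabricated stand-in input mimicking jstack output:

```shell
# Print everything from the line containing " nid=0x1db1 " through the next
# blank line - i.e. exactly one thread's stack. The input below is fabricated.
printf '"other" daemon nid=0x1 runnable\n\tat A.a\n\n"main" prio=10 nid=0x1db1 runnable\n\tat B.b\n\n' |
  awk '/ nid=0x1db1 /,/^$/'
# Only the "main" thread's lines are printed; the "other" thread is skipped.
```

The range starts on the line matching the nid and stops at the first empty line, which is exactly how jstack separates one thread's stack from the next.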

[oracle@oel6u4-2 test]$ ./prof.sh 7599 7601
Sampling PID=7599 every 0.5 seconds for 10 samples
      6  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:220)
	at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:185)
	at sun.security.provider.NativePRNG$RandomIO.ensureBufferValid(NativePRNG.java:247)
	at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:261)
	- locked <address> (a java.lang.Object)
	at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
	at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
	at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
	- locked <address> (a java.security.SecureRandom)
	at java.security.SecureRandom.next(SecureRandom.java:455)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

      2  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:268)
	- locked <address> (a java.lang.Object)
	at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
	at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
	at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
	- locked <address> (a java.security.SecureRandom)
	at java.security.SecureRandom.next(SecureRandom.java:455)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

      1  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

      1  "main" prio=10 tid=0x00007f05c4007000 nid=0x1db1 runnable [0x00007f05c82f4000]    java.lang.Thread.State: RUNNABLE
	at java.security.SecureRandom.next(SecureRandom.java:452)
	at java.util.Random.nextInt(Random.java:189)
	at RandomUser.main(RandomUser.java:9)

Here is what’s inside the prof.sh script:

#!/bin/bash
# prof.sh - sample jstack output and aggregate identical thread stacks.
# Usage: [P_SLEEP=secs] [P_CNT=n] ./prof.sh <java-pid> [native-thread-id]

P_PID=$1   # Java process ID
P_NID=$2   # optional: native (OS-level) thread ID to filter on

if [ "$P_SLEEP" == "" ]; then
  P_SLEEP=0.5          # sampling interval in seconds
fi

if [ "$P_CNT" == "" ]; then
  P_CNT=10             # number of samples to take
fi

echo Sampling PID=$P_PID every $P_SLEEP seconds for $P_CNT samples

# If a thread ID was given, restrict jstack output to that thread's stack:
# from the line containing its hex nid down to the next blank line.
if [ "$P_NID" == "" ]; then
  CMD="awk '//'"
else
  CMD="awk '/ nid='"$(printf '%#x' $P_NID)"' /,/^$/'"
fi

for i in `seq $P_CNT`
do
  jstack $P_PID | eval $CMD
  sleep $P_SLEEP;
done |
  # Collapse each stack trace into one comma-separated line, masking lock
  # addresses so that otherwise-identical stacks compare as equal.
  awk ' BEGIN { x = 0; s = "" }
    /nid=/ { x = 1; }
    // {if (x == 1) {s = s ", "gensub(/<\w*>/, "<address>", "g") } }
    /^$/ { if ( x == 1) { print s; x = 0; s = ""; } }' |
  # Count identical stacks, keep the 10 most frequent, then restore
  # tabs and newlines for readability.
  sort | uniq -c | sort -n -r | head -10 |
  sed -e 's/$/\n/g' -e 's/\t/\n\t/g' -e 's/,//g'

The idea of the script is based on the method from the poor man’s profiler, adapted for HotSpot thread dumps. The script does the following things:

  • Takes $P_CNT thread dumps of the Java process ID passed as $1 (10 by default)
  • If a native thread ID has been supplied as $2, then searches for the thread stack of this thread in the thread dump
  • Concatenates each thread stack trace into a comma-separated string
  • Aggregates strings and sorts them by the number of occurrences
  • Prettifies the output: removes tabs, commas, and adds new lines back to the thread stack
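The aggregation stage is just classic shell frequency counting. A minimal sketch of the same sort | uniq -c | sort -n -r pipeline, run on fabricated, already-flattened stack strings:

```shell
# Rank identical stack strings by how often they were sampled -
# the most frequently seen (hottest) stack comes out first. Input is fabricated.
printf 'stackA\nstackB\nstackA\nstackA\n' | sort | uniq -c | sort -n -r
# stackA (sampled 3 times) is listed first, stackB (sampled once) last.
```

The first sort groups identical lines so uniq -c can count them; the second sort orders the counts numerically, descending.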
Conclusion

With a few small tools it is possible to understand quite a lot in almost any situation related to Java: you can find the most frequent stack trace by sampling thread dumps.
With this knowledge it is then much easier to understand why an issue is happening. In my test case, the application generates random numbers constantly without pausing, and one thread occupies one CPU core.

Categories: DBA Blogs

Cache anomaly

Jonathan Lewis - Thu, 2014-04-03 06:27

Just a quick heads-up for anyone who likes to play around with the Keep and Recycle caches.

In 11g Oracle introduced the option of serial direct path reads for tablescans on tables that were sufficiently large – which meant larger than the small_table_threshold – provided the table wasn’t already sufficiently well cached.  (The rules mean that the choice of mechanism can appear to be a little random in a production environment for tables that are near the threshold size – but if you try testing by doing “alter system flush buffer_cache” you find that you always get direct path reads in testing.)

I’ve just discovered a little oddity about this, though.  I have a table of about 50MB which is comfortably over the threshold for direct path reads. But if I create a KEEP cache (db_keep_cache_size) that is a little larger than the table and then assign the table to the KEEP cache (alter table xxx storage(buffer_pool keep)) then 11.2.0.4 stops doing direct path reads, and caches the table.

Now this doesn’t seem unreasonable – if you’ve assigned an object to the KEEP cache you probably want it (or once wanted it) to be kept in cache as much as possible; so using the KEEP cache if it’s defined and specified makes sense. The reason I mention this as an oddity, though, is that it doesn’t reproduce in 11.1.0.7.

I think I saw a bug note about this combination a few months ago – I was looking for something else at the time and, almost inevitably, I can’t find it now that I want it – but I don’t remember whether it was the 11.1 or the 11.2 behaviour that was deemed to be correct.

 Update

See comments 1 and 2 below.  I’ve written about this previously, and the caching behaviour is the correct behaviour. The patch is in 11.2.0.2 and backports are available for 11.1.0.7 and 11.2.0.1. The patch ensures that the table will be read into the cache if it is smaller than db_keep_cache_size.  (Although we might want to check – see Tanel’s notes – whether this is based on the high water mark recorded in the segment header or on the optimizer stats for the table; and I suppose it might be worth checking whether the same feature applies to index fast full scans.) From the MoS document:

With the bug fix applied, any object in the KEEP buffer pool, whose size is less than DB_KEEP_CACHE_SIZE, is considered as a small or medium sized object. This will cache the read blocks and avoid subsequent direct read for these objects.

 

 


Cache anomaly

Jonathan Lewis - Thu, 2014-04-03 06:27

Just a quick heads-up for anyone who likes to play around with the Keep and Recycle caches.

In 11g Oracle introduced the option for serial direct path reads for tablescans on tables that were sufficiently large – which meant more than the small_table_threshold – provided the table wasn’t already sufficiently well cached.  (The rules mean that the choice of mechanism can appear to be a little random in the production environment for tables that are near the threshold size – but if you try testing by doing “alter system flush buffer_cache” you find that you always get direct path reads in testing.)

I’ve just discovered a little oddity about this, though.  I have a table of about 50MB which is comfortably over the threshold for direct path reads. But if I create a KEEP cache (db_keep_cache_size) that is a little larger than the table and then assign the table to the KEEP cache (alter table xxx storage(buffer_pool keep)) then 11.2.0.4 stops doing direct path reads, and caches the table.

Now this doesn’t seem unreasonable – if you’ve assigned an object to the KEEP cache you probably want it (or once wanted it) to be kept in cache as much as possible; so using the KEEP cache if it’s defined and specified makes sense. The reason I mention this as an oddity, though, is that it doesn’t reproduce in 11.1.0.7.

I think I saw a bug note about this combination a few months ago – I was looking for something else at the time and, almost inevitably, I can’t find it when I want it – but I don’t remember whether it was the 11.1 or 11.2 behaviour that was deemed to be correct.

Update

See comments 1 and 2 below.  I’ve written about this previously, and the caching behaviour is the correct behaviour. The patch is in 11.2.0.2 and backports are available for 11.1.0.7 and 11.2.0.1. The patch ensures that the table will be read into the cache if it is smaller than the db_keep_cache_size.  (Although we might want to check – see Tanel’s notes – whether this is based on the high water mark recorded in the segment header or on the optimizer stats for the table; and I suppose it might be worth checking that the same feature applies to index fast full scans). From the MoS document:

With the bug fix applied, any object in the KEEP buffer pool, whose size is less than DB_KEEP_CACHE_SIZE, is considered as a small or medium sized object. This will cache the read blocks and avoid subsequent direct read for these objects.


SoapUI: Property Based Assertions

Darwin IT - Thu, 2014-04-03 06:14
In many cases you need to, or are allowed to, send a messageId and a correlationId to a service. In the response these IDs are mirrored back. In SoapUI it is pretty easy to generate those IDs using Groovy. The code for this is as follows:

// Generate a random UUID to use as the message id
def guidVal = "${java.util.UUID.randomUUID()}";
def testCase = testRunner.testCase;
testCase.setPropertyValue("MessageId", guidVal);
// Timestamp the message in ISO-8601 date-time format
def msgDate = new Date();
def msgDateStr = msgDate.format("yyyy-MM-dd'T'HH:mm:ss");
testCase.setPropertyValue("MessageTimeStamp", msgDateStr);
As you can conclude from the code, this is a Groovy test step in a test case. In the succeeding SOAP request you can have something like the following to embed the property values in the message:

<ber:berichtHeader>
  <ber:messageData>
    <ber:messageId>${#TestCase#MessageId}</ber:messageId>
    <ber:messageTimestamp>${#TestCase#MessageTimeStamp}</ber:messageTimestamp>
  </ber:messageData>
  <ber:correlatieId>${#TestCase#MessageId}</ber:correlatieId>
</ber:berichtHeader>
Here you see that I use the same id for both the messageId and the correlationId. A correlationId might have a longer lifespan than a messageId, but in my (simple) case we have just one-to-one conversations. In the response of the message you might find something like the following:

<abct:berichtHeader>
  <abct:messageData>
    <abct:messageId>23c20898-9164-449d-87ef-3d9ed96ba946</abct:messageId>
    <abct:messageTimestamp>2014-04-03T13:40:25.147+02:00</abct:messageTimestamp>
    <abct:refToMessageId>23bf6b7a-61f9-4c91-b932-b288c5e358be</abct:refToMessageId>
  </abct:messageData>
  <abct:correlatieId>23bf6b7a-61f9-4c91-b932-b288c5e358be</abct:correlatieId>
</abct:berichtHeader>
Nice. Apparently SoapUI generated globally unique message ids, and apparently my service mirrored them back. But how do I test that automatically? The thing is that I don't know the expected value at design time, since the messageIds are generated at runtime. But the nice thing in SoapUI is that you can define properties at several levels (General, Project, TestSuite, TestCase, etc.) and reference them almost everywhere. For instance, you can use properties at project level to build up a SOAP endpoint. So you could define an endpoint as follows:
http://${#Project#ServiceEndpoint}/${#Project#ServiceURI}
This can be used in assertions as well. So define an XPath Match assertion on your response and, instead of a fixed expected value, enter
${#TestCase#messageId}
As you can see, you reference the TestCase property by prefixing it with #TestCase, which refers to the current, running TestCase. For the sharp readers: my property reference in the assert is lower-init-case, whereas in the Groovy script and the message it has an initial capital. I found, though, that the property name is apparently not case-sensitive.

BI Forum 2014 preview – No Silver Bullets : OBIEE Performance in the Real World

Rittman Mead Consulting - Thu, 2014-04-03 03:35

I’m honoured to have been accepted to speak at this year’s Rittman Mead BI Forum, the sixth year of this expert-level conference that draws some of the best Oracle BI/DW minds together from around the world. It’s running May 8th-9th in Brighton, and May 15-16th in Atlanta, with an optional masterclass from Cloudera’s Lars George the day before the conference itself at each venue.

My first visit to the BI Forum was in 2009 where I presented Performance Testing OBIEE, and now five years later (five years!) I’m back, like a stuck record, talking about the same thing – performance. That I’m still talking about it means that there’s still an audience for it, and this time I’m looking beyond just testing performance, but how it’s approached by people working with OBIEE. For an industry built around 1s and 0s, computers doing just what you tell them to and nothing else, there is a surprising amount of suspect folklore and “best practices” used when it comes to “fixing” performance problems.

OBIEE performance good luck charm

Getting good performance with OBIEE is just a matter of being methodical. Understanding where to look for information is half the battle. By understanding where the time goes, improvements can be targeted where they will be most effective. Heavily influenced by Cary Millsap and his Method-R approach to performance, I will look at how to practically apply this to OBIEE. Most of the information needed to build up a full picture is readily available from OBIEE’s log files.

I’ll also dig a bit deeper into OBIEE, exploring how to determine how the system’s behaving “under the covers”. The primary technique for this is through OBIEE’s DMS metrics, which I have written about recently in relation to the new Rittman Mead open-source tool, obi-metrics-agent, and am using day-to-day to rapidly examine and resolve performance problems that clients see.

I’m excited to be presenting again on this topic, and I hope to see you in Brighton next month. The conference always sells out, so don’t delay – register today!

Categories: BI & Warehousing

Corporations seek to find optimal database security

Chris Foot - Thu, 2014-04-03 01:46

Though it may sound counterintuitive, a number of database experts have claimed that a company may benefit from disclosing information regarding its IT infrastructure to competitors. This may seem like a network security nightmare in and of itself, but collaborating with other market participants may provide valuable insight as to how organizations can deter cybercriminals. Others prefer to stick with improvements issued by established professionals. 

Applying updates 
Possessing quality database protection is being seen more as a profit-driver than an expense, primarily due to the fact that if digital information is stolen from a corporate server, it could potentially result in millions of dollars in losses. It's no surprise that database administration services are being consulted now more than ever. In addition, the makers of the products these professionals interact with have assessed security concerns and sought to mitigate potential problems. 

Oracle NoSQL Database 3.0 was recently released, with improved performance, usability and safeguards. The upgrade utilizes cluster-wide, password-based user authentication and session-level SSL encryption techniques to deter cybercriminals from hacking into a company infrastructure. Andrew Mendelsohn, executive vice president of database server technologies for Oracle, claimed that it helps remote DBA personnel construct and deploy state-of-the-art applications in a secure environment.

Walking around naked 
Corporations often misunderstand the advice of IT professionals to share security protocols with their competitors. It's not about exposing weaknesses to cybercriminals and providing them with a comprehensive framework of the database's infrastructure; it's about collaborating with like-minded executives attempting to find a solution to an issue that isn't going to disappear.

Evan Schuman, a contributor to Computerworld, cited Full Disclosure, an online community through which database administration support, C-suite personnel and IT professionals could publicly report network breaches and discuss methods through which security problems could be resolved.

Due to the fact that gray hat hackers could access the forum, researchers would notify software companies at least 30 days prior to posting on the website so that the developers could apply the appropriate patches beforehand. This kind of initiative identified problems before cybercriminals could exploit them. Unfortunately, to the dismay of its participants, rumors have been circulating that Full Disclosure will shut down in the near future.

"By not having this place to expose them, the vulnerabilities will remain hidden longer, they will remain unpatched longer, yet the attacks will keep coming," said an anonymous security manager for a retailer. 

Ultimately, black hat hackers have extensive communities through which they can share the same kind of information professionals posting to Full Disclosure are. If the website goes dark, cybercriminals will still have networks of communication, while law-abiding IT industry participants will not. 

SQL Developer’s Interface for GIT: Cloning a GitHub Repository

Galo Balda's Blog - Wed, 2014-04-02 22:05

SQL Developer 4 provides an interface that allows us to interact with Git repositories. In this post, I’m going to show how to clone a GitHub (a web-based hosting service for software development projects that uses the Git revision control system) repository.

First you need to sign up for a GitHub account. You can skip this step if you already have one.

Your account will give you access to public repositories that could be cloned but I suggest you create your own repository so that you can play with SQL Developer and see what the different options can do.

Once you have an account, click on the green button that says “New Repository”. It will take you to a screen like this:

github_create_repo

Give your repository a name, decide if you want it to be public or private (for private repositories you have to pay), click on the check box and then click on the green button. Now you should be taken to the main repository page.

github_repo

Pay attention to the red arrow on the previous image. It points to a text box that contains the HTTPS clone URL that we’re going to use in SQL Developer to connect to GitHub.

Let’s go to SQL Developer and click on Team –> Git –> Clone… to open the “Clone from Git Wizard”. Click on the next button and you should see the screen that lets you enter the repository details:

remote_repo

Enter the repository name, the HTTPS clone URL, your GitHub user name and your password. Click on next to connect to the repository and see the remote branches that are available.

remote_branch

The master branch gets created by default for every new repository. Take the defaults on this screen and click on next to get to the screen where you specify the destination for your local Git repository.

destination

Enter the path for the local repository and click on next. A summary screen is displayed showing the options you chose. Click on finish to complete the setup.

How do we know if it worked? Go to the path of your local repository and it should contain the same structure as in the online repository.

local_repo
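For the curious, the wizard’s clone step maps onto a plain `git clone` on the command line. Below is a minimal sketch of the same flow using the Git CLI, with a throwaway local bare repository standing in for the GitHub remote; all paths here are illustrative, and in real use the clone argument would be the HTTPS clone URL from the repository page.

```shell
# Throwaway demo area (illustrative path)
rm -rf /tmp/git-demo
mkdir -p /tmp/git-demo
cd /tmp/git-demo

# A local bare repository stands in for the GitHub remote
git init --bare remote-repo.git

# Seed the "remote" with one commit so there is a branch to clone
git clone remote-repo.git seed
cd seed
echo "hello from the remote" > README.md
git add README.md
git -c user.name=Demo -c user.email=demo@example.com commit -m "initial commit"
git push origin HEAD
cd ..

# This is the step the wizard performs: clone the remote into a local destination
git clone remote-repo.git local-repo
ls local-repo             # the working copy mirrors the remote's structure
git -C local-repo branch  # the default branch was checked out
```

The final `git clone` is exactly the wizard’s destination step: afterwards the local path contains the same structure as the online repository, which is the check described above.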

In a follow-up post I’ll show how to commit changes to the local repository and how to push them to GitHub.


Filed under: Source Control, SQL Developer Tagged: Source Control, SQL Developer
Categories: DBA Blogs

MaxPermSize Be Gone!

Steve Button - Wed, 2014-04-02 17:48

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
No further commentary required.

Two Adaptive Plans Join Methods Examples

Bobby Durrett's DBA Blog - Wed, 2014-04-02 14:49

Here is a zip of two examples I built as I’m learning about the new adaptive plans features of Oracle 12c: zip

The first example has the optimizer underestimate the number of rows and the adaptive plans feature switches the plan on the fly from nested loops to hash join.

In the second example the optimizer overestimates the number of rows and the adaptive plans feature switches the plan from merge join to nested loops.

I ran the same scripts on 12c and 11.2.0.3 for comparison.

Example 1 11g:

Plan hash value: 2697562628

------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |      1 |00:00:00.01 |      18 |
|   1 |  SORT AGGREGATE               |      |      1 |      1 |      1 |00:00:00.01 |      18 |
|   2 |   NESTED LOOPS                |      |      1 |        |      8 |00:00:00.01 |      18 |
|   3 |    NESTED LOOPS               |      |      1 |      1 |      8 |00:00:00.01 |      17 |
|*  4 |     TABLE ACCESS FULL         | T1   |      1 |      1 |      8 |00:00:00.01 |      14 |
|*  5 |     INDEX RANGE SCAN          | T2I  |      8 |      1 |      8 |00:00:00.01 |       3 |
|   6 |    TABLE ACCESS BY INDEX ROWID| T2   |      8 |      1 |      8 |00:00:00.01 |       1 |
------------------------------------------------------------------------------------------------

Example 1 12c:

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |       6 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |       6 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |      1 |      8 |00:00:00.01 |       6 |  2168K|  2168K|     1/0/0|
|*  3 |    TABLE ACCESS FULL| T1   |      1 |      1 |      8 |00:00:00.01 |       3 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |      1 |     16 |00:00:00.01 |       3 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Example 2 11g

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |      1 |00:00:00.01 |      16 |       |       |          |
|   1 |  SORT AGGREGATE               |      |      1 |      1 |      1 |00:00:00.01 |      16 |       |       |          |
|   2 |   MERGE JOIN                  |      |      1 |      4 |      1 |00:00:00.01 |      16 |       |       |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| T2   |      1 |     16 |      2 |00:00:00.01 |       2 |       |       |          |
|   4 |     INDEX FULL SCAN           | T2I  |      1 |     16 |      2 |00:00:00.01 |       1 |       |       |          |
|*  5 |    SORT JOIN                  |      |      2 |      4 |      1 |00:00:00.01 |      14 | 73728 | 73728 |          |
|*  6 |     TABLE ACCESS FULL         | T1   |      1 |      4 |      1 |00:00:00.01 |      14 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------

Example 2 12c

------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |      1 |00:00:00.01 |       5 |
|   1 |  SORT AGGREGATE               |      |      1 |      1 |      1 |00:00:00.01 |       5 |
|   2 |   NESTED LOOPS                |      |      1 |        |      1 |00:00:00.01 |       5 |
|   3 |    NESTED LOOPS               |      |      1 |      4 |      1 |00:00:00.01 |       4 |
|*  4 |     TABLE ACCESS FULL         | T1   |      1 |      4 |      1 |00:00:00.01 |       3 |
|*  5 |     INDEX RANGE SCAN          | T2I  |      1 |        |      1 |00:00:00.01 |       1 |
|   6 |    TABLE ACCESS BY INDEX ROWID| T2   |      1 |      1 |      1 |00:00:00.01 |       1 |
------------------------------------------------------------------------------------------------

The output of the plans for the 12c examples end with this line:

Note
-----
   - this is an adaptive plan

So, that tells me it is the adaptive plan feature that is changing the plan despite the wrong estimate of the number of rows.

- Bobby


Categories: DBA Blogs