
Feed aggregator

Oracle Priority Support Infogram for 18-JUN-2015

Oracle Infogram - Thu, 2015-06-18 17:05

RDBMS
How does Oracle reuse the Expired and Unexpired undo extents? from Toad World.
12c new feature [PDB save state], from MAA/HA Solutions.
Big Data
In what has to be the most focused name for a blog I’ve ever seen, the Adding Location and Graph Analysis to Big Data blog lets us know that you may: Want to try out Oracle Big Data Spatial and Graph?
Big Data Spatial and Graph Analytics for Hadoop, from BI & EPM Partner Community EMEA.

WebLogic
Managing Logs in WebLogic, from The WebLogic Server Blog.
Diagnosing performance issues front to back-end in WebLogic Server applications with Java Flight Recorder, from WebLogic Partner Community EMEA.
Java
What's Coming with JSF 2.3?, from The Aquarium.
Java Stored Procedures and SQLJUTL2.INVOKE, from Jeff Taylor’s Weblog.
SOA
A couple of goodies from Capgemini:
5 Best practices for SoapUI Pro and Oracle SOA Suite.
and
Oracle SOA Suite Metadata Repository Performance Management
And from the Oracle SOA Suite - Team Blog: Using Oracle Service Bus with Apache Kafka
Security
From Security Inside Out: Database Administrators – the Undercover Security Superheroes. The DBA and SysAdmin are the two pillars of security in most organizations. It pays to have DBAs not leave the main security burden on the SysAdmin’s shoulders. It’s an increasingly dangerous world out there. Keep your digital powder dry.
Supply Chain
OTM Best Practices Pages, from the Oracle Supply Chain Management Blog.
Demantra
Demantra Certification Study Guides, Exam Preparation, from the Demantra blog.
EBS
From the Oracle E-Business Suite Support blog:
Webcast: Outside Processing (OSP) for OPM in 12.2.4
From the Oracle E-Business Suite Technology blog:
EBS 12.2 Certified on Oracle Linux 7 and Red Hat Enterprise Linux 7
DSTv24 Timezone Patches Available for E-Business Suite
Are We Replacing Workflow in EBS with BPEL Process Manager?



Register Now - OTN Free Virtual Technology Summit!

OTN TechBlog - Thu, 2015-06-18 16:55
The Oracle Technology Network (OTN) is excited to invite you to the next Virtual Technology Summit. Learn firsthand from Oracle ACEs, Java Champions, and Oracle product experts as they share their insight and expertise on using Oracle technologies to meet today’s IT challenges. Learn through Hands-on Labs and Technical Presentations/Demos.

Register now!

•    July 14, 2015 – 9am to 12:30pm PT

•    July 15, 2015 - 9am to 12:30pm BST 

•    July 23, 2015 - 9:30am to 1:00pm IST 

View Agenda/Abstracts HERE


This interactive, online event offers four highly technical tracks, each with a unique focus on specific tools, technologies, best practices and tips:


Java - It's All about Innovation: In its 20th year, Java is used by over 9 million developers world-wide in every major industry. Learn all about Java innovation. You will discover how to program a parallel application with Java 8 lambdas, build a robot with 3D printed parts and use Docker, a best-in-class platform to test and manage releases.


Database - Mastering Oracle Database Technologies: Oracle Database 12c delivers market-leading security, high performance, availability and scalability for Cloud Application deployments. This event offers two Database tracks; one focused on Cloud Application development and deployment practices and the other on developing and deploying .Net applications on the Oracle platform. Sessions focus on Oracle Database Cloud Services, Oracle .Net development tools and technologies and more.

Middleware - It's All About Oracle WebLogic!: The Middleware track brings together three experts on Oracle Fusion Middleware to present how-to technical sessions on WebLogic Server's role in today's middleware architectures. The sessions will focus on security and authentication, service monitoring and exploration, and on WebLogic 12c's new APIs and tools for application development. Other products and technologies covered include Oracle SOA Suite, Service Bus, JMX, JAX-RS, JSON, WebSocket and more.

Operating Systems, Virtualization Technologies, and Hardware - Systems Security: Oracle Systems software is engineered for deployment in the most demanding IT settings. From Oracle Solaris, to Open Stack to Oracle Linux, Oracle has engineered their systems platforms with security as an essential element. Learn from Oracle security experts about the secure by default features and layers of defense built into Oracle Systems and defense-in-depth best practices.


Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!


NEW REWARDS! If you attend this virtual technology summit and are a member of the Oracle Technology Network Community you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it in our FAQ: Oracle Community – Rewards & Recognition FAQ.

Replicating Hive Data Into Oracle BI Cloud Service for Visual Analyzer using BICS Data Sync

Rittman Mead Consulting - Thu, 2015-06-18 15:26

In yesterday’s post on using Oracle Big Data Discovery with Oracle Visual Analyzer in Oracle BI Cloud Service, I said mid-way through the article that I had to copy the Hadoop data into BI Cloud Service so that Visual Analyzer could use it; at present Oracle Visual Analyzer is only available as part of Oracle BI Cloud Service (BICS), so the data prepared by Big Data Discovery has to be moved into BICS at some point for Visual Analyzer (VA) to access it. In the future, once Visual Analyzer is available on-premise as part of Oracle Business Intelligence 12c, we’ll be able to connect the on-premise RPD directly to Hadoop via the Cloudera Impala ODBC driver, but for now, to get this early access to VA features, we have to copy the data up to BICS and report on it from there. So how does this work?

With this second release of BICS there are actually a number of ways to get on-premise data up into BICS’s accompanying database service:

  • As before, you can export data as CSV or an Excel Spreadsheet, and upload it manually into BICS using the Data Load feature (the approach I took in this recent Oracle Magazine article)
  • You can use SQL*Developer to SFTP “carts” of Oracle database data up into BICS, where it’ll then be unpacked and used to create Oracle tables in the accompanying database
  • You can now also connect BICS to the full Oracle Database-as-a-Service, a full database rather than a single schema that also provides a SQL*Net connection that ETL tools can connect to, for example ODI or Informatica
  • And there’s now a new utility called “Data Sync” that we’ll use in this example, to replicate tables or files up into BICS’s database store with options for incremental refresh, drop-and-reload and so forth

In our case the situation is a bit more complicated in that our data sits in a Hadoop cluster, as Hive tables that we access through the Cloudera Impala MPP engine. OBIEE 11.1.1.9 can actually connect directly to Impala, and if we were just using Answers and Dashboards we wouldn’t have any more work to do; but as we’re using VA through BICS, and BICS can’t access on-premise data sources, we need some way of copying the data up into BICS so VA can access it. Again, there are many ways you can get data out of Hive on Hadoop and into databases and files, but the approach I took is this:

  1. First export each of the Hive tables I accessed through the on-premise RPD into CSV files, in my case using the Hue web-based user interface in CDH5
  2. Then use Data Sync to upload the contents of those CSV files to BICS’s database store, selecting the correct Oracle datatypes for each of the columns
  3. Do any modeling on those tables to add any sequences or keys that I’m going to need when working with BICS’s more simplistic RPD modeller
  4. Then create a replica (or as close to a replica as possible) RPD model in BICS to support the work I’m going to want to do with VA

Again, there are other ways to do this – another option is to just lift-and-shift the current 11.1.1.9 RPD up into BICS, replicate the Hive/CSV data into Oracle Database-as-a-Service and then repoint the uploaded RPD to this service, but I haven’t got a DBaaS instance to hand and I think it’d be more useful to replicate using BICS and recreate the RPD manually – as that’s what most customers using BICS will end up doing. So the first step is to export the Hive data out into CSV files using Hue, by first running a SELECT * FROM … for each table, then using the menu option to export the query results to a CSV file on my workstation.
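
As an aside, if you’d rather not click through Hue for each table, a HiveQL statement run from the Hive shell can produce a delimited file instead. This is just an illustrative sketch rather than the method used here, and the table name and output directory are assumptions:

INSERT OVERWRITE LOCAL DIRECTORY '/tmp/rm_access_logs_export'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
SELECT * FROM rm_access_logs;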

NewImage

Then it’s a case of setting up BICS Data Sync to first connect to my BICS cloud instance, and then selecting one-by-one the CSV files that I’ll be uploading into BICS via this tool.

NewImage

Of course anyone who’s been around Oracle BI for a while will recognise Data Sync as being built on the DAC, the ETL orchestration tool that came with the 7.9.x releases of BI Apps and worked in conjunction with Informatica PowerCenter to load data into the BI Apps data warehouse. The DAC is actually a pretty flexible tool (disclaimer – I know the development PMs at Redwood Shores and think they’re a pretty cool bunch of people) and more recently it gained the ability to replicate BI Apps DW data into TimesTen for use with Exalytics, so its pluggable architecture and active development team meant it provided a useful platform to deliver something in-between BICS’s ApEx data uploader and fully-fledged ODI loading into Oracle DBaaS. The downside of using something built on the DAC is that the DAC had some UI “quirks”, but equally the upside is that if you know the DAC, you can pretty much pick up Data Sync and guess how it works.

As part of uploading each CSV file, I also get to sample the file contents and confirm the datatype choices that Data Sync has suggested; these can of course be amended, and if I’m bringing in data from Oracle, for example, I wouldn’t need to go through such an involved process. 

NewImage

Then it’s a case of uploading the data. In my case one of the tables uploaded OK the first time, but an issue I hit was where Hive tables had numeric columns containing NULLs that got exported as the text “NULL”, which then caused the load to fail when trying to insert them into numeric columns. Again, a bit of knowledge of how the DAC worked came in useful as I went through the log files and then restarted parts of the load – in the end I replaced the word NULL with an empty string and the loads then succeeded.
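
Another way to avoid the problem – an alternative I didn’t use here, with illustrative table and column names – is to deal with the NULLs at export time, wrapping the nullable numeric columns in COALESCE so that an empty string is written to the CSV file rather than the literal text “NULL”:

INSERT OVERWRITE LOCAL DIRECTORY '/tmp/rm_access_logs_export'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
SELECT request_url,
       COALESCE(CAST(status_code AS STRING), '')    AS status_code,
       COALESCE(CAST(bytes_returned AS STRING), '') AS bytes_returned
FROM   rm_access_logs;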

NewImage

Now that the data is uploaded to BICS, you can check out the new tables and their contents either from within BICS’s Data Modeller function, or from within the ApEx console that comes with BICS’s database component.

NewImage

One thing I did know I’d have to give some thought to was how to do the types of RPD modelling I’d done in the on-premise RPD, within the more constrained environment of the BICS data modeller. Looking back at the on-premise RPD I’ve made a lot of use of aliases to create fact and dimension versions of key log tables (posts, tweets) and multiple versions of the date dimensions, whereas in BICS you don’t get aliases but you can create database views. What was more worrying was that I’d used columns from the main webserver log table to populate both the main logical fact table and another dimension whilst still keeping a single source table as their physical source, but in BICS I’d have to create these two sources as views and then join them on a common key, which would be tricky as the log table in Hive didn’t have an obvious primary key. In the end I “cheated” a bit and created a derived copy of the incoming log file table with a sequence number added to it, so that I could then join both the derived fact table and dimension table on this synthetic unique key column.
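
To give a flavour of what that derived copy looked like – with illustrative table and column names rather than the actual ones – the synthetic key can be added with something like ROW_NUMBER() once the data is in the BICS database, and the fact and dimension sources then exposed as views over that single copy:

CREATE TABLE access_logs_keyed AS
SELECT ROW_NUMBER() OVER (ORDER BY log_date, request_url) AS log_key,
       l.*
FROM   access_logs l;

CREATE VIEW access_log_fact_v AS
SELECT log_key, request_url, status_code, bytes_returned
FROM   access_logs_keyed;

CREATE VIEW visitor_dim_v AS
SELECT log_key, visitor_country, visitor_city
FROM   access_logs_keyed;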

NewImage

Now it’s a case of modelling out the various incoming tables uploaded via Data Sync into the facts and dimensions that the BICS data model will use; again something to be aware of is that each of these tables will need to join to its relevant dimensions or facts, so you need to leave the joining keys in the fact table rather than remove them as you’d do when creating logical fact tables in on-premise OBIEE.

NewImage

Tables that only perform one role, for example the IMP_RM_POSTS_VA table that contains details of all blog posts and web pages on our site, can be brought into the model as they are without creating views. When I add the time dimension table a second time, though, this time to create a time-dimension role table for the Tweets fact table, I have to create a view over the table – this performs a similar role to alias tables in on-premise OBIEE – and I’m then directed to create a fact or dimension object in the model from that view.
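
The view itself is nothing more than a thin SELECT over the underlying date dimension table – something along these lines, with assumed table and column names:

CREATE VIEW time_dim_tweets_v AS
SELECT calendar_date,
       calendar_month,
       calendar_quarter,
       calendar_year
FROM   time_dim;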

NewImage

Once this is all done, I end up with a basic BICS data model that’s starting to look like the one I had with the on-premise OBIEE install.

NewImage

Then finally, once I’d amended all the column names and brought in all of the additional columns and lookup tables to provide, for example, lists of Twitter user handles, I could view the model in BICS’s Visual Analyzer and start producing data visualisation projects from it.

NewImage

So – it’s admittedly a bit convoluted in the first iteration, but once you’ve set up the BICS data model and the Data Sync upload process, you can use Data Sync to refresh the replicated Hive table data in the BICS database instance and keep the two systems in sync. As I said, OBIEE12c will come with Visual Analyzer as part of the on-premise install, but until then this is the way we link VA to Big Data Discovery on Hadoop to enable Visual Analyzer access to BDD’s enriched datasets.

Categories: BI & Warehousing

SQL Server 2016 : availability groups and load balancing features

Yann Neuhaus - Thu, 2015-06-18 14:53

Let’s continue with this third post about SQL Server AlwaysOn and availability groups.

Other studies are available here:

 

This time I’ll talk about read-only secondaries and the new load-balancing support that will be introduced by SQL Server 2016.

First of all, SQL Server 2014 improved read-only secondary availability by solving the issue of secondary accessibility when the primary is offline. However, the redirection to a readable secondary was still basic because it concerned only the first secondary replica defined in the configured priority list. So, unless you used a third-party tool, it was not possible to make efficient use of all the resources available from the secondaries. Fortunately, the next SQL Server version will change the game by introducing native load-balancing capabilities.

In order to be able to use this new feature, you must define:

  • The list of possible secondary replicas
  • A read-only route for each concerned replica
  • A routing list that includes read-only replicas and load-balancing rules

At this point I admit I expected a GUI for configuring both the read-only routes and the routing list rules in a user-friendly fashion, even if, to be honest, I prefer using T-SQL. Anyway, let’s try to configure the secondary replicas in a round-robin fashion as follows:

 

/* enable read-only secondary replicas */
ALTER AVAILABILITY GROUP [2016Grp]
MODIFY REPLICA ON N'SQL161' WITH
(
    SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)
);
GO

ALTER AVAILABILITY GROUP [2016Grp]
MODIFY REPLICA ON N'SQL161' WITH
(
    SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://SQL161.dbi-services.test:1433')
);
GO

ALTER AVAILABILITY GROUP [2016Grp]
MODIFY REPLICA ON N'SQL162' WITH
(
    SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)
);
GO

ALTER AVAILABILITY GROUP [2016Grp]
MODIFY REPLICA ON N'SQL162' WITH
(
    SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://SQL162.dbi-services.test:1433')
);
GO

ALTER AVAILABILITY GROUP [2016Grp]
MODIFY REPLICA ON N'SQL163' WITH
(
    SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)
);
GO

ALTER AVAILABILITY GROUP [2016Grp]
MODIFY REPLICA ON N'SQL163' WITH
(
    SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://SQL163.dbi-services.test:1433')
);
GO

/* configure replicas priority list */
ALTER AVAILABILITY GROUP [2016Grp]
MODIFY REPLICA ON N'SQL161' WITH
(
    PRIMARY_ROLE (READ_ONLY_ROUTING_LIST=(('SQL162','SQL163')))
);

ALTER AVAILABILITY GROUP [2016Grp]
MODIFY REPLICA ON N'SQL162' WITH
(
    PRIMARY_ROLE (READ_ONLY_ROUTING_LIST=(('SQL161','SQL163')))
);
GO

ALTER AVAILABILITY GROUP [2016Grp]
MODIFY REPLICA ON N'SQL163' WITH
(
    PRIMARY_ROLE (READ_ONLY_ROUTING_LIST=(('SQL162','SQL161')))
);
GO

 

My test lab includes 3 replicas (SQL161, SQL162 and SQL163). The secondaries will be used as read-only replicas with the new load-balancing feature.

Note the double brackets around the replica list: they define the load-balancing mechanism for the concerned replicas. In my context I have only two read-only replicas, but the rules are defined as follows:

  • (replica1, replica2, replica3): no load-balancing capabilities in this case. The first replica will be used, then the second and finally the third.
  • ((replica1, replica2), replica3): replica1 and replica2 will be used in a round-robin fashion. replica3 will be used only if both replica1 and replica2 are unavailable.

Now let’s play with this new infrastructure by using the sqlcmd command as follows:

 

blog_52_-_1-_sqlcmd_readonly

 

As a reminder, you have to meet some other requirements in order for transparent redirection to a secondary replica to work correctly: use the TCP protocol, reference the availability group listener and the concerned database directly, and set the application intent attribute to read-only. So in my case, I reference the LST-2016 listener and the killerdb database directly, and I also use the -K parameter with the READONLY attribute. Finally, I run the query SELECT @@SERVERNAME in order to know which replica I am on after login.

I ran this command several times and I can confirm that the load-balancing feature plays its full role.

 

blog_52_-_2-_sqlcmd_tests

 

However, let’s play now with the following PowerShell script:

 

Clear-Host;

$dataSource = "LST-2016";
$database = "killerdb";
$connectionString = "Server=tcp:$dataSource;Integrated Security=SSPI;Database=$database;ApplicationIntent=ReadOnly";

$i = 0;

while ($i -le 3)
{
    Write-Host "Test connexion initial server nb : $i - $dataSource " -NoNewline;
    Write-Host "";
    Write-Host "";

    Try
    {
        $connection = New-Object System.Data.SqlClient.SqlConnection;
        $connection.ConnectionString = $connectionString;

        $connection.Open();

        $sqlCommandText = "SELECT 'Current server : ' + @@SERVERNAME as server_name";
        $sqlCommand = New-Object System.Data.SqlClient.SqlCommand($sqlCommandText, $connection);
        $sqlCommand.ExecuteScalar();

        $connection.Close();

        $sqlCommand.Dispose();
        $connection.Dispose();
    }
    Catch [Exception]
    {
        Write-Host "KO" -ForegroundColor Red;
        Write-Host $_.Exception.Message;
    }

    Write-Host "";

    Start-Sleep 3;

    $i++;
}

 

The result is not the same. The redirection to a read-only replica works perfectly, but there was no load-balancing mechanism in action this time, as shown below:

 

blog_52_-_3-_powershell_tests

 

 

What’s going on in this case? To be honest, I had forgotten that PowerShell uses connection pooling by default (thanks to Brent Ozar – @BrentO – for putting me on the right track).

Let’s take a look at the output of an extended event session that includes the following events:

  • sqlserver.login
  • sqlserver.logout
  • sqlserver.read_only_route_complete
  • sqlserver.rpc_completed
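
For reference, a session with those events can be created with T-SQL along these lines (the session name and file target are my own choices, not necessarily those used for the screenshots below):

CREATE EVENT SESSION [ReadOnlyRouting] ON SERVER
ADD EVENT sqlserver.login,
ADD EVENT sqlserver.logout,
ADD EVENT sqlserver.read_only_route_complete,
ADD EVENT sqlserver.rpc_completed
ADD TARGET package0.event_file (SET filename = N'ReadOnlyRouting.xel');
GO

ALTER EVENT SESSION [ReadOnlyRouting] ON SERVER STATE = START;
GO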

 

blog_52_-_4-_xe_sqlcmd

 

You can notice that the sqlcmd tool doesn’t use connection pooling (is_cached column = false). In this case, SQL Server calculates the read-only route for each run.

However for my PowerShell script the story is not the same as shown below:

 

blog_52_-_5-_xe_pw

 

The first connection is not pooled and we can notice only one read-only route calculation from SQL Server. All of the subsequent connections are pooled and are technically still alive on the SQL Server instance. This is why the load-balancing mechanism is not used in this case. So this is an important point to keep in mind if you plan to benefit from this new feature.

I also had a dream: having a real load-balancing feature based on a resource scheduling algorithm… maybe the next step? :)

See you

 

 

 

Is OAM alone enough or should I also learn OIM/SOA for Apps DBA ?

Online Apps DBA - Thu, 2015-06-18 14:49

We recently launched Oracle Access Manager Training (next batch starting on 6th July), and one question that I’ve seen quite regularly is: “I am an Oracle Apps DBA – will learning OAM alone (without OIM/SOA and WebLogic) alongside my Apps DBA skills give me better opportunities, or is it required to learn the entire Identity Management Suite?”

Since OAM/OIM is very close to my heart (I wrote my first book on this topic) and I meet a lot of Apps DBAs with a similar question, I thought I should cover this question here.

Before I answer this question, let me first explain a bit about OAM/OIM/SOA and other IAM products. Oracle Access Manager is the recommended Web Single Sign-On product from the Oracle Identity & Access Management Suite, where other products include OIM, OID, OUD, OVD, OES, OMS, OAAM, eSSO, etc.

Oracle Identity Manager is an identity provisioning and management product that uses Oracle SOA Suite for approval-based workflows and as its orchestration engine.

OAM and OIM can be implemented independently, and if you need just Single Sign-On (or Access Management) then Oracle Identity Manager (OIM) is not required. If you are using Oracle E-Business Suite (Apps R12) and just require SSO integration with Microsoft Active Directory (MS-AD)/Windows Native Authentication, or with other Oracle products like OBIEE, WebCenter, etc., then OIM/SOA is not required.

Having said that, it is better to know more, so learning OIM will definitely help, but I always believe in starting small (keep things simple). I learnt OAM (Oblix at that time) in 2003 and started OIM (Xellerate at that time) two years later in 2005.

 

Note: OAM is deployed on WebLogic Server, so basic WebLogic concepts must also be learnt as part of OAM; hence we cover WebLogic Domains, Admin/Managed Servers, Pack/Unpack, JDBC, etc. in our OAM 11gR2 Training/Workshop.

Tip for Apps DBAs to get a better day rate/bigger role: learn OAM and integrate EBS (R12) with OAM for Single Sign-On.

If you have any doubt about what topics to learn, check what our OAM experts have to say, or leave a comment here for any other query.

Related Posts for Access Manager
  1. Integration Steps – 10g AS with OAM (COREid)
  2. OAS – OAM (Access Manager / Oblix COREid) Integration Architecture
  3. Oblix COREid and Oracle Identity Management
  4. Installing Oracle Access Manager (Oblix COREid / Netpoint)
  5. Oracle Access Manager (Oblix COREid) 10.1.4.2 Upgrade
  6. Access Manager: WebGate Request Flow
  7. Introduction to Oracle Access manager : Identity and Access System – WebPass , Webgate, Policy Manager
  8. Certified Directory Server (AD, OID, Tivoli, Novell, Sun or OVD) and their version with Oracle Access Manager
  9. Install Oracle Access Manager (OAM) 10.1.4.3 Identity Server, WebPass, Policy Manager, Access Server, WebGate
  10. Multi-Language or multi-lingual Support/Documentation for Oracle Access Manager (OAM)
  11. OAM Policy Manager Setup Issue “Error in setting Policy Domain Root” : OAM with AD and Dynamic Auxiliary Class
  12. OAM 10.1.4.3 Installation Part II – Indentity Server Installation
  13. OAMCFGTOOL : OAM Configuration Tool for Fusion Middleware 11g (SOA/WebCenter) Integration with OAM
  14. Oracle Access Manager Installation Part III : Install WebPass
  15. OAM : Access Server Service Missing when installing Access Manager with ADSI for AD on Windows
  16. OAM : Create User Identity – You do not have sufficient rights : Create User Workflow
  17. Password Policy in Oracle Access Manager #OAM
  18. Changes in Oracle Access Manager 11g R1 (11.1.1.3)
  19. Agents in OAM 11g (WebGate 10g/11g, OSSO/mod_osso, AccessGate IDM Domain agent) aka PEP (Policy Enforcement Points)
  20. How to install Patches in Oracle Access Manager 10g : Bundle Patch / BPXX
  21. Session Management in #OAM 11g : SME , Idle Timeout, Session Lifetime
  22. Part IX : Install OAM Agent – 11g WebGate with OAM 11g
  23. How to integrate OAM 11g with OID 11g for User/Identity Store
  24. How to install Bundle Patch (BP) on OAM 11.1.1.3 – BP02 (10368022) OAM 11.1.1.3.2
  25. Error starting OAM on IBM AIX : AMInitServlet : failed to preload on startup oam java. lang. Exception InInitializer Error
  26. OAMCFG-60024 The LDAP operation failed. OAMCFG-60014 Oracle Access Manager is not configured with this directory
  27. How to Edit (create, delete, modify) Identity Store of OAM 11g from command line (WLST) – editUserIdentityStoreConfig
  28. OAM WebGate Registration RREG – Resource URL format is not valid
  29. Blank Screen on OAM 10g Identity Server Console : /identity/oblix
  30. Oracle 10g/11g webgate software download location
  31. How to find Webgate 10g/11g Version and Patches Applied
  32. OAM integration with OIF : Authentication Engine or Service Provider
  33. OAM 11g integration with Microsoft Windows Active Directory (WNA, IWA, Kerberos) for Zero Sign-On
  34. OAM 11g : How to change Security Mode (OPEN, SIMPLE, CERT) – WebGate to Access Server Communication
  35. Forgot Password link on OAM Login Page
  36. OIM-OAM-OAAM integration – Account Lockout in OAM obLoginTryCount , oblockouttime, MaxRetryLimit
  37. How to identify which LDAP (OID/AD/OVD) server OAM 11g connects to and as what user ?
  38. OAM 10g WebGate installation failed with Sorry Invalid User or Invalid Group
  39. Beware if you are running OAM in SIMPLE mode with 10g WebGate : Oracle AccessGate API is not initialized
  40. Troubleshooting : 11g WebGate with OHS 11g integrated with OAM 11g : OBWebGate_AuthnAndAuthz: Oracle AccessGate API is not initialized
  41. Deploying OAM in high availability across data centres in Active Active cluster : New Feature in OAM 11gR2 PS2
  42. New OAMConsole in OAM 11gR2 PS2 : Enabling Federation, STS, Mobile & Social in Oracle Access Management Suite 11.1.2.2
  43. OAM/WebGate troubleshooting : WebGate on Apache/OHS Unable to read the configuration file
  44. Is OAM alone enough or should I also learn OIM/SOA for Apps DBA ?

The post Is OAM alone enough or should I also learn OIM/SOA for Apps DBA ? appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

New Monitor… Again…

Tim Hall - Thu, 2015-06-18 12:40

I’ve just bought myself a Dell U3415W 34-Inch IPS LCD Monitor for use with the laptop. It’s quite an extravagant purchase, but it’s pretty amazing. Having 3440×1440 resolution on a single screen feels much more useful than sitting a couple of smaller monitors next to each other. It feels almost like having 3-4 screens in one.

I bought it to replace the Asus PB298Q 29 inch Widescreen AH-IPS Multimedia Monitor I got about 7 months ago. The resolution of 2560×1080 is pretty darn decent, but I don’t like having a depth of 1080. When you are using a wider screen, the limited height feels really restrictive for some reason.

Currently I have both screens plugged into the laptop, but I can’t see it staying that way. I’ve really had no reason to look at the MacBook or ASUS screen yet. I’ll see how I feel over the next few days. If I’m happy to let it go I’ll probably take the ASUS screen to work and use it there. It’s better than either of my work monitors. :)

Ditching the second screen will also free up some room on my desk, which is looking a little crazy at the moment… :(

Cheers

Tim…


Create #em12c users fast and easy!

DBASolved - Thu, 2015-06-18 11:57

Over the last few months, I’ve been working on a project where I’ve started to dive into EM CLI and the value that EM CLI brings to cutting down on things like creating Enterprise Manager users. Hence the reason for this post.

Note: If you haven’t looked into EM CLI yet, I encourage you to do so. A good starting point is here. Plus there is a whole book written on the topic by some friends and guru’s of mine, here.

Creating users in Enterprise Manager 12c is pretty simple as it is. Simply go to Setup -> Security -> Administrators. When you get to this screen, click on either the Create or Create Like button.

After clicking Create or Create Like, Enterprise Manager takes you to a five-step wizard for creating a user. This wizard allows you to provide details about the user, assign roles, assign target privileges, assign resource privileges and then review what you have done.

Depending on how many users you have to create, this wizard is either a great way of creating users or a slow way of creating them. Using EM CLI, users can be created from the command line very quickly and easily, with no need to use the GUI wizard at all. :)

The syntax to create a user from the command line is as follows:

emcli create_user
-name="name"
-password="password"
[-type="user_type"]
[-roles="role1;role2;..."]
[-email="email1;email2;..."]
[-privilege="name[;secure-resource-details]]"
[-separator=privilege="sep_string"]
[-subseparator=privilege="subsep_string"]
[-profile="profile_name"]
[-desc="user_description"]
[-expired="true|false"]
[-prevent_change_password="true|false"]
[-department="department_name"]
[-cost_center="cost_center"]
[-line_of_business="line_of_business"]
[-contact="contact"]
[-location="location"]
[-input_file="arg_name:file_path"]

The beautiful part of EM CLI is that it can be used with any scripting language. Since I like to use Perl, I decided to write a simple script that can be used to create a user from the command line using EM CLI.

#!/usr/bin/perl -w
use strict;
use warnings;

#Parameters
my $oem_home_bin = "$ENV{OMS_HOME}/bin";   # assumes the OMS_HOME environment variable points at the OMS home
my ($username, $passwd, $email) = @ARGV;
my $pwdchange = 'false';

#Program
if (not defined $username or not defined $passwd or not defined $email)
{    
    print "\nUsage: perl ./emcli_create_em_user.pl username password email_address\n\n";    
    exit;
}

system($oem_home_bin.'/emcli login -username=sysman');
system($oem_home_bin.'/emcli sync');
my $cmd = 'emcli create_user -name='.$username.' -password='.$passwd.' -email='.$email.' -prevent_change_password='.$pwdchange;
#print $cmd."\n";
system($oem_home_bin.'/'.$cmd);
system($oem_home_bin.'/emcli logout');

Now using this bit of code, I’m able to create users very rapidly using EM CLI with a command like this:

perl ./emcli_create_em_user.pl <username> <password for user> <email address>

Well, I hope this helps others look at and start using EM CLI when managing their EM environments.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: EMCLI, OEM
Categories: DBA Blogs

Select One Choice with Select Items Tag

Andrejus Baranovski - Thu, 2015-06-18 08:26
If you need to implement a select one choice based on alternative data sources (not based on ADF BC), you shouldn't use af:forEach inside the af:selectOneChoice component. Don't be confused by the af:forEach tag: it is supposed to generate multiple components, not to iterate over a collection of objects and render HTML for each item. There can be cases when choice list data comes through duplicated with the af:forEach tag applied. I would suggest constructing an array of SelectItem objects and returning it to the f:selectItems tag to be rendered.

This is how a proper af:selectOneChoice definition should look. The array of items is rendered in the choice list through the f:selectItems tag:


The Value property for f:selectItems can be entered manually or through the wizard when creating af:selectOneChoice; it should point to the custom method where the array of SelectItem objects is constructed:


The custom method can read data from any source and construct the array of SelectItem objects. This is the input for the f:selectItems tag, which knows how to render the list of choice list items from it:


This is how it looks at runtime; the choice list is working fine, with no need for af:forEach:


Download sample application - CustomSelectListApp.zip.

Introducing Rittman Mead’s New User Engagement Service

Rittman Mead Consulting - Thu, 2015-06-18 05:15

NewImage

Although you’ve been hearing a lot on the blog recently about extending Oracle BI&DW with Hadoop and big data technologies, another initiative Rittman Mead have been working on over the past couple of months is user engagement. All too often developers create BI systems that meet all an organisation’s technical goals, but then fail to get adopted by users for reasons we can’t understand; all we know is that our brilliantly-architected system just doesn’t seem to engage users, and that’s just as fatal to a project as the ETL not working or the underlying technology being out-of-date.

Surveys have shown that adoption rates for new BI systems are often as low as 25%, because we don’t focus enough on the user experience (UX) or user interface, queries run too slowly, the data or reports just aren’t relevant, or the overall experience just isn’t up to the standard that internet users expect now. If you’re a BI manager or a CIO for your organisation it’s essential that you know whether the BI systems you’re providing for your users are actually being used, and it’s quite often the case that a BI system provided by the IT department is thought to be well-used by the end-users, but when you check the usage stats you’ll find that engagement has really fallen off since the initial rollout of the system – and the worst thing is, you probably don’t really know why this is; all you know is that users aren’t happy and they’re now looking to implement some self-service tools that’ll break your data governance model.

To help you improve user engagement for your BI system and increase the return on your investment in Oracle technology, Rittman Mead are developing a User Engagement Service to address these issues, along with a User Engagement Toolkit developed by the likes of Robin Moffatt, Jordan Meyer, Tom Underhill and other OBIEE and UX experts within Rittman Mead. Initial details of the service are on our Rittman Mead User Engagement Service homepage and the Rittman Mead User Engagement Service Datasheet; look out for more information on these services over the coming days and weeks, and if you’re interested in getting involved in our early-adopter program you can sign up using the form below and we’ll get back to you shortly.


Categories: BI & Warehousing

SQL Server 2016 CTP2: Stretch database feature - Part 2

Yann Neuhaus - Thu, 2015-06-18 02:55

In my previous blog SQL Server 2016 CTP2: Stretch database feature - Part 1, I explained how to prepare your instance and your database to enable the Stretch feature for your table.

Now I present you how to enable the feature for your tables!

Prerequisites

Enabling Stretch Database at the table level requires ALTER permissions on this table.

 

Limitations

For the moment, in SQL Server 2016 CTP2 there are several important limitations, which drastically reduce the scope of use of the Stretch Database feature.

These limitations include Table Properties, Data Types, Column Properties, Column Types and Constraints and Indexes.

You can obtain the detailed list with the following Microsoft article: Requirements and limitations for Stretch Database.

 

Enable Stretch for a Table

First, I create a new table that avoids all the limitations I explained above. Here is the SQL script:

Use AdventureWorks2014;
CREATE TABLE Stretch_Table
(
    Column1 nvarchar(50),
    Column2 int,
    Column3 nchar(10)
)

If you have followed all the steps in my previous blog, you should be ready to enable the Stretch feature for your table!

As everything has been pre-configured, you just need to enable the feature for the targeted table.

enable_table_for_stretch.png
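
If you prefer T-SQL to the wizard, my understanding of the CTP2 syntax (which may well change in later builds, so treat this as a sketch rather than the definitive command) is along these lines:

USE AdventureWorks2014;
GO

ALTER TABLE dbo.Stretch_Table
    ENABLE REMOTE_DATA_ARCHIVE WITH (MIGRATION_STATE = ON);
GO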

 

If we take a look at the SQL Database server in Azure, we must be able to visualize the "Stretch_Table" table:

SQL_azure_20150618-090340_1.png

 

You can notice that a new column named "batchID" has been added to the original table. It is a non-null bigint, incremented each time you insert a new row into your table, and it is the PRIMARY KEY of the table.

Moreover, your table still remains visible on your on-premise instance and you can perform your normal work.

 

Next step

Performing a backup of a database with the Stretch feature enabled will not include the data stored in Azure. This means you also need to perform a backup in Azure.

I will detail this part in my next blog.

Please Sell

Floyd Teter - Wed, 2015-06-17 18:12
Oracle's financial results for Q4 of their fiscal year 2015 came out a few minutes ago.  Seems they missed targets on license revenues and earnings per share.  So the stock may be headed for the professional investor's dog house.  I've even read of an analyst or two publishing a "sell" rating on Oracle stock.

Geez, please sell.  Drive the price down.  I can buy some more shares on the cheap and laugh all the way to the bank.  Let me explain.

First, license revenues shrank.  Gee, no kidding?  Oracle is transitioning away from licensed software to cloud and license revenues shrank?  (insert sarcasm tag here) Better dump the stock before the bottom falls out! (end of sarcasm)

Second, Oracle (like every other tech firm recently) was theoretically dinged by exchange rates.  The yuan, peso, ruble, ducat, yen, etc. all fell against the dollar.  But currency rates average out...even over the short run.  Take a snapshot after the Greeks and the European Union work out their issues in a few weeks...regardless of how they work it out, bet that exchange rate issue becomes less of an issue.

Third, the name of the game in measuring success in providing whatever-as-a-service is recurring revenue.  You'll compromise margins on new subscriptions to grow share, then work hard to minimize churn...which maximizes very high margin recurring revenue.  So the telling numbers for Oracle's future as a cloud provider:  subscription revenue growth, recurring revenue growth, and recurring revenue margins.  Which I do believe were the high points in the results.

Fourth, the technical fundamentals...which is really the most important factor...are very good.  Solid products with lots of functionality.  I'm not too concerned about Oracle's financial viability as long as they keep producing great products.

So not only am I not worried, I'm actually pretty enthusiastic about the results and what they really mean.

So please sell...I'd like to increase my minuscule Oracle holdings.  If enough folks sell, I'll be able to do so on the cheap.

The Role of Today's CIO in Business Transformation

WebCenter Team - Wed, 2015-06-17 13:58
CIOs are extremely important to their executives and businesses, and the cloud empowers them to take the lead. The best CIOs in the industry are beginning to grasp that and train their staff to become experts in cloud technologies.

Expectations of business-driven IT are higher than ever before.  Words like agility, mobility, digital, data, cloud and customer centricity are common and constant from Line of Business (LOB) executives like CMOs. Today’s business must move at the pace of – scratch that – ahead of the pace of today’s consumer. Businesses must anticipate customers’ needs to be truly customer-centric organizations.  Innovation through the cloud is a big force driving customer experience and customer centricity.

Kurian believes the CIO will play a pivotal role with analytics, integration, security and privacy, and ultimately business transformation. In this video he states “cloud actually empowers CIOs to take the lead with innovation.” He goes on to say that today’s CIO role is not diminished just different, requiring a new business mindset. The CIO is very much essential to this new digital, customer-centric business revolution.

Watch the video here.

Scripting Languages & Oracle: Blogs to Follow

Christopher Jones - Wed, 2015-06-17 13:58
If you haven't come across them, keep an eye on the blogs of Blaine Carter and Dan McGhan. They will be posting mostly on developing with Python and JavaScript, respectively. Blaine & Dan work at Oracle under the leadership of Steven Feuerstein, who is well known in the Oracle PL/SQL world for his enthusiasm for application development.

Combining Oracle Big Data Discovery and Oracle Visual Analyzer on BICS

Rittman Mead Consulting - Wed, 2015-06-17 13:11

So now that Oracle Visual Analyzer is out as part of Oracle BI Cloud Service, and Visual Analyzer (VA) is due to ship on-premise as part of OBIEE12c sometime in the next twelve months, several of our customers have asked us if they need both VA and Oracle Big Data Discovery if they’re looking to analyse Hadoop data as part of a BI project. It’s an interesting question so I thought it’d be useful to go through my thoughts on how the two tools work together, when to use one, and when to use the other.

Taking our standard “big data” dataset of website log activity, Twitter mentions and page details from our WordPress blogging software, before Visual Analyzer came along the two usual ways we’d want to analyze these datasets were either a traditional BI metrics analysis-type scenario, or a data discovery/visualization scenario where we’re more interested in the content of the data rather than precise metrics. My half of the recent BI Forum 2015 Masterclass goes through these two scenarios in detail (presentation slides in PDF format here), and it’s Big Data Discovery that provides the more “Tableau”-type experience with fast point-and-click access to both datasets joined together on their common website page URL details.

NewImage

Now we have Visual Analyzer though, things get interesting; in my article on Visual Analyzer within BICS I showed a number of data visualisations that look pretty similar to what you’d get with Big Data Discovery, and when we have VA available on-site as part of OBIEE12c we’ll be able to connect it directly to Hadoop via Cloudera Impala, potentially analyzing the whole dataset rather than the (representative) sample that Big Data Discovery loads into its Endeca Server-based engine.

NewImage

So if the customer is looking to analyze data held in Hadoop and Visual Analyzer is available, where’s the value in Big Data Discovery (BDD)? To my mind there are three areas where BDD goes beyond what VA can do, or helps you perform tasks that you’ll need to do before you can work with your data in VA:

  • The initial data discovery, preparation and cleansing that you’d otherwise have to do using HiveQL or an ETL tool such as ODI12c
  • Providing you with a high-level overview and landscape of your data, when that’s more important to you at the time than precise counts and drill-down analysis
  • Understanding how data joins together, and how best to use your datasets in terms of metrics, facts, dimensions and so forth

Taking the data preparation and cleansing part first, I’ve covered in several blogs over the past couple of years how tools such as ODI can be used to create formal, industrialized data pipelines to ingest, prepare and then summarise data coming into your Hadoop system, and how you can drop down to languages such as HiveQL, Pig and Spark to code these data transformations yourself. In the case of my webserver log, Twitter and page details datasets this work would include standardising URL formats across the three sources, geocoding the IP addresses in the access logs to derive the country and city for site visitors, turning dates and times in different formats into ones that work as Hive timestamps, and so forth. Doing all of this using ODI and/or HiveQL can be a pretty technical task, so this is where BDD comes in useful, even if VA and an OBIEE RPD are the final destination for the data.
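
To give a flavour of the sort of HiveQL involved if you were to hand-code it – a purely illustrative sketch with assumed table and column names, not the actual transformations used – cleaning the URLs and parsing the log timestamps might look something like this:

INSERT OVERWRITE TABLE access_logs_clean
SELECT lower(regexp_replace(request_url, '/$', ''))                  AS page_url,
       from_unixtime(unix_timestamp(log_ts, 'dd/MMM/yyyy:HH:mm:ss')) AS log_timestamp,
       visitor_ip
FROM   access_logs_raw;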

NewImage

Datasets that you transform and enrich in Big Data Discovery can be saved back to Hive as new Hive tables, or exported out as files for you to load into Oracle using SQL*Developer, or upload into BICS to use in Visual Analyzer. Where BDD then becomes useful is giving you a quick, easy to use overview of your dataset before you get into the serious business of defining facts, dimensions and aliases against these three Hive tables. The screenshots below show a couple of typical Big Data Discovery Studio data visualisation pages against the webserver logs dataset, and you can see how easy it is to create simple charts, tag clouds and maps against the data you’re working with – the aim being to give you an overview of the data you’re working with, help you understand its contents and “shape”, before moving further down the curation process and applying formal structures to the data.

NewImage

Where things get harder to do within Big Data Discovery is when more-and-more formatting, complex joining and “arranging” of the data is required; for example, BDD gives you a lot of flexibility in how you join datasets, but this flexibility can be confusing for end-users if they’re then presented with every possible variation of a three-table join rather than having the data presented to them as simple facts and dimensions. But this is how we’d really expect it – if you go back to the logical data architecture I went through in the blog post a while ago about the updated Oracle Information Management Reference Architecture, the trade-off in using schema-on-read data reservoirs is that this data, although quick and cheap to store, requires a lot more work to be done each time you access the data to get “value” from it.

NewImage

OBIEE, in contrast, makes you define your data structures in-full before you present data to end-users, dividing data in the three datasets into measures (for the fact tables) and attributes (for dimensions) and making it possible to add more dimension lookups (for a date dimension, for Twitter users in this case) and separate the overall set of data into more focused subject areas. Working with the dataset on the on-premise version of OBIEE first, the RPD that I created to present this data in a more formal, dimensional and hierarchical way to users looked like this:

NewImage

I can leave this RPD connected directly to the underlying Hive and Impala tables if I want to use just Answers and Dashboards, but for the time being I need to export the underlying Hive tables either into CSV files or into an Oracle Database before uploading into Visual Analyzer; come OBIEE12c this should all be seamless. What users are then presented with when they go into Visual Analyzer is then something like this:

NewImage

Notice how the various attributes of interest are grouped into fact and dimension table folders, and there’s a simple means to add calculations, change the visualisation type and swap chart settings around. Note also that the count on the screen is the actual count of records in the full dataset, not the sample that BDD takes in order to provide an overview of values and distribution in the full dataset. Whilst it’s relatively easy to create a line chart, for example, to show tweets per user within BDD, using Visual Analyzer it’s just a case of double-clicking on the relevant measures and attributes on one side of the page, selecting and arranging the visualisation and applying any filters using dialog boxes and value-selectors – all much more familiar and obvious to BI users.

NewImage

Enrichments to the data that I’ve done in Big Data Discovery should in most cases be able to come through to Visual Analyzer; for example, I used Big Data Discovery’s text enrichment features to determine the sentiment of blog post titles, tweets and other commentary data, and I could use the latitude and longitude values derived during the visitor IP address geocoding to plot site visitors on a map. Using the sentiment value derived from the post title, tweet contents and other textual data, I can create a chart of our most popular posts mentioned on Twitter and colour the bars to show how positive, or negative, the comments about the post were.

NewImage

The only thing that Visual Analyzer can’t yet do that would be useful is to include more than one subject area in a project. To analyze the number of tweets and the number of page views for posts in a scatter chart, for example, I currently have to create a separate subject area that includes both sets of facts and dimensions, though I understand VA on BICS will gain the ability to include multiple subject areas in a forthcoming release.

NewImage

So in summary, I’d say that Big Data Discovery, and Visual Analyzer as part of BI Cloud Service, are complementary tools rather than one being able to replace the other in a big data context. I find that Big Data Discovery is a great tool to initially understand, catalog and view at a high level the data sources going into VA, and then to do some user-driven cleaning up of the data, enhancing it and enriching it before committing it to the formal dimensional model that Visual Analyzer requires.

In its BICS guise there’s the additional step of having to export the Hadoop data out of your Big Data Appliance or other Hadoop cluster and upload it in the form of files using BICS’s data load or the new Data Sync utility, but when VA comes as part of OBIEE12c in the next twelve months you’ll be able to directly connect to the Hadoop cluster using Impala ODBC and analyse the data directly, in-place.

I’ll be covering more on BICS over the next few weeks, including how I got data from Hadoop into BICS using the new Data Sync utility.

Categories: BI & Warehousing

How to learn Oracle Access Manager (OAM) 11gR2

Online Apps DBA - Wed, 2015-06-17 12:57

Screen shot 2015-06-17 at 00.38.01

Identity Management jobs have grown exponentially (especially in the last year) and Oracle is among the Leaders in Gartner’s Magic Quadrant for IAM. In this post I am going to cover how to learn Oracle Access Manager and the things you should learn for Oracle Access Manager (OAM).

What is Oracle Access Manager : If you don’t know already Oracle Access Manager (OAM) is Oracle’s recommended Single Sign-On (SSO) solution for Web Access Management.

Why should you learn OAM : Single Sign-On and Web Access Management are very important for securing applications. With cloud-based SaaS applications, it is even more important for enterprises to implement federated sign-on (Federation is now part of OAM in the 11gR2 version). Oracle Access Manager (OAM) is also mandatory in Oracle Fusion Applications.

What roles are available for OAM : You can be an OAM Architect, Administrator, Implementor, or Developer.

What should I learn in OAM : For all the OAM roles, you should have a fair understanding of the architecture, components, and functionality of OAM. If you are an Architect, Administrator or Implementor then you should also know Installation, Configuration, Integration, High Availability & Disaster Recovery setup. If you are a developer then you should be able to write authentication modules, policies, custom login pages, etc.

Where can I learn OAM : If you prefer self-learning then you can refer to Oracle’s documentation on OAM, attend an Oracle University course (costs 4200 USD) or attend our Online Live Training on OAM (costs 997 USD) – the next batch starts on 4th July (we provide a full money-back guarantee for 7 days).

What topics should I look for in OAM Training : To start with, you should learn at minimum:

  • Architecture of Oracle Access Manager (OAM)
  • Overview of WebLogic Server and Fusion Middleware
  • Overview of Oracle Identity & Access Management (OAM, OIM, OID, OUD, OAAM, OES..)
  • Installation & Configuration of OAM
  • Install & Configuration of OHS & WebGates
  • Migration of OSSO 10g to OAM 11g
  • Authentication & Authorization policies in OAM
  • Protecting resources using SSO
  • OAM Integration with LDAP Server (OID or OUD)
  • Deploying OAM in High Availability
  • Common Integration Scenarios for OAM
  • Overview of Oracle Identity

 

I am an Oracle Apps DBA/DBA, should I also learn OAM : Yes, you should learn Oracle Access Manager (OAM), as Apps DBAs with OAM experience earn 25-40% more. Single Sign-On is quite common these days, and with Oracle Fusion Applications (OAM is mandatory in Fusion Apps) it is important that you learn Oracle Access Manager (OAM).

I still have some more queries related to OAM : Contact our OAM experts for any query related to OAM training requirements or post a comment here .

 

Register for our Online Live Training on OAM (costs 997 USD) – the batch starts on 4th July; register early to avoid disappointment as seats are limited (our Oracle Fusion Middleware course was sold out long before the start date).

 

Related Posts for Oracle Access Manager
  1. How to learn Oracle Access Manager (OAM) 11gR2

The post How to learn Oracle Access Manager (OAM) 11gR2 appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) : My first two installations

Tim Hall - Wed, 2015-06-17 09:55

I’ve done a couple of play installations of EM12c 12.1.0.5, just to get a feel for it. You can see the result of that here.

From an installation perspective, everything was pretty similar to the previous releases. I tried the installation on both OL5 and OL6, in both cases using 12c as the database repository. No dramas there.

A couple of things of note.

  1. The 12c repository template database is a Non-CDB architecture.
  2. The Weblogic installation uses Java6.

Interesting…

The next step is to try some upgrades from EM 12.1.0.4 (on DB 11.2.0.4) to EM 12.1.0.5, which is what I’ll need for my upgrades at work. The testing is quite time consuming and boring, but it’s got to be done before I can unleash this on the company. :)

Cheers

Tim…

PS. Remember to download from edelivery.oracle.com (in a couple of days) for your production installations. Apparently there is a difference to the license agreement.


A natural born tree right in my own backyard!

FeuerThoughts - Wed, 2015-06-17 06:33
As some of my readers may have noticed, I spend a lot of time these days among trees, paying attention to trees, cutting back invasive trees to save native trees, etc.

And one thing that I came to realize is that at least in an area like Chicagoland, humans tightly control the reproduction of trees. 
I live on a lovely tree-lined street. Big trees - 100 ft tall or more. Maples, oaks, ash....but there are no baby trees, except for smallish trees that the city plants when they have to remove diseased trees (such as all the ash trees, under assault from ash borers).
It makes me sad to think of how impoverished my immediate surroundings are, how unnatural. We don't even let trees - majestic living things that make our lives possible, that live through many of our own generations - live out natural life cycles. 
In fact, I have come to accept that trees planted singly along streets to enhance our lives are really just ornaments. If "a man is not an island" then certainly a tree is not a forest. And very few trees live naturally outside of forests of many, many trees.
Well, enough of sadness. Veva and I were sitting on our patio last week, enjoying the (finally) warm weather and our lovely garden (thanks to Veva), when she pointed out something truly wonderful:

Can you see it? We planted the birch trees years ago. They are now 40 feet tall, but nestled in between? A natural born baby birch tree! Can't see it? Here maybe this will help:

I feel so much better now. The (minimal) wildness of our garden (as in: no grass) made it possible for a birch seed to take hold and grow. A tree that humans did not plant and hopefully will allow to grow to maturity.
Humans love to debate things like "Do plants feel?" Of course, it is terribly difficult for us to imagine such a thing - because the way that plants would think and feel would be so different from us. So we will likely never really be able to answer the question.
Which means it would make a lot more sense to err on the side of caution and assume that trees and plants and creatures do feel, do think in their own way, do take joy in life.
And watching this natural born tree grow, it is certainly easy to believe that it is joyful. I sure am.


Categories: Development

Reverse Key

Jonathan Lewis - Wed, 2015-06-17 06:11

A question came up on the OTN database forum recently asking if you could have a partitioned index on a non-partitioned table.

(Aside: I’m not sure whether it would be quicker to read the manuals or try the experiment – either would probably be quicker than posing the question to the forum. As so often happens with these RTFM questions, the OP didn’t bother to acknowledge any of the responses.)

The answer to the question is yes – you can create a globally partitioned index, though if it uses range partitioning you have to specify a MAXVALUE partition. The interesting thing about the question, though, is that several people tried to guess why it had been asked and then made suggestions based on the most likely guess (and wouldn’t it have been nice to see some response from the OP). The common guess was that there was a performance problem with the high-value block of a sequence-based (or time-based) index – a frequent source of “buffer busy wait” events and other nasty side effects.
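As a rough illustration of the first point, a globally range-partitioned index on a non-partitioned table might look like the following minimal sketch (the table, column, and partition names here are purely illustrative, not from the OTN thread; the range partition list has to end with a MAXVALUE partition):

create index tx_i1 on tx(id)
global partition by range (id) (
	partition p_low  values less than (5000000),
	partition p_high values less than (maxvalue)
)
;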

Unfortunately too many people suggested a reverse key index as a solution to this “right-hand” problem. If you’re licensed for partitioning it’s almost certain that a better option would simply be to use global hash partitioning (with 2^N partitions for some N); there’s a brief sketch of this at the end of this note. Using reverse keys can result in a bigger performance problem than the one you’re trying to avoid – you may end up turning a little time spent on buffer busy waits into a large amount of time spent on db file sequential reads. To demonstrate the issue I’ve created a sample script – and adjusted my buffer cache down to the appropriate scale:

create table t1(
	id	not null
)
nologging
as
with generator as (
	select	--+ materialize
		rownum id 
	from dual 
	connect by 
		rownum <= 1e4
)
select
	1e7 + rownum	id
from
	generator	v1,
	generator	v2
where
	rownum <= 1e7 
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1'
	);
end;
/

alter table t1 add constraint t1_pk primary key(id) 
using index 
	reverse 
	nologging 
;

alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 8';

begin
	for i in 20000001..20010000 loop
		insert into t1 values(i);
	end loop;
end;
/

I’ve created a table with 10,000,000 rows using a sequential value as the primary key, then inserted “the next” 10,000 rows into the table in order. The index occupied about 22,000 blocks, so to make my demonstration show the type of effect you could get from a busy production system (with more tables and many indexes) I ran my test with the buffer cache limited to 6,000 blocks – a fair fraction of the total index size. Here’s a small section of the trace file from the test running 10.2.0.3 on an elderly machine:


WAIT #43: nam='db file sequential read' ela= 13238 file#=6 block#=12653 blocks=1 obj#=63623 tim=3271125590
WAIT #43: nam='db file sequential read' ela=  7360 file#=6 block#=12749 blocks=1 obj#=63623 tim=3271133150
WAIT #43: nam='db file sequential read' ela=  5793 file#=6 block#=12844 blocks=1 obj#=63623 tim=3271139110
WAIT #43: nam='db file sequential read' ela=  5672 file#=6 block#=12940 blocks=1 obj#=63623 tim=3271145028
WAIT #43: nam='db file sequential read' ela= 15748 file#=5 block#=13037 blocks=1 obj#=63623 tim=3271160998
WAIT #43: nam='db file sequential read' ela=  8080 file#=5 block#=13133 blocks=1 obj#=63623 tim=3271169314
WAIT #43: nam='db file sequential read' ela=  8706 file#=5 block#=13228 blocks=1 obj#=63623 tim=3271178240
WAIT #43: nam='db file sequential read' ela=  7919 file#=5 block#=13325 blocks=1 obj#=63623 tim=3271186372
WAIT #43: nam='db file sequential read' ela= 15553 file#=6 block#=13549 blocks=1 obj#=63623 tim=3271202115
WAIT #43: nam='db file sequential read' ela=  7044 file#=6 block#=13644 blocks=1 obj#=63623 tim=3271209420
WAIT #43: nam='db file sequential read' ela=  6062 file#=6 block#=13741 blocks=1 obj#=63623 tim=3271215648
WAIT #43: nam='db file sequential read' ela=  6067 file#=6 block#=13837 blocks=1 obj#=63623 tim=3271221887
WAIT #43: nam='db file sequential read' ela= 11516 file#=5 block#=13932 blocks=1 obj#=63623 tim=3271234852
WAIT #43: nam='db file sequential read' ela=  9295 file#=5 block#=14028 blocks=1 obj#=63623 tim=3271244368
WAIT #43: nam='db file sequential read' ela=  9466 file#=5 block#=14125 blocks=1 obj#=63623 tim=3271254002
WAIT #43: nam='db file sequential read' ela=  7704 file#=5 block#=14221 blocks=1 obj#=63623 tim=3271261991
WAIT #43: nam='db file sequential read' ela= 16319 file#=6 block#=14444 blocks=1 obj#=63623 tim=3271278492
WAIT #43: nam='db file sequential read' ela=  7416 file#=6 block#=14541 blocks=1 obj#=63623 tim=3271286129
WAIT #43: nam='db file sequential read' ela=  5748 file#=6 block#=14637 blocks=1 obj#=63623 tim=3271292163
WAIT #43: nam='db file sequential read' ela=  7131 file#=6 block#=14732 blocks=1 obj#=63623 tim=3271299489
WAIT #43: nam='db file sequential read' ela= 16126 file#=5 block#=14829 blocks=1 obj#=63623 tim=3271315883
WAIT #43: nam='db file sequential read' ela=  7746 file#=5 block#=14925 blocks=1 obj#=63623 tim=3271323845
WAIT #43: nam='db file sequential read' ela=  9208 file#=5 block#=15020 blocks=1 obj#=63623 tim=3271333239
WAIT #43: nam='db file sequential read' ela=  7708 file#=5 block#=15116 blocks=1 obj#=63623 tim=3271341141
WAIT #43: nam='db file sequential read' ela= 15484 file#=6 block#=15341 blocks=1 obj#=63623 tim=3271356807
WAIT #43: nam='db file sequential read' ela=  5488 file#=6 block#=15437 blocks=1 obj#=63623 tim=3271362623
WAIT #43: nam='db file sequential read' ela= 10447 file#=6 block#=15532 blocks=1 obj#=63623 tim=3271373342
WAIT #43: nam='db file sequential read' ela= 12565 file#=6 block#=15629 blocks=1 obj#=63623 tim=3271386741
WAIT #43: nam='db file sequential read' ela= 17168 file#=5 block#=15725 blocks=1 obj#=63623 tim=3271404135
WAIT #43: nam='db file sequential read' ela=  7542 file#=5 block#=15820 blocks=1 obj#=63623 tim=3271411882
WAIT #43: nam='db file sequential read' ela=  9400 file#=5 block#=15917 blocks=1 obj#=63623 tim=3271421514
WAIT #43: nam='db file sequential read' ela=  7804 file#=5 block#=16013 blocks=1 obj#=63623 tim=3271429519
WAIT #43: nam='db file sequential read' ela= 14470 file#=6 block#=16237 blocks=1 obj#=63623 tim=3271444168
WAIT #43: nam='db file sequential read' ela=  5788 file#=6 block#=16333 blocks=1 obj#=63623 tim=3271450154
WAIT #43: nam='db file sequential read' ela=  9630 file#=6 block#=16429 blocks=1 obj#=63623 tim=3271460008
WAIT #43: nam='db file sequential read' ela= 10910 file#=6 block#=16525 blocks=1 obj#=63623 tim=3271471174
WAIT #43: nam='db file sequential read' ela= 15683 file#=5 block#=16620 blocks=1 obj#=63623 tim=3271487065
WAIT #43: nam='db file sequential read' ela=  8094 file#=5 block#=16717 blocks=1 obj#=63623 tim=3271495454
WAIT #43: nam='db file sequential read' ela=  6670 file#=5 block#=16813 blocks=1 obj#=63623 tim=3271502293
WAIT #43: nam='db file sequential read' ela=  7852 file#=5 block#=16908 blocks=1 obj#=63623 tim=3271510360
WAIT #43: nam='db file sequential read' ela= 10500 file#=6 block#=17133 blocks=1 obj#=63623 tim=3271521039
WAIT #43: nam='db file sequential read' ela= 11038 file#=6 block#=17229 blocks=1 obj#=63623 tim=3271532275
WAIT #43: nam='db file sequential read' ela= 12432 file#=6 block#=17325 blocks=1 obj#=63623 tim=3271544974
WAIT #43: nam='db file sequential read' ela=  7784 file#=6 block#=17421 blocks=1 obj#=63623 tim=3271553331
WAIT #43: nam='db file sequential read' ela=  7774 file#=5 block#=17517 blocks=1 obj#=63623 tim=3271561346
WAIT #43: nam='db file sequential read' ela=  6583 file#=5 block#=17613 blocks=1 obj#=63623 tim=3271568146
WAIT #43: nam='db file sequential read' ela=  7901 file#=5 block#=17708 blocks=1 obj#=63623 tim=3271576231
WAIT #43: nam='db file sequential read' ela=  6667 file#=5 block#=17805 blocks=1 obj#=63623 tim=3271583259
WAIT #43: nam='db file sequential read' ela=  9427 file#=6 block#=18029 blocks=1 obj#=63623 tim=3271592988
WAIT #43: nam='db file sequential read' ela= 52334 file#=6 block#=18125 blocks=1 obj#=63623 tim=3271646055
WAIT #43: nam='db file sequential read' ela= 50512 file#=6 block#=18221 blocks=1 obj#=63623 tim=3271697284
WAIT #43: nam='db file sequential read' ela= 10095 file#=6 block#=18317 blocks=1 obj#=63623 tim=3271708095

Check the block numbers for this list of single block reads – we’re jumping through the index about 100 blocks at a time to read the next block where an index entry has to go. The jumps are the expected (and designed) effect of reverse key indexes: the fact that the jumps turn into physical disc reads is the (possibly unexpected) side effect. Reversing an index makes adjacent values look very different (by reversing the bytes) so that they go to different index leaf blocks: the purpose of the exercise is to scatter concurrent similar inserts across multiple blocks, but if you scatter the index entries you need to buffer a lot more of the index to keep the most recently used values in memory. Reversing the index may eliminate buffer busy waits, but it may dramatically increase the time lost to db file sequential reads.

Here’s a short list of interesting statistics from this test (this time running on 11.2.0.4 on a machine with SSDs) comparing the effects of reversing the index with those of not reversing the index – normal index first:


Normal index
------------
CPU used by this session               83
DB time                                97
db block gets                      40,732
physical reads                         51
db block changes                   40,657
redo entries                       20,174
redo size                       5,091,436
undo change vector size         1,649,648

Repeat with reverse key index
-----------------------------
CPU used by this session              115
DB time                               121
db block gets                      40,504
physical reads                     10,006
db block changes                   40,295
redo entries                       19,973
redo size                       4,974,820
undo change vector size         1,639,232

Because of the SSDs there’s little difference in timing between the two sets of data and, in fact, all the other measures of work done are very similar except for the physical reads; the increase in reads is probably the cause of the extra CPU time, thanks to both the LRU manipulation and the interaction with the operating system.

If you want to check the effect of index reversal you can take advantage of the sys_op_lbid() function to sample a little of your data – in my case I’ve queried the last 10,000 rows (values) in the table:


select 
	/*+ 
		cursor_sharing_exact 
		dynamic_sampling(0) 
		no_monitoring 
		no_expand 
		index_ffs(t1,t1_i1) 
		noparallel_index(t,t1_i1) 
	*/ 
	count (distinct sys_op_lbid( &m_ind_id ,'L',t1.rowid)) as leaf_blocks
from 
	t1
where 
	id between 2e7 + 1 and 2e7 + 1e4
;

The &m_ind_id substitution variable is the object_id of the index t1_i1.

In my case, with an index of 22,300 leaf blocks, my 10,000 consecutive values were scattered over 9,923 leaf blocks. If I want access to “recent data” to be as efficient as possible I need to keep that many blocks of the index cached, compared to the (absolute) worst case for my data of just 100 leaf blocks. When you reverse key an index you have to think about how much bigger you have to make your buffer cache to keep the performance constant.
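For completeness, here is a rough sketch of the global hash partitioning alternative mentioned at the start of this note. It would replace the reverse-key primary key definition in the test script above; the partition count of 4 is illustrative, and in practice you would pick a power of two suited to your level of concurrency:

create unique index t1_pk on t1(id)
global partition by hash (id)
partitions 4
nologging
;

alter table t1 add constraint t1_pk primary key(id) using index t1_pk;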


HOWTO: Check if an XMLType View or Table is Hierarchy Enabled

Marco Gralike - Wed, 2015-06-17 03:53
The following simple code snippet demonstrates how you can check if an XMLType view or…

Introducing Formspider 1.9, the Web Application Development Framework for PL/SQL Developers.

Gerger Consulting - Wed, 2015-06-17 02:50
The new version of Formspider is coming out this summer. Join our webinar to find out about its new features and how your organization can benefit from them.

The following topics will be covered during the webinar: 
- New features in Formspider version 1.9 
- Formspider architecture & benefits 
- Introduction to development with Formspider 

You can sign up for the webinar at this link.
Categories: Development