Feed aggregator

Oracle PaaS4SaaS UX Enablement with Certus Solutions: Valid Business Proposition

Usable Apps - Sat, 2015-02-21 15:59

Oracle’s Platform as a Service (PaaS) offering is a huge opportunity for Oracle partners. Using PaaS4SaaS to extend the Oracle Applications Cloud and build simplified UI solutions is a powerful differentiator. Add in user experience (UX), and it's a competitive must-have for business. The Oracle Applications User Experience (OAUX) communications and outreach team has been on the road enabling partners to make this competitive potential real.

Our first PaaS4SaaS enablement in 2015 was a three-day, hands-on design and development event with Certus Solutions, held at the Oracle London City office in the UK.

An awesome range of UX and technology skills from OAUX and partners was brought to bear on realizing a Cloud solution, with attendees self-organizing and working seamlessly together in small agile teams.

All the stakeholders. Certus Solutions, eProseed and OAUX developers and designers collaborate. (L-R) Caroline Moloney (Certus Solutions), Lancy Silveira (OAUX), Mascha van Oosterhout (eProseed), and Julian Orr (OAUX). In the background are Debra Lilley (Certus Solutions) and Amit Kumar Bhowmick (OAUX).

Certus Solutions has partnered with eProseed to accelerate its PaaS offerings for extending the Oracle HCM Cloud and Oracle ERP Cloud, and participants from both companies were at the event*. Backed by OAUX design and development chops, this powerhouse of a team wireframed a great business solution for the Oracle Applications Cloud, built it using the simplified UI RDK, and deployed the result using the Oracle Java Cloud Service SaaS-Extension (JCS-SX) PaaS offering.

Keepin' it simple. Julian Orr (OAUX) explains the essence of the simplified UI design. 

The event was a learning experience for all: OAUX got to walk in partner shoes. Certus Solutions and eProseed found out how to identify PaaS4SaaS business opportunities. All experienced the technical side of Oracle ADF-based Oracle Applications Cloud development and PaaS deployment. Such an exchange of empathy meant everyone discovered how to work together.

Demonstrating the straightforward nature of JCS-SX deployment, Debra Lilley (@debralilley), Vice President of Certus Solutions Cloud Services and OAUX designated speaker, was in the thick of the development action, deploying a prototype to the cloud like a pro while declaring “I'm not technical®” (more details about this will be revealed at a future Oracle event)!

Learning while doing. Agile, activity based work, side-by-side. (L-R) Bruno Neves Alves (eProseed), Amit Kumar Bhowmick (OAUX), Debra Lilley (Certus Solutions), Lancy Silveira (OAUX), and Lonneke Dikmans (eProseed).

The inimitable Debra closed the event, saying how it had moved Certus Solutions’ Cloud business to a new high on the capability scale and provided further vindication of Certus Solutions' business direction, including its strategic partnering with eProseed. You can read more from Debra about the event in her article "Partner Column: Extending Your SaaS Applications with PaaS" on the Oracle Fusion Middleware community blog.

Reflecting on the event, OAUX felt that the PaaS4SaaS partner enablement strategy based on the simplified UI RDK and Oracle Cloud technology skills is hitting the right mark in the Oracle partner ecosystem. So, it was validation all round.

If you are a Gold or Platinum Oracle Applications Cloud partner that wants in on our ongoing PaaS and SaaS journey, seeking to validate that decision to take your business to the Cloud and to demonstrate confidence to customers, then reach out to us through the usual channels. 

* You can read about the OAUX eProseed PaaS and Oracle Alta UI enablement event in the Netherlands here

How to set NLS for SQL Developer

Yann Neuhaus - Sat, 2015-02-21 14:24

I've been using Oracle SQL Developer 4.1 Early Adopter for a while and I like it. That version comes with a command line (in beta) whose goal is to be fully compatible with sqlplus, but running in Java and offering a lot more features.

Because it's connecting with the thin Java driver by default, it doesn't use NLS_LANG. It's Java; it's Unicode. So here is how to set the language and character set with the Java options.
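For example, a minimal sketch (user.language, user.country and file.encoding are standard Java system properties; AddVMOption is the directive used in sqldeveloper.conf, and the exact values depend on the language and territory you want):

# in sqldeveloper/bin/sqldeveloper.conf -- a sketch, adjust the values to taste
AddVMOption -Duser.language=en
AddVMOption -Duser.country=US
AddVMOption -Dfile.encoding=UTF-8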

PeopleTools 8.54: %SQLHint Meta-SQL

David Kurtz - Sat, 2015-02-21 04:41
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
 
This new PeopleCode meta-SQL macro searches a SQL statement for the nth instance of a SQL command keyword and inserts a string after it.
%SqlHint(SQL_cmd, index, hint_text, DB_platform [, {ENABLE | DISABLE}])
It is particularly effective with the %InsertSelect meta-SQL.  Previously, the only way to put a hint into the main select was with a variable assignment, but that didn't work if the DISTINCT keyword was used, because the hint then appeared after the DISTINCT.
%InsertSelect(DISTINCT, DMK,JOB J, EMPLID= /*+LEADING(J)*/ J.EMPLID) 
FROM PS_JOB J
which resolves to:
INSERT INTO PS_DMK (EMPLID 
, EMPL_RCD
, EFFDT
, EFFSEQ
, SETID_DEPT
, DEPTID)
SELECT DISTINCT /*+LEADING(J)*/ J.EMPLID
, J.EMPL_RCD
, J.EFFDT
, J.EFFSEQ
, J.SETID_DEPT
, J.DEPTID
FROM PS_JOB J
Here is a deliberately contrived example of how to use the command.
  • I have created a separate SQL object, DMK_CURJOB, to hold effective date/sequence sub-queries which I will reference from an application engine SQL.
%P(2).EFFDT = ( 
SELECT MAX(%P(3).EFFDT)
FROM %Table(%P(1)) %P(3)
WHERE %P(3).EMPLID = %P(2).EMPLID
AND %P(3).EMPL_RCD = %P(2).EMPL_RCD
AND %P(3).EFFDT <= %CurrentDateIn)
AND %P(2).EFFSEQ = (
SELECT MAX(%P(4).EFFSEQ)
FROM %Table(%P(1)) %P(4)
WHERE %P(4).EMPLID = %P(2).EMPLID
AND %P(4).EMPL_RCD = %P(2).EMPL_RCD
AND %P(4).EFFDT = %P(2).EFFDT)
  • I want my insert statement to run in direct-path mode, so I am putting an APPEND hint into the INSERT statement.
  • I am going to put different hints into each of the different SQL query blocks, including the sub-queries in the SQL object.
%SqlHint(INSERT,1,'/*+APPEND*/',ORACLE,ENABLE)
%SqlHint(INSERT,1,'/*Developer Comment*/',ORACLE,DISABLE)
%SqlHint(SELECT,1,'/*+LEADING(J)*/',ORACLE)
%SqlHint(SELECT,2,'/*+UNNEST(J1)*/',ORACLE)
%SqlHint(SELECT,3,'/*+UNNEST(J2)*/',ORACLE)
%InsertSelect(DISTINCT, DMK,JOB J)
FROM PS_JOB J
WHERE %Sql(DMK_CURJOB,JOB,J,J1,J2)

Which resolves to:
 INSERT /*+APPEND*/ INTO PS_DMK (EMPLID 
, EMPL_RCD
, EFFDT
, EFFSEQ
, SETID_DEPT
, DEPTID)
SELECT /*+LEADING(J)*/ DISTINCT J.EMPLID
, J.EMPL_RCD
, J.EFFDT
, J.EFFSEQ
, J.SETID_DEPT
, J.DEPTID
FROM PS_JOB J
WHERE J.EFFDT = (
SELECT /*+UNNEST(J1)*/ MAX(J1.EFFDT)
FROM PS_JOB J1
WHERE J1.EMPLID = J.EMPLID
AND J1.EMPL_RCD = J.EMPL_RCD
AND J1.EFFDT <= TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD'))
AND J.EFFSEQ = (
SELECT /*+UNNEST(J2)*/ MAX(J2.EFFSEQ)
FROM PS_JOB J2
WHERE J2.EMPLID = J.EMPLID
AND J2.EMPL_RCD = J.EMPL_RCD
AND J2.EFFDT = J.EFFDT)

The %SQLHint processing appears to be done after all other expansions, so the search and insert can reach into %SQL objects.  Previously we had to put hints into the SQL object itself, although sometimes we could avoid that by using query block naming hints.  Now I can place any hint after any SQL command, and I can choose to apply a hint in just the one step that references a SQL object, rather than in the SQL object, which would affect every step that references it.

If you put multiple substitutions in for the same SQL command, only the last enabled one is processed.
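For example (a hypothetical sketch following the behaviour just described, not a tested case), if two enabled substitutions target the same SELECT:

%SqlHint(SELECT,1,'/*+FIRST_ROWS*/',ORACLE,ENABLE)
%SqlHint(SELECT,1,'/*+ALL_ROWS*/',ORACLE,ENABLE)

only the second would be processed, so only /*+ALL_ROWS*/ would be inserted after the first SELECT.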

I frequently find that developers love to put comments into SQL, which then appear in log files and Oracle monitoring tools.  I hate that.  Comments in SQL that runs on the database are an unnecessary overhead, and they turn up later in SQL monitoring and tuning tools.  It is worth noting that comments that are not hints are stripped out of SQL in PL/SQL procedures.  Perhaps developers should put their comments in a disabled %SQLHint so they do not appear in the final SQL?
Oracle SQL Outlines/Profiles/Patches/Baselines

All this talk of adding hints to source code is going to cause an Oracle DBA to ask why not use the techniques provided by Oracle to control execution plans on application engine SQL.  The problem is that those techniques are frequently thwarted by the dynamic nature of the SQL created by PeopleSoft.
  • Bind variables can become literals when the SQL is generated, though profiles and baselines can handle this.
  • Different instances of temporary records are different tables in the database. You would have to handle each table (or every combination of tables if you have multiple temporary records in one statement).
  • Many delivered processes have dynamically generated SQL and you would again have to handle every variation separately.
Conclusion

The %SQLHint meta-SQL brings a huge advantage for Oracle's PeopleSoft developers.  Although it is possible to create platform-specific application engine sections, there is huge reluctance within Oracle development to do this.  This is entirely reasonable, as it results in having to develop, test and maintain separate code lines.  Many of the meta-SQL macros are designed precisely to overcome SQL differences between the supported database platforms.  Now, using %SQLHint, developers can put Oracle-specific optimizer hints into platform-generic application engine steps, safe in the knowledge that the hints will only affect Oracle platforms.

This is going to be a very useful feature.  Simple.  Effective.  I look forward to hinting the places that other techniques cannot reach!



©David Kurtz, Go-Faster Consultancy Ltd.

IBM Bluemix - Deploy your app the way you want

Pas Apicella - Sat, 2015-02-21 03:18
We recently upgraded our capability in the IBM public cloud. Before this new version you could build, test, and deploy apps on Bluemix by using open source Cloud Foundry technology. Now you have more choice, with two new options:

1. IBM Containers Beta

Use IBM Containers to run apps and services in a hosted cloud environment. Port your existing applications to IBM Bluemix and make them publicly accessible and composable. Use a private registry to upload, store, and retrieve your trusted images.

2. Virtual Machines Beta

Get total control over your app's infrastructure by deploying virtual machines. Manage them from the Bluemix dashboard, or from anywhere with OpenStack APIs.
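The original Cloud Foundry route still works, of course. As a minimal sketch with the cf command-line tool (assuming you already have a Bluemix account; "myapp" is just a placeholder application name):

# log in to the Bluemix US-South Cloud Foundry endpoint and push an app
cf api https://api.ng.bluemix.net
cf login
cf push myapp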


Try it out as follows

https://console.ng.bluemix.net/

NVARCHAR2, UTF-16 and Emails

Gary Myers - Fri, 2015-02-20 20:00
Development is often the case of trying several paths through the forest, hoping to find one that leads out the other end. That was the start of my week.

Until we get our shiny new 12c database running on its shiny new box (and all the data shifted to it), we are living with a mix of databases. To begin with, the data we managed was mostly AU/NZ and European stuff, and the character set is set accordingly. By which I mean one of those Eurocentric things and not UTF-8. We also have a bunch of columns in NVARCHAR2 with AL16UTF16 as the alternative character set.

I'm pretty sure the new database will start with UTF-8. But in the meantime I was responsible for trying to get emails out of the current database with data in various European and non-European character sets.  My paths through that forest went as follows...

  • It should just work. Let me test it.....Oh bugger.
  • Okay, maybe if I put "utf-8" in various bits of the message.
  • And switch the code so it uses NVARCHAR2 rather than defaulting to VARCHAR2.
  • Oh....UTF-16 isn't the same as UTF-8. I need to convert it somehow
  • So I can't put UTF-8 values in either my Eurocentric VARCHAR2 or UTF-16 NVARCHAR2.
  • And I have to get this through SMTP, where you can still see the exposed bones of 7-bit ASCII, 


AHA! HTML entities. That means I can get away with using ASCIISTR to convert the UTF-16 strings into a sequence of hex values for each two-byte character. Then I stick a &#x in front of each character's hex code (and a semicolon after it), and I have an HTML representation of the string!
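Here is a sketch of that idea in SQL (untested in this post; the table T and NVARCHAR2 column N_TEXT are hypothetical names for illustration). ASCIISTR renders each non-ASCII character as a \XXXX escape, which a regular expression can then rewrite as an HTML numeric entity:

SELECT REGEXP_REPLACE(
         ASCIISTR(n_text),      -- non-ASCII characters come back as \XXXX escapes
         '\\([0-9A-F]{4})',     -- match each four-hex-digit escape
         '&#x\1;'               -- rewrite it as an HTML numeric entity
       ) AS html_text
FROM   t;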

It stinks of an ugly solution.
I think there should be a way of sending utf-16 in the content, but I couldn't get to it.

It doesn't help that email HTML is less capable than browser HTML, and has to support a variety of older clients (plus presenting an HTML email body inside of the HTML of a webmail client is always going to be awkward).

Automatic: Nice, but Not Necessary

Oracle AppsLab - Fri, 2015-02-20 14:35

Editor’s note: Here’s the first post from one of our newish team members, Ben. Ben is a usability engineer with a PhD in Cognitive Psychology, and by his own account, he’s also a below average driver. Those two factoids are not necessarily related; I just don’t know what his likes and dislikes are so I’m spit-balling.

Ben applied his research chops to himself and his driving using Automatic (@automatic), a doodad that measures your driving and claims to make you a better driver. So, right up his alley.

Aside from the pure research, I’m interested in this doodad as yet another data collector for the quantified self. As we generate mounds of data through sensors, we should be able to generate personal annual reports, a la Nicholas Felton, that have recommended actions and tangible benefits.

Better living through math.

Anyway, enjoy Ben’s review.

When I first heard about Automatic (@automatic), I was quite excited—some cool new technology that will help me become a better driver. The truth is, I’m actually not a big fan of driving, partly because I know I’m not as good a driver as I could be, so Automatic was a glimmer of hope that would lead me on the way to improving my skills.

Though I will eagerly adopt automated cars once they’re out and safe, the next best thing is to get better so I no longer mildly dread driving, especially when I’m conveying others. And one issue with trying to improve is knowing what and when you’re doing something wrong, so with that in mind (and for enterprise research purposes), I tried out Automatic.

Automatic is an app for your phone plus a gadget (called the Link) that plugs into your car’s diagnostics port, which together gives you feedback on your driving and provides various ways to look at your trip data.

Automatic Link

The diagnostics port the Link plugs into is the same one that your mechanic uses to see what might be wrong when your check engine light is ominously glaring on your dashboard. Most cars after 1996 have these, but not all data is available for all cars. Mine is a 2004 Honda Civic, which doesn’t put out gas tank level data, meaning that MPG calculations may not be as accurate as they could be. But it still calculates MPG, and it seems to be reasonably accurate. I don’t, however, get the benefit of “time to fuel up” notifications, though I do wonder how much of a difference those notifications make.

The Link has its own accelerometer, so that combined with the data from the port and paired with your phone via Bluetooth, it can tell you about your acceleration, distance driven, your speed, and your location. It can also tell you what your “Check Engine” light means, and send out some messages in the result of a crash.

It gives three points of driving feedback: if you accelerate too quickly, brake too hard, or go over 70 mph. Each driving sin is relayed to you with its own characteristic tones emitted from the Link. It’s a delightful PC speaker, taking you way back to the halcyon DOS days (for those of you who were actually alive at the time). It also lets you know when it links up with your phone, and when it doesn’t successfully connect it outputs a sound much like you just did something regrettable in a mid-’80s Nintendo game.

App screenshot

One of the main motivators for the driving feedback is to save gas—though you can change the top speed alert if you’d like. From their calculations, Automatic says 70 mph is about as fast as you want to go, given the gas-spent/time-it-will-take-to-get-there tradeoff.

Automatic web dashboard

Another cool feature is that it integrates with IFTTT (@ifttt), so you can set it up to do things like: when you get home, turn the lights on (if you have smart lights); or when you leave work, send a text to your spouse; or any other number of things—useful or not!

Is It Worth It?

The big question is, is it worth $99? It’s got a great interface, a sleek little device, and a good number of features, but for me, it hasn’t been that valuable (yet). For those with the check engine light coming up, it could conceivably save a lot of money if you can prevent unnecessary service on your car. Fortunately, my Civic has never shown me the light (knock on wood), though I’ll probably be glad I have something like Automatic when it does.

I had high hopes for the driver feedback, until I saw that it’s actually pretty limited. For the most part, the quick acceleration and braking are things I already avoided, and when it told me I did them, I usually had already realized it. (Or it was a situation out of my control that called for it.) A few times it beeped at me for accelerating where it didn’t feel all that fast, but perhaps it was.

I was hoping the feedback would be more nuanced and could allow me to improve further. The alerts would be great for new drivers, but don’t offer a whole lot of value to more experienced drivers—even those of us who would consider ourselves below average in driving skill (putting me in an elite group of 7% of Americans).

The Enterprise Angle

Whether it’s Automatic, or what looks like it might be a more promising platform, Mojio (@getmojio), there are a few potentially compelling business reasons to check out car data-port devices.

One of the more obvious ones is to track mileage for work purposes—it gives you nice readouts of all your trips, and allows you to easily keep records. But that’s just making it a little easier for an employee to do their expense reports.

The most intriguing possibility (for me) is for businesses that manage fleets of regularly driven vehicles. An Automatic-like device could conceivably track the efficiency of cars/trucks and drivers, and let a business know if a driver needs better training, or if a vehicle is underperforming or might have some other issues. This could be done through real-time fuel efficiency, or tracking driving behavior, like what Automatic already does: hard braking and rapid acceleration.
If a truck seems to be getting significantly less mpg than it should, they can see if it needs maintenance or if the driver is driving too aggressively. Though trucks probably get regular maintenance, this kind of data may allow for preventive care that could translate to savings.

This kind of tracking could also be interesting for driver training, examining the most efficient or effective drivers and adopting an “Identify, Codify, Modify” approach.

Overall

I’d say this technology has some interesting possibilities, but may not be all that useful yet for most people. It’s fun to have a bunch of data, and to get some gentle reminders on driving practices, but the driver improvement angle from Automatic hasn’t left me feeling like I’m a better driver. It really seems that this kind of technology (though not necessarily Automatic, per se) lends itself more to fleet management, improving things at a larger scale.

Stay tuned for a review of Mojio, which is similar to Automatic, but features a cellular connection and a development platform, and hence more possibilities.

Oracle Manual Standby – Applying Log

Michael Dinh - Fri, 2015-02-20 13:49

If you are running Oracle EE, there are many books on Data Guard. However, for Oracle SE and manual standby, have fun searching.

I wanted to get a better understanding of registering logfiles.

Depending on how the standby environment is monitored, not registering logfiles may yield incorrect results when checking standby lag.

When a logfile is not registered, v$archived_log is not updated.
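In this demo the logs are registered by cataloging them with RMAN (shown below). For reference, the SQL*Plus alternative would look something like this (a sketch only, reusing this demo's archivelog naming):

ALTER DATABASE REGISTER LOGFILE '/oradata/archivelog/hawk/hawk_45_1_872097259.arc';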

STANDBY: check status

[oracle@armor:hawk:/home/oracle]
$ sqlplus / as sysdba @stby.sql

SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 20 10:12:58 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production

ARMOR:(SYS@hawk):PHYSICAL STANDBY> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
  2  from
  3  (
  4    select thread# thrd, max(sequence#) almax from v$archived_log
  5    where resetlogs_change#=(select resetlogs_change# from v$database) and applied='YES'
  6    group by thread#
  7  ) al,
  8  (
  9    select thread# thrd, max(sequence#) lhmax
 10    from v$log_history
 11    where first_time=(select max(first_time) from v$log_history)
 12    group by thread#
 13  ) lh
 14  where al.thrd = lh.thrd
 15  ;

          Thread Last Seq Received Last Seq Applied
---------------- ----------------- ----------------
               1                44               44

ARMOR:(SYS@hawk):PHYSICAL STANDBY> select first_time,applied,archived,registrar,creator,thread#,sequence# from v$archived_log
  2  where resetlogs_change#=(select resetlogs_change# from v$database)
  3  and first_time>sysdate-3/24
  4  order by sequence# desc
  5  ;

NOTE: SRMN - RMAN at standby

FIRST_TIME          APPLIED   ARC REGISTR CREATOR          THREAD#        SEQUENCE#
------------------- --------- --- ------- ------- ---------------- ----------------
2015-02-20 10:09:09 YES       YES SRMN    SRMN                   1               44
2015-02-20 10:03:54 NO        YES SRMN    SRMN                   1               43
2015-02-20 09:58:33 YES       YES SRMN    SRMN                   1               42
2015-02-20 09:52:53 YES       YES SRMN    SRMN                   1               41
2015-02-20 09:47:17 YES       YES SRMN    SRMN                   1               40
2015-02-20 09:45:50 YES       YES SRMN    SRMN                   1               39
2015-02-20 09:41:36 YES       YES SRMN    SRMN                   1               38
2015-02-20 09:21:34 YES       YES SRMN    SRMN                   1               37
2015-02-20 09:01:33 YES       YES SRMN    SRMN                   1               36
2015-02-20 08:41:35 YES       YES SRMN    SRMN                   1               35
2015-02-20 08:21:34 YES       YES SRMN    SRMN                   1               34
2015-02-20 08:01:33 YES       YES SRMN    SRMN                   1               33
2015-02-20 07:41:30 YES       YES SRMN    SRMN                   1               32
2015-02-20 07:41:29 YES       YES SRMN    SRMN                   1               31
2015-02-20 07:41:29 YES       YES SRMN    SRMN                   1               30
2015-02-20 07:41:29 YES       YES SRMN    SRMN                   1               29
2015-02-20 07:41:26 YES       YES SRMN    SRMN                   1               28
2015-02-20 07:41:23 YES       YES SRMN    SRMN                   1               27
2015-02-20 07:24:52 YES       YES SRMN    SRMN                   1               26

19 rows selected.

ARMOR:(SYS@hawk):PHYSICAL STANDBY>

PRIMARY: create tablespace new1

ANGEL:(SYS@hawk):PRIMARY> create tablespace new1;

Tablespace created.

ANGEL:(SYS@hawk):PRIMARY> alter system archive log current;

System altered.

ANGEL:(SYS@hawk):PRIMARY> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oradata/archivelog/hawk
Oldest online log sequence     44
Next log sequence to archive   46
Current log sequence           46
ANGEL:(SYS@hawk):PRIMARY>

PRIMARY: transfer archive log to STANDBY

[oracle@angel:hawk:/oradata/archivelog/hawk]
$ rsync -avh /oradata/archivelog/hawk/ armor:/oradata/archivelog/hawk/
sending incremental file list
./
hawk_45_1_872097259.arc

sent 170.68K bytes  received 34 bytes  341.42K bytes/sec
total size is 78.17M  speedup is 457.90
[oracle@angel:hawk:/oradata/archivelog/hawk]
$

STANDBY: apply log

[oracle@armor:hawk:/home/oracle]
$ sqlplus / as sysdba @recoverauto.sql

SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 20 10:14:43 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production

ARMOR:(SYS@hawk):PHYSICAL STANDBY> recover automatic standby database;
ORA-00279: change 216448 generated at 02/20/2015 10:14:03 needed for thread 1
ORA-00289: suggestion : /oradata/archivelog/hawk/hawk_46_1_872097259.arc
ORA-00280: change 216448 for thread 1 is in sequence #46
ORA-00278: log file '/oradata/archivelog/hawk/hawk_46_1_872097259.arc' no longer needed for this recovery
ORA-00308: cannot open archived log '/oradata/archivelog/hawk/hawk_46_1_872097259.arc'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3


Specify log: {=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.
ARMOR:(SYS@hawk):PHYSICAL STANDBY> @@stby.sql
ARMOR:(SYS@hawk):PHYSICAL STANDBY> set echo on numw 16 lines 200 pages 1000
ARMOR:(SYS@hawk):PHYSICAL STANDBY> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
  2  from
  3  (
  4    select thread# thrd, max(sequence#) almax from v$archived_log
  5    where resetlogs_change#=(select resetlogs_change# from v$database) and applied='YES'
  6    group by thread#
  7  ) al,
  8  (
  9    select thread# thrd, max(sequence#) lhmax
 10    from v$log_history
 11    where first_time=(select max(first_time) from v$log_history)
 12    group by thread#
 13  ) lh
 14  where al.thrd = lh.thrd
 15  ;

          Thread Last Seq Received Last Seq Applied
---------------- ----------------- ----------------
               1                44               45

NOTE: sequence 45 is not registered with v$archived_log but has been applied

ARMOR:(SYS@hawk):PHYSICAL STANDBY> select first_time,applied,archived,registrar,creator,thread#,sequence# from v$archived_log
  2  where resetlogs_change#=(select resetlogs_change# from v$database)
  3  and first_time>sysdate-3/24
  4  order by sequence# desc
  5  ;

FIRST_TIME          APPLIED   ARC REGISTR CREATOR          THREAD#        SEQUENCE#
------------------- --------- --- ------- ------- ---------------- ----------------
2015-02-20 10:09:09 YES       YES SRMN    SRMN                   1               44
2015-02-20 10:03:54 NO        YES SRMN    SRMN                   1               43
2015-02-20 09:58:33 YES       YES SRMN    SRMN                   1               42
2015-02-20 09:52:53 YES       YES SRMN    SRMN                   1               41
2015-02-20 09:47:17 YES       YES SRMN    SRMN                   1               40
2015-02-20 09:45:50 YES       YES SRMN    SRMN                   1               39
2015-02-20 09:41:36 YES       YES SRMN    SRMN                   1               38
2015-02-20 09:21:34 YES       YES SRMN    SRMN                   1               37
2015-02-20 09:01:33 YES       YES SRMN    SRMN                   1               36
2015-02-20 08:41:35 YES       YES SRMN    SRMN                   1               35
2015-02-20 08:21:34 YES       YES SRMN    SRMN                   1               34
2015-02-20 08:01:33 YES       YES SRMN    SRMN                   1               33
2015-02-20 07:41:30 YES       YES SRMN    SRMN                   1               32
2015-02-20 07:41:29 YES       YES SRMN    SRMN                   1               31
2015-02-20 07:41:29 YES       YES SRMN    SRMN                   1               30
2015-02-20 07:41:29 YES       YES SRMN    SRMN                   1               29
2015-02-20 07:41:26 YES       YES SRMN    SRMN                   1               28
2015-02-20 07:41:23 YES       YES SRMN    SRMN                   1               27
2015-02-20 07:24:52 YES       YES SRMN    SRMN                   1               26

19 rows selected.

ARMOR:(SYS@hawk):PHYSICAL STANDBY> select name from v$tablespace;

NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
NEW1

6 rows selected.

ARMOR:(SYS@hawk):PHYSICAL STANDBY>

PRIMARY: create tablespace new2

ANGEL:(SYS@hawk):PRIMARY> create tablespace new2;

Tablespace created.

ANGEL:(SYS@hawk):PRIMARY> alter system archive log current;

System altered.

ANGEL:(SYS@hawk):PRIMARY> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oradata/archivelog/hawk
Oldest online log sequence     45
Next log sequence to archive   47
Current log sequence           47
ANGEL:(SYS@hawk):PRIMARY>

PRIMARY: transfer archive log to STANDBY

[oracle@angel:hawk:/oradata/archivelog/hawk]
$ rsync -avh /oradata/archivelog/hawk/ armor:/oradata/archivelog/hawk/
sending incremental file list
./
hawk_46_1_872097259.arc

sent 144.59K bytes  received 34 bytes  289.24K bytes/sec
total size is 78.31M  speedup is 541.50
[oracle@angel:hawk:/oradata/archivelog/hawk]
$

STANDBY: register archive log using RMAN

[oracle@armor:hawk:/home/oracle]
$ rman @catalog_arc.rman

Recovery Manager: Release 11.2.0.4.0 - Production on Fri Feb 20 10:16:20 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> set echo on
2> connect target;
3> catalog start with '/oradata/archivelog/hawk/' noprompt;
4> exit
echo set on

connected to target database: HAWK (DBID=3130795691, not open)

using target database control file instead of recovery catalog
searching for all files that match the pattern /oradata/archivelog/hawk/

List of Files Unknown to the Database
=====================================
File Name: /oradata/archivelog/hawk/hawk_46_1_872097259.arc
File Name: /oradata/archivelog/hawk/hawk_45_1_872097259.arc
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /oradata/archivelog/hawk/hawk_46_1_872097259.arc
File Name: /oradata/archivelog/hawk/hawk_45_1_872097259.arc

Recovery Manager complete.

STANDBY: apply log

[oracle@armor:hawk:/home/oracle]
$ sqlplus / as sysdba @recoverauto.sql

SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 20 10:16:34 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production

ARMOR:(SYS@hawk):PHYSICAL STANDBY> recover automatic standby database;
ORA-00279: change 216754 generated at 02/20/2015 10:15:39 needed for thread 1
ORA-00289: suggestion : /oradata/archivelog/hawk/hawk_47_1_872097259.arc
ORA-00280: change 216754 for thread 1 is in sequence #47
ORA-00278: log file '/oradata/archivelog/hawk/hawk_47_1_872097259.arc' no longer needed for this recovery
ORA-00308: cannot open archived log '/oradata/archivelog/hawk/hawk_47_1_872097259.arc'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3


Specify log: {=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.
ARMOR:(SYS@hawk):PHYSICAL STANDBY> @@stby.sql
ARMOR:(SYS@hawk):PHYSICAL STANDBY> set echo on numw 16 lines 200 pages 1000
ARMOR:(SYS@hawk):PHYSICAL STANDBY> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
  2  from
  3  (
  4    select thread# thrd, max(sequence#) almax from v$archived_log
  5    where resetlogs_change#=(select resetlogs_change# from v$database) and applied='YES'
  6    group by thread#
  7  ) al,
  8  (
  9    select thread# thrd, max(sequence#) lhmax
 10    from v$log_history
 11    where first_time=(select max(first_time) from v$log_history)
 12    group by thread#
 13  ) lh
 14  where al.thrd = lh.thrd
 15  ;

          Thread Last Seq Received Last Seq Applied
---------------- ----------------- ----------------
               1                46               46

ARMOR:(SYS@hawk):PHYSICAL STANDBY> select first_time,applied,archived,registrar,creator,thread#,sequence# from v$archived_log
  2  where resetlogs_change#=(select resetlogs_change# from v$database)
  3  and first_time>sysdate-3/24
  4  order by sequence# desc
  5  ;

FIRST_TIME          APPLIED   ARC REGISTR CREATOR          THREAD#        SEQUENCE#
------------------- --------- --- ------- ------- ---------------- ----------------
2015-02-20 10:14:03 YES       YES SRMN    SRMN                   1               46
2015-02-20 10:11:03 NO        YES SRMN    SRMN                   1               45
2015-02-20 10:09:09 YES       YES SRMN    SRMN                   1               44
2015-02-20 10:03:54 NO        YES SRMN    SRMN                   1               43
2015-02-20 09:58:33 YES       YES SRMN    SRMN                   1               42
2015-02-20 09:52:53 YES       YES SRMN    SRMN                   1               41
2015-02-20 09:47:17 YES       YES SRMN    SRMN                   1               40
2015-02-20 09:45:50 YES       YES SRMN    SRMN                   1               39
2015-02-20 09:41:36 YES       YES SRMN    SRMN                   1               38
2015-02-20 09:21:34 YES       YES SRMN    SRMN                   1               37
2015-02-20 09:01:33 YES       YES SRMN    SRMN                   1               36
2015-02-20 08:41:35 YES       YES SRMN    SRMN                   1               35
2015-02-20 08:21:34 YES       YES SRMN    SRMN                   1               34
2015-02-20 08:01:33 YES       YES SRMN    SRMN                   1               33
2015-02-20 07:41:30 YES       YES SRMN    SRMN                   1               32
2015-02-20 07:41:29 YES       YES SRMN    SRMN                   1               31
2015-02-20 07:41:29 YES       YES SRMN    SRMN                   1               30
2015-02-20 07:41:29 YES       YES SRMN    SRMN                   1               29
2015-02-20 07:41:26 YES       YES SRMN    SRMN                   1               28
2015-02-20 07:41:23 YES       YES SRMN    SRMN                   1               27
2015-02-20 07:24:52 YES       YES SRMN    SRMN                   1               26

21 rows selected.

STANDBY: open read only test

ARMOR:(SYS@hawk):PHYSICAL STANDBY> @openreadonly
ARMOR:(SYS@hawk):PHYSICAL STANDBY> set echo on numw 16 lines 200
ARMOR:(SYS@hawk):PHYSICAL STANDBY> alter database open read only;

Database altered.

ARMOR:(SYS@hawk):PHYSICAL STANDBY> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
ARMOR:(SYS@hawk):PHYSICAL STANDBY> startup nomount;
ORACLE instance started.

Total System Global Area       1068937216 bytes
Fixed Size                        2260088 bytes
Variable Size                   281019272 bytes
Database Buffers                780140544 bytes
Redo Buffers                      5517312 bytes
ARMOR:(SYS@hawk):PHYSICAL STANDBY> alter database mount standby database;

Database altered.

ARMOR:(SYS@hawk):PHYSICAL STANDBY> select controlfile_type,open_mode,database_role,db_unique_name,standby_became_primary_scn from v$database
  2  ;

CONTROL OPEN_MODE            DATABASE_ROLE    DB_UNIQUE_NAME                 STANDBY_BECAME_PRIMARY_SCN
------- -------------------- ---------------- ------------------------------ --------------------------
STANDBY MOUNTED              PHYSICAL STANDBY hawk_sfo                                                0

ARMOR:(SYS@hawk):PHYSICAL STANDBY> @@stby.sql
ARMOR:(SYS@hawk):PHYSICAL STANDBY> set echo on numw 16 lines 200 pages 1000
ARMOR:(SYS@hawk):PHYSICAL STANDBY> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
  2  from
  3  (
  4    select thread# thrd, max(sequence#) almax from v$archived_log
  5    where resetlogs_change#=(select resetlogs_change# from v$database) and applied='YES'
  6    group by thread#
  7  ) al,
  8  (
  9    select thread# thrd, max(sequence#) lhmax
 10    from v$log_history
 11    where first_time=(select max(first_time) from v$log_history)
 12    group by thread#
 13  ) lh
 14  where al.thrd = lh.thrd
 15  ;

          Thread Last Seq Received Last Seq Applied
---------------- ----------------- ----------------
               1                46               46

ARMOR:(SYS@hawk):PHYSICAL STANDBY> select first_time,applied,archived,registrar,creator,thread#,sequence# from v$archived_log
  2  where resetlogs_change#=(select resetlogs_change# from v$database)
  3  and first_time>sysdate-3/24
  4  order by sequence# desc
  5  ;

FIRST_TIM APPLIED   ARC REGISTR CREATOR          THREAD#        SEQUENCE#
--------- --------- --- ------- ------- ---------------- ----------------
20-FEB-15 YES       YES SRMN    SRMN                   1               46
20-FEB-15 YES       YES SRMN    SRMN                   1               45
20-FEB-15 YES       YES SRMN    SRMN                   1               44
20-FEB-15 YES       YES SRMN    SRMN                   1               43
20-FEB-15 YES       YES SRMN    SRMN                   1               42
20-FEB-15 YES       YES SRMN    SRMN                   1               41
20-FEB-15 YES       YES SRMN    SRMN                   1               40
20-FEB-15 YES       YES SRMN    SRMN                   1               39
20-FEB-15 YES       YES SRMN    SRMN                   1               38
20-FEB-15 YES       YES SRMN    SRMN                   1               37
20-FEB-15 YES       YES SRMN    SRMN                   1               36
20-FEB-15 YES       YES SRMN    SRMN                   1               35
20-FEB-15 YES       YES SRMN    SRMN                   1               34
20-FEB-15 YES       YES SRMN    SRMN                   1               33
20-FEB-15 YES       YES SRMN    SRMN                   1               32
20-FEB-15 YES       YES SRMN    SRMN                   1               31
20-FEB-15 YES       YES SRMN    SRMN                   1               30
20-FEB-15 YES       YES SRMN    SRMN                   1               29
20-FEB-15 YES       YES SRMN    SRMN                   1               28
20-FEB-15 YES       YES SRMN    SRMN                   1               27
20-FEB-15 YES       YES SRMN    SRMN                   1               26

21 rows selected.

ARMOR:(SYS@hawk):PHYSICAL STANDBY> select name from v$tablespace;

NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
NEW1
NEW2

7 rows selected.

ARMOR:(SYS@hawk):PHYSICAL STANDBY>

PRIMARY: configuration

ANGEL:(SYS@hawk):PRIMARY> show parameter name

NAME                      TYPE        VALUE
------------------------- ----------- ----------------------------------------
cell_offloadgroup_name    string
db_file_name_convert      string
db_name                   string      hawk
db_unique_name            string      hawk
global_names              boolean     FALSE
instance_name             string      hawk
lock_name_space           string
log_file_name_convert     string
processor_group_name      string
service_names             string      hawk
ANGEL:(SYS@hawk):PRIMARY> show parameter convert

NAME                      TYPE        VALUE
------------------------- ----------- ----------------------------------------
db_file_name_convert      string
log_file_name_convert     string
ANGEL:(SYS@hawk):PRIMARY> show parameter standby

NAME                      TYPE        VALUE
------------------------- ----------- ----------------------------------------
standby_archive_dest      string      ?/dbs/arch
standby_file_management   string      AUTO
ANGEL:(SYS@hawk):PRIMARY> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/oradata/HAWK/datafile/o1_mf_system_bgf2mflo_.dbf
/oradata/HAWK/datafile/o1_mf_sysaux_bgf2mj87_.dbf
/oradata/HAWK/datafile/o1_mf_undotbs1_bgf2mkds_.dbf
/oradata/HAWK/datafile/o1_mf_users_bgf2mlf8_.dbf
/oradata/HAWK/datafile/o1_mf_new1_bggyc50z_.dbf
/oradata/HAWK/datafile/o1_mf_new2_bggyg5ky_.dbf

6 rows selected.

ANGEL:(SYS@hawk):PRIMARY>

STANDBY: configuration

ARMOR:(SYS@hawk):PHYSICAL STANDBY> show parameter name

NAME                      TYPE        VALUE
------------------------- ----------- ----------------------------------------
cell_offloadgroup_name    string
db_file_name_convert      string
db_name                   string      hawk
db_unique_name            string      hawk_sfo
global_names              boolean     FALSE
instance_name             string      hawk
lock_name_space           string
log_file_name_convert     string
processor_group_name      string
service_names             string      hawk
ARMOR:(SYS@hawk):PHYSICAL STANDBY> show parameter convert

NAME                      TYPE        VALUE
------------------------- ----------- ----------------------------------------
db_file_name_convert      string
log_file_name_convert     string
ARMOR:(SYS@hawk):PHYSICAL STANDBY> show parameter standby

NAME                      TYPE        VALUE
------------------------- ----------- ----------------------------------------
standby_archive_dest      string      ?/dbs/arch
standby_file_management   string      AUTO
ARMOR:(SYS@hawk):PHYSICAL STANDBY> select name from v$datafile;

NAME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/oradata/HAWK_SFO/datafile/o1_mf_system_bgf4o5x7_.dbf
/oradata/HAWK_SFO/datafile/o1_mf_sysaux_bgf4on2c_.dbf
/oradata/HAWK_SFO/datafile/o1_mf_undotbs1_bgf4of05_.dbf
/oradata/HAWK_SFO/datafile/o1_mf_users_bgf4oq4b_.dbf
/oradata/HAWK_SFO/datafile/o1_mf_new1_bggydmsq_.dbf
/oradata/HAWK_SFO/datafile/o1_mf_new2_bggyj2n3_.dbf

6 rows selected.

ARMOR:(SYS@hawk):PHYSICAL STANDBY>

PRIMARY: drop tablespace and register/apply log to STANDBY – review alert log

[oracle@armor:hawk:/u01/app/oracle/diag/rdbms/hawk_sfo/hawk/trace]
$ cat alert_hawk.log
Fri Feb 20 11:23:18 2015
ALTER DATABASE RECOVER  automatic standby database
Media Recovery Start
 started logmerger process
Fri Feb 20 11:23:18 2015
Managed Standby Recovery not using Real Time Apply
Parallel Media Recovery started with 2 slaves
Media Recovery Log /oradata/archivelog/hawk/hawk_55_1_872097259.arc
Recovery deleting file #5:'/oradata/HAWK_SFO/datafile/o1_mf_new1_bggydmsq_.dbf' from controlfile.
Deleted Oracle managed file /oradata/HAWK_SFO/datafile/o1_mf_new1_bggydmsq_.dbf
Recovery dropped tablespace 'NEW1'
Recovery deleting file #6:'/oradata/HAWK_SFO/datafile/o1_mf_new2_bggyj2n3_.dbf' from controlfile.
Deleted Oracle managed file /oradata/HAWK_SFO/datafile/o1_mf_new2_bggyj2n3_.dbf
Recovery dropped tablespace 'NEW2'
Media Recovery Log /oradata/archivelog/hawk/hawk_56_1_872097259.arc
Errors with log /oradata/archivelog/hawk/hawk_56_1_872097259.arc
Errors in file /u01/app/oracle/diag/rdbms/hawk_sfo/hawk/trace/hawk_pr00_13085.trc:
ORA-00308: cannot open archived log '/oradata/archivelog/hawk/hawk_56_1_872097259.arc'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-279 signalled during: ALTER DATABASE RECOVER  automatic standby database  ...
ALTER DATABASE RECOVER    CANCEL
Media Recovery Canceled
Completed: ALTER DATABASE RECOVER    CANCEL
[oracle@armor:hawk:/u01/app/oracle/diag/rdbms/hawk_sfo/hawk/trace]
$

Oracle Design Jam takes a look at the Future of Information

Usable Apps - Fri, 2015-02-20 13:47

By Sarahi Mireles and Kathy Miedema, Oracle Applications User Experience Communications and Outreach

From Kathy...

In keeping with a new emphasis and investment from Oracle on exploring emerging technology for ways to encourage the evolution of the Oracle user experience, the UX Innovation Events (@InnovateOracle) team held a design jam for Oracle employees in early February.

Since embracing its charter in Fall 2014, the team -- a branch of the Oracle Applications User Experience (UX) team -- has organized and held a rapid succession of design jams. This design jam focused on the future of information design.

Verbal Karate Chop, the team: (L-R) Tony Orciuoli, Sarahi Mireles, Sasha Boyko, Rob Hernandez, and Kathy Miedema (not pictured) 

It’s worth pointing out how exciting it is to be part of an organization that encourages a free flow of thinking and creativity by supporting events like these. Our team met a few times before the event to kick-start our brainstorming, and then took off an entire day to participate in this event.

We were well-supported during the event too – we had room to collaborate, materials to help us develop ideas, mentors to help guide us, food to keep us fueled.

Our team, Verbal Karate Chop, designed a product that builds on the technology behind Oracle Voice, pulling in information around a particular keyword or phrase to create meetings, help prepare for meetings, and even start a meeting hands-free if you happen to be driving in your car, for example.

Sarahi can better describe what it’s like to participate as a developer and build an idea like this on a tight deadline. Before turning it over to her, I’m happy to announce that our idea won both the People’s Choice award and the Best Use of Audio/Video award. This was my first time participating in such an event – what a thrill it was!

Innovate and diversify. Getting the message out: Kathy Miedema and Sarahi Mireles

... and from Sarahi

As a developer, I find it really fun going to a design jam. The best part after the brainstorming is starting to build your prototype. This can be something really simple or something quite complex, depending on the time you have and how fast you are able to play with whatever tool you are using.

Time was actually the key factor for this design jam. Having only a couple of hours to build your entire idea is what really makes your adrenaline surge.

We started by putting together all our ideas, then we drew the general design of the whole concept (I’m glad we had two designers on our team!), and after that, we built it.

From a non-designer point of view, I have to say that we designed some cool UIs after a couple of hours of pushing our brains to the maximum. And it was awesome to build out those ideas.

If you have the opportunity to join a design jam, do it! It’s also the best way to learn from other developers and non-developers, and to explore all kinds of crazy ideas for innovation in the enterprise.

Verbal Karate Chop, the user experience: People’s Choice and Best Use of Audio/Video awards

Explore more 

Find out more about this Oracle Applications User Experience design jam and about other events on the UX Innovation Events blog, and follow event happenings on Twitter.

The overall results of the design jam are here.

To discover more about the emerging technology and trends that drive the Oracle Applications User Experience strategy, get the free eBook from Vice President Jeremy Ashley (@jrwashley).

Data Guard Logical Standby – what does it mean?

The Oracle Instructor - Fri, 2015-02-20 11:34

With Data Guard, you have the choice between Physical and Logical Standby databases. Let’s see the differences! My demo starts with a Physical Standby that is then converted into a Logical Standby (hence the name of the database, logst):

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
    prima - Primary database
    logst - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

For now, logst is still a Physical Standby. It is called that because the datafiles of prima and logst are physically identical. I can even restore them from one side to the other:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 11:43:07 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > alter database datafile 4 offline;

Database altered.

Now I copy the datafile from the standby server uhesse2 to the primary server uhesse1 – there are different ways to do that, but scp is one:

SYS@logst > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/logst/users01.dbf

SYS@logst > exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
[oracle@uhesse2 ~]$ scp /u01/app/oracle/oradata/logst/users01.dbf uhesse1:/u01/app/oracle/oradata/prima/users01.dbf
The authenticity of host 'uhesse1 (192.168.56.10)' can't be established.
RSA key fingerprint is e9:e7:5b:8b:b2:33:42:26:89:03:54:0c:16:0d:98:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'uhesse1,192.168.56.10' (RSA) to the list of known hosts.
oracle@uhesse1's password: 
users01.dbf                                                                                               100% 5128KB   5.0MB/s   00:00    
[oracle@uhesse2 ~]$ 

When I try to online the datafile again on prima, it is as if I had restored it from backup:

SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > recover datafile 4;
Media recovery complete.
SYS@prima > alter database datafile 4 online;

Database altered.

The datafiles and also the archived logfiles are physically identical on both sites here; only the controlfiles are different. v$database (like v$datafile, by the way) derives its content from the controlfile:

SYS@prima > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
PRIMA						   2012613220 PRIMARY

SYS@prima > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
PRIMA						   2012613220 PHYSICAL STANDBY

Now I will convert it into Logical Standby:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 17:29:16 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > exec dbms_logstdby.build

PL/SQL procedure successfully completed.

SYS@prima > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > alter database recover to logical standby logst;

Database altered.

SYS@logst > shutdown immediate
ORA-01507: database not mounted


ORACLE instance shut down.
SYS@logst > startup mount
ORACLE instance started.

Total System Global Area  521936896 bytes
Fixed Size		    2214936 bytes
Variable Size		  314573800 bytes
Database Buffers	  201326592 bytes
Redo Buffers		    3821568 bytes
Database mounted.
SYS@logst > alter database open resetlogs;

Database altered.

SYS@logst > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
LOGST						   3156487356 LOGICAL STANDBY

SYS@logst > exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> remove database logst;
Removed database "logst" from the configuration
DGMGRL> add database logst as connect identifier is logst;
Database "logst" added
DGMGRL> enable database logst;
Enabled.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
    prima - Primary database
    logst - Logical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

One significant change is that the DBID and the name are now different from the primary database, as you see above. And the datafiles are no longer physically identical:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 17:38:56 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > alter database datafile 4 offline;

Database altered.

SYS@prima > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > host cp /u01/app/oracle/oradata/prima/users01.dbf /u01/app/oracle/oradata/prima/users01.old

I copy the original file because I know that the restore from logst will not work. It is just to show my point:

[oracle@uhesse2 ~]$ scp /u01/app/oracle/oradata/logst/users01.dbf uhesse1:/u01/app/oracle/oradata/prima/users01.dbf
oracle@uhesse1's password: 
users01.dbf                                                                                   100% 5128KB   5.0MB/s   00:00    
SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01122: database file 4 failed verification check
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'
ORA-01206: file is not part of this database - wrong database id

Exactly. logst is now an autonomous database that just incidentally does (nearly) the same DML as prima. It is no longer the same as prima at the Oracle block level. The rowids from prima have no meaning on logst any more:

DGMGRL> edit database logst set state=apply-on;
Succeeded.
SYS@prima > insert into scott.dept values (50,'TEST','TEST');
insert into scott.dept values (50,'TEST','TEST')
                  *
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > host cp /u01/app/oracle/oradata/prima/users01.old /u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > recover datafile 4;
Media recovery complete.
SYS@prima > alter database datafile 4 online;

Database altered.

SYS@prima > insert into scott.dept values (50,'TEST','TEST');

1 row created.

SYS@prima > commit;
Commit complete.

SYS@prima > select rowid,dept.* from scott.dept where deptno=50;

ROWID		       DEPTNO DNAME	     LOC
------------------ ---------- -------------- -------------
AAADS8AAEAAAACNAAE	   50 TEST	     TEST

This rowid is what we normally record in the redo entries and it would be sufficient to retrieve that row on the primary and also on a physical standby where we do “Redo Apply” (another term for “recover database”). But that rowid is different on logst:

SYS@logst > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > select rowid,dept.* from scott.dept where deptno=50;

ROWID                  DEPTNO DNAME          LOC
------------------ ---------- -------------- -------------
AAADS8AAEAAAACOAAA         50 TEST           TEST

That is why we need to put additional information – supplemental log data – into the redo entries on the primary. It will help the SQL Apply mechanism to retrieve the row there:

Logical Standby Architecture

In addition to the usual change record, the supplemental log data contains at least the primary/unique key, as shown in the picture. In the absence of a primary/unique key, every column of a modified row is written into the redo logs. That may impact the performance of the primary database. Another serious drawback of Logical Standby is that not every datatype and not every operation on the primary is supported by the SQL Apply mechanism. The number of unsupported datatypes decreases version by version, though.
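Both points can be checked from the data dictionary. A quick sketch using standard views (dbms_logstdby.build is what enabled the supplemental logging on the primary above):

select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui
from v$database;

select distinct owner, table_name
from dba_logstdby_unsupported;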

The demo and the sketch above are from my presentation about Transient Logical Standby at the Oracle University Expert Summit 2015 in Dubai – really an amazing location! Hope you find it useful :-)


Tagged: Data Guard
Categories: DBA Blogs

Database Flashback -- 5

Hemant K Chitale - Fri, 2015-02-20 10:15
Continuing my series on Database Flashback.

Here I demonstrate that Flashback Logs alone are not sufficient.  The process of FLASHBACK DATABASE does need *some* redo entries from Archive/Online Redo Logs.



[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 20 23:18:37 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
20-FEB-15

SYS>show parameter db_flashback_retention_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14573526 17-FEB-15             1440       78561280                415383552

SYS>

My database seems to have adequate Flashback Capability in the Flashback Logs.  Let me run some transactions and generate Redo and Flashback.

SYS>connect hemant/hemant
Connected.
HEMANT>drop table obj_list;
drop table obj_list
*
ERROR at line 1:
ORA-00942: table or view does not exist


HEMANT>create table obj_list tablespace users
2 as select * from dba_objects where 1=2;

Table created.

HEMANT>select tablespace_name
2 from user_tables
3 where table_name = 'OBJ_LIST';

TABLESPACE_NAME
------------------------------
USERS

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects;

225138 rows created.

HEMANT>rollback;

Rollback complete.

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects;

225135 rows created.

HEMANT>delete obj_list;

225135 rows deleted.

HEMANT>
HEMANT>select count(*)
2 from v$archived_log
3 where first_time >
4 (select startup_time
5 from v$instance)
6 /

COUNT(*)
----------
2

HEMANT>
HEMANT>alter system switch logfile;

System altered.

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 union all
8 select * from dba_objects;

300180 rows created.

HEMANT>insert into obj_list
2 select * from obj_list;

300180 rows created.

HEMANT>select count(*) from obj_list;

COUNT(*)
----------
600360

HEMANT>commit;

Commit complete.

HEMANT>select count(*) from v$archived_log
2 where first_time >
3 (select startup_time from v$instance)
4 /

COUNT(*)
----------
4

HEMANT>

Now, let's suppose that a scheduled (periodic) archive log backup job kicks in and creates a backup of archivelogs and deletes them.

HEMANT>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:30:36 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> backup as compressed backupset archivelog all delete input;

Starting backup at 20-FEB-15
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=58 device type=DISK
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=106 STAMP=872033779
channel ORA_DISK_1: starting piece 1 at 20-FEB-15
channel ORA_DISK_1: finished piece 1 at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnshxs_.bkp tag=TAG20150220T233055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_18/o1_mf_1_3_bg9dcl3v_.arc RECID=106 STAMP=872033779
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=112 STAMP=872205844
input archived log thread=1 sequence=4 RECID=113 STAMP=872205859
input archived log thread=1 sequence=5 RECID=114 STAMP=872205867
input archived log thread=1 sequence=6 RECID=115 STAMP=872206048
input archived log thread=1 sequence=7 RECID=116 STAMP=872206098
input archived log thread=1 sequence=8 RECID=117 STAMP=872206254
channel ORA_DISK_1: starting piece 1 at 20-FEB-15
channel ORA_DISK_1: finished piece 1 at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_3_bggndmz4_.arc RECID=112 STAMP=872205844
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_4_bggnf0qy_.arc RECID=113 STAMP=872205859
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_5_bggnf8ty_.arc RECID=114 STAMP=872205867
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_6_bggnlzvr_.arc RECID=115 STAMP=872206048
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_7_bggnnk6c_.arc RECID=116 STAMP=872206098
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_8_bggnsg5k_.arc RECID=117 STAMP=872206254
Finished backup at 20-FEB-15

Starting Control File and SPFILE Autobackup at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/autobackup/2015_02_20/o1_mf_s_872206274_bggnt34k_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 20-FEB-15

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$

Now, suppose that I need to Flashback the database.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 20 23:32:36 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14577571 19-FEB-15             1440      135348224               3784998912

SYS>select to_char(oldest_flashback_time,'DD-MON HH24:MI:SS')
2 from V$flashback_database_log;

TO_CHAR(OLDEST_FLASHBACK
------------------------
19-FEB 00:04:02

SYS>

Notice how the OLDEST_FLASHBACK_TIME has changed from 17-Feb to the midnight of 18/19-Feb ! Apparently, my FRA cannot hold very many Flashback Logs.
As I have mentioned in two posts earlier, here and here, the scope of the actual ability to Flashback may vary.

Can I flashback to SCN 14577572 ?  Let me give it a try.

SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>flashback database to SCN 14577572;
flashback database to SCN 14577572
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577571 to SCN 14577572
ORA-38761: redo log sequence 1 in thread 1, incarnation 5 could not be accessed


SYS>

Aaha ! Apparently, Oracle needs to read some redo from Archive Log(s) !  So, having Flashback Logs alone is *not* sufficient.  I know that I need the database to be running in ArchiveLog mode.  But I should also know that if I want to Flashback to a particular Time or SCN, I will need the corresponding ArchiveLog as well !  (Imagine trying to Flashback to 3 days ago and having purged all ArchiveLogs simply because I do daily Full Backups and have a Retention of only 2 days !)
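A quick sanity check before purging ArchiveLogs might be a query like this sketch, which lists the archived logs still inside the flashback window (standard v$ view columns; anything listed here that has been deleted from disk must be restored from backup before such a Flashback can succeed):

SYS>select sequence#, first_change#, next_change#
  2  from v$archived_log
  3  where deleted = 'NO'
  4  and next_change# >= (select oldest_flashback_scn
  5                       from v$flashback_database_log)
  6  order by sequence#
  7  /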

So, I must take the necessary action now.

SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:43:43 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 1 until sequence 2;

Starting restore at 20-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=1
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001619_bg9gpq87_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001619_bg9gpq87_.bkp tag=TAG20150219T001619
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=2
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001930_bg9gwl9w_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001930_bg9gwl9w_.bkp tag=TAG20150219T001930
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 20-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14577572
SYS>/

Flashback complete.

SYS>alter database open read only;

Database altered.

SYS>

What if I want to Flashback to another, later time ?

SYS>select sequence#, first_change#, to_char(first_time,'DD-MON HH24:MI:SS')
2 from v$archived_log
3 where first_time > trunc(sysdate)
4 order by 1;

 SEQUENCE# FIRST_CHANGE# TO_CHAR(FIRST_TIME,'DD-M
---------- ------------- ------------------------
         4      14580736 20-FEB 23:24:03
         5      14581651 20-FEB 23:24:16
         6      14583536 20-FEB 23:24:23
         7      14584203 20-FEB 23:27:27
         8      14584351 20-FEB 23:28:17

SYS>
SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>flashback database to SCN 14584205;
flashback database to SCN 14584205
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577573 to SCN 14584205
ORA-38761: redo log sequence 5 in thread 1, incarnation 5 could not be accessed


SYS>
SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:53:48 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 5 until sequence 6;

Starting restore at 20-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=5
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=6
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 20-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ cd /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20
sh-3.2$ ls -l
total 63344
-rw-rw---- 1 oracle oracle 494592 Feb 20 23:44 o1_mf_1_1_bggol5t8_.arc
-rw-rw---- 1 oracle oracle 18432 Feb 20 23:44 o1_mf_1_2_bggol6wm_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 20 23:54 o1_mf_1_5_bggp4zbp_.arc
-rw-rw---- 1 oracle oracle 13635072 Feb 20 23:54 o1_mf_1_6_bggp4zh3_.arc
sh-3.2$

I have Sequences 1 and 2 that were restored for the first flashback and Sequences 5 and 6 that have been restored now. Do I need Sequences 3 and 4 ? Do I need Sequence 7 (that contains the Redo beyond SCN 14584203) ?

Let's see.
sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14584205
SYS>/
flashback database to SCN 14584205
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577573 to SCN 14584205
ORA-38761: redo log sequence 3 in thread 1, incarnation 5 could not be accessed


SYS>
Apparently, I also need Sequence 3 at least, and (maybe ?) Sequence 4 !  Why ?  Because my database is currently at an SCN lower than the one I want to Flashback to, and the corresponding redo is required.  (If I had *not* done the first Flashback to the lower SCN, I wouldn't need these ArchiveLogs !)

SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Feb 21 00:01:34 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 3 until sequence 4;

Starting restore at 21-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=17 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=3
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=4
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 21-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ cd /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20
sh-3.2$ ls -l
total 63344
-rw-rw---- 1 oracle oracle 494592 Feb 20 23:44 o1_mf_1_1_bggol5t8_.arc
-rw-rw---- 1 oracle oracle 18432 Feb 20 23:44 o1_mf_1_2_bggol6wm_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 20 23:54 o1_mf_1_5_bggp4zbp_.arc
-rw-rw---- 1 oracle oracle 13635072 Feb 20 23:54 o1_mf_1_6_bggp4zh3_.arc
sh-3.2$ cd ../*21
sh-3.2$ ls -l
total 80272
-rw-rw---- 1 oracle oracle 31472640 Feb 21 00:01 o1_mf_1_3_bggpmf06_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 21 00:01 o1_mf_1_4_bggpmdxk_.arc
sh-3.2$

Here is something important (nothing to do with Flashback Database).  The server clock went past midnight into 21-Feb so the restored files went into 2015_02_21 and not 2015_02_20 !

sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14584205
SYS>/

Flashback complete.

SYS>

Remember my question about whether I would need Sequence 7 ?  Let's see what the alert.log shows about the Flashback Database actions.

Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_2_bggol6wm_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_21/o1_mf_1_3_bggpmf06_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_21/o1_mf_1_4_bggpmdxk_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_5_bggp4zbp_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_6_bggp4zh3_.arc
Recovery of Online Redo Log: Thread 1 Group 1 Seq 7 Reading mem 0
Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo01.log
Incomplete Recovery applied until change 14584206 time 02/20/2015 23:28:03
Flashback Media Recovery Complete
Completed: flashback database to SCN 14584205

Another learning point !  Sequence 7 was applied from the Online Redo Log file.

What we have learned :
1.  If we Flashback the database to a particular SCN / Time / Restore Point, Oracle does need the Redo from the Archive / Online Redo Log file that was active at that time.  It still needs some Redo to make the database consistent (e.g. apply Undo)

2. If we Flashback the database to SCN 101 and then (without OPEN RESETLOGS), choose to Flashback to a subsequent SCN 201, we again need ArchiveLogs !

3. Flashback from the SQL command-line is intelligent enough to use the Online Redo Log, but, unlike RMAN's RECOVER DATABASE, it will not automatically restore ArchiveLogs as they are required !




Categories: DBA Blogs

PeopleTools 8.54: Table/Index Partitioning

David Kurtz - Fri, 2015-02-20 10:00
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
Partitioning in Oracle
Partitioning of table (and index) segments involves breaking them into several smaller segments where certain data values only occur in certain segments.  Thus if a query is looking for a certain data value it may be able to eliminate some partitions without having to scan them, because by definition those values cannot occur in those partitions, saving logical and physical reads and improving performance.  This is called partition elimination or pruning.  It is often the principal reason for partitioning a table.
Physically each partition in a partitioned table is a table, but logically the partitions form a single table.  There should be no need to change application code in order to use partitioning, but the way the code is written will affect Oracle's ability to perform partition elimination.
The following diagram is taken from the Oracle 11g Database VLDB and Partitioning Guide

If a query was only looking for data in March, then it could eliminate the January and February partitions without inspecting them.  However, if it was filtering data by another column then it would still have to inspect all three partitions.  Application design will determine whether, and if so how, to partition a table.
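As a minimal sketch of the idea (a hypothetical SALES table, not a PeopleSoft record):

CREATE TABLE sales
(sale_date DATE
,amount    NUMBER)
PARTITION BY RANGE (sale_date)
(PARTITION jan VALUES LESS THAN (TO_DATE('01022015','DDMMYYYY'))
,PARTITION feb VALUES LESS THAN (TO_DATE('01032015','DDMMYYYY'))
,PARTITION mar VALUES LESS THAN (TO_DATE('01042015','DDMMYYYY'))
)
/

-- the predicate is on the partitioning column, so Oracle can prune
-- the JAN and FEB partitions and scan only MAR
SELECT SUM(amount) FROM sales
WHERE sale_date >= TO_DATE('01032015','DDMMYYYY')
/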
NB: I can't mention partitioning without also saying that Partitioning Option is a licensed feature of Oracle Database Enterprise Edition.
Partitioning in PeopleTools prior to 8.54
I have to declare an interest.  I have been using partitioning in PeopleSoft since PeopleTools 7.5, when it was introduced in Oracle 8i.  The line from PeopleSoft was that you can introduce partitioning without invalidating your support (the same is not true of E-Business Suite).  Application Designer won't generate partition DDL, so you were supposed to give your DDL scripts to the DBA, who would add the partition clauses.  So if you wanted to use partitioning, you were plunged into a hellish world of manual scripting.  One of the key benefits of Application Designer is that it generates the DDL for you.
Since 2001, I have developed a PL/SQL utility that effectively reverse engineers the functionality of Application Designer that builds DDL scripts, but then adds the partitioning clauses.  It also adds partitions, and has been extended to assist with partition-wise data archive/purge.  It is in use at a number of sites using Global Payroll (for which it was originally designed) and Financials (see Managing Oracle Table Partitioning in PeopleSoft Applications with GFC_PSPART Package).
So in investigating the new partitioning feature of PeopleTools 8.54 I was concerned:
  • Is my partitioning utility now obsolete?  Or should I continue to use it?
  • How would I be able to retrofit existing partitioning into PeopleTools?
Partitioning in PeopleTools 8.54
I am going to illustrate the behaviour of the new partition support with a number of examples.
Example 1: Range Partitioning PSWORKLIST
In this example, I will range partition table PSWORKLIST on the INSTSTATUS column. The valid statuses for this column are:

INSTSTATUS  Description
----------  -----------
0           Available
1           Selected
2           Worked
3           Cancelled
  • the first partition will only contain statuses 0 and 1, which are the open worklist items, 
  • the other partition will contain the other statuses; 2 and 3 which are the closed items. 
The application repeatedly queries this table looking for worklist items to be processed, using the criterion INSTSTATUS < 2.  Over time, unless data is archived, the vast majority of entries are closed.  This partitioning strategy will enable the application to find the open worklist items quickly, eliminating the larger closed partition and querying only the smaller open item partition.  As items are worked or cancelled, their statuses are updated to 2 or 3, and they will automatically be moved to the other partition.
This is something that I have actually done on a customer site, and it produced a considerable performance improvement.
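One quick way to confirm that the open-item queries really do prune is to check the Pstart/Pstop columns in the execution plan; a sketch (the exact plan output will vary):

EXPLAIN PLAN FOR
SELECT * FROM psworklist WHERE inststatus < 2
/
SELECT * FROM table(dbms_xplan.display)
/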
PeopleSoft provides a component that allows you to configure the partitioning strategy for a record.  However, I immediately ran into my first problem. 
  • The Partitioning Utility component will only permit me to partition by a PeopleSoft unique key column.  If a query doesn't have a predicate on the partitioning column, then Oracle will certainly not be able to prune any partitions, and the query will perform no better than if the table had not been partitioned.  While a column frequently used in selective criteria is often the subject of an index, and sometimes the unique key, this is not always the case.  It does not make sense to assume this in this utility component.
  • In this case, INSTSTATUS is not part of any delivered index, though I added it to index B.  I have seen that the application frequently queries the PSWORKLIST table by INSTSTATUS, so it does make sense to partition it on that column.
However, I can customise the component to get around this.  The key field is validated by the view PPMU_RECKEYS_VW.
SELECT A.RECNAME 
,A.FIELDNAME
FROM PSRECFIELDALL A
WHERE %DecMult(%Round(%DECDIV(A.USEEDIT,2),0 ) , 2) <> A.USEEDIT
I can change the view as follows:
SELECT A.RECNAME
,A.FIELDNAME
FROM PSRECFIELDALL A /* WHERE %DecMult(%Round(%DECDIV(A.USEEDIT,2),0 ) , 2) <> A.USEEDIT*/
, PSDBFIELD B
WHERE A.FIELDNAME = B.FIELDNAME
AND B.FIELDTYPE IN(0,2,3,4,5,6)
So, now I can specify the partitioning for this table in the Partitioning Utility Component
  • I notice that I can leave the tablespace blank in the component, but the TABLESPACE keyword is still generated, so I have to put a tablespace in.  It is valid to omit physical attributes at partition level, and they will be inherited from table level, and similarly for table level.
  • The component automatically adds a MAXVALUE partition.  This means that it is valid to put any value into the partitioning column; without it, an unmatched value would cause an error.  However, it might not be what I want.
  • The component also adds a table storage clause, overriding anything specified in the record, with a fixed PCTFREE 20 which applies to all partitions.  Again this might not be what I want.  The value of PCTFREE depends on whether and how I update data in the table. 
  • There are a number of things that I can't control in this component
    • The name of MAXVALUE partition
    • The tablespace of the MAXVALUE partition, which defaults to be the same tablespace as the last defined partition, which might not be what I want.
    • Any other physical attribute of any of the partitions; for example, I might want a different PCTFREE on partitions containing data that will not be updated.
  • The component adds a clause to enable row movement.  This permits Oracle to move rows between partitions if necessary when the value of the partitioning key column is updated.  In this case it is essential, because as worklist items are completed they move from the first partition to the other.  ALTER TABLE ... SHRINK also requires row movement, so it is useful to enable it generally (see the sketch below).
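A minimal sketch of what row movement means in practice, assuming the open/closed partitioning described above:

ALTER TABLE psworklist ENABLE ROW MOVEMENT
/
-- without row movement enabled, an update that changes the partition key
-- would fail with ORA-14402: updating partition key column would cause a
-- partition change
UPDATE psworklist SET inststatus = '2' WHERE inststatus = '1'
/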
The partitioning definition can be viewed in Application Designer under Tools -> Data Administration -> Partitioning.


The create table script (PSBUILD.SQL) does not contain any partition DDL.  So first you build the table and then alter it to be partitioned.  To be fair, this limitation is set out in the PeopleTools documentation, and it is not unreasonable, as you would often build the table and then decide to partition it.  I do the same in my own utility.
-- Start the Transaction 


-- Create temporary table

CREATE TABLE PSYPSWORKLIST (BUSPROCNAME VARCHAR2(30) DEFAULT ' ' NOT
NULL,

DESCR254_MIXED VARCHAR2(254) DEFAULT ' ' NOT NULL) PARTITION BY
RANGE (INSTSTATUS)
(
PARTITION OPEN VALUES LESS THAN (2) TABLESPACE PTTBL,
PARTITION PE_MAXVALUE VALUES LESS THAN (MAXVALUE) TABLESPACE PTTBL
)
PCTFREE 20 ENABLE ROW MOVEMENT
/

-- Copy from source to temp table

INSERT INTO PSYPSWORKLIST (
BUSPROCNAME,

DESCR254_MIXED)
SELECT
BUSPROCNAME,

DESCR254_MIXED
FROM PSWORKLIST
/

-- CAUTION: Drop Original Table

DROP TABLE PSWORKLIST
/

-- Rename Table

RENAME PSYPSWORKLIST TO PSWORKLIST
/

-- Done

CREATE UNIQUE INDEX PS_PSWORKLIST ON PSWORKLIST (BUSPROCNAME,
ACTIVITYNAME,
EVENTNAME,
WORKLISTNAME,
INSTANCEID)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "PSINDEX"
/
ALTER INDEX PS_PSWORKLIST NOPARALLEL LOGGING
/

CREATE INDEX PSBPSWORKLIST ON PSWORKLIST (OPRID,
INSTSTATUS)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "PSINDEX"
/
ALTER INDEX PSBPSWORKLIST NOPARALLEL LOGGING
/
CREATE INDEX PSBWORKLIST ON PSWORKLIST
('') LOCAL TABLESPACE PTTBL
/

The DDL to create the partitioned index does not appear properly.  The first CREATE INDEX command was generated by Application Designer, extracting it from the catalogue with DBMS_METADATA.  This functionality was introduced in PeopleTools 8.51 to preserve existing configuration.  The second create index comes from the partitioning definition.
  • The index column list is missing; it should come from the column list defined in Application Designer.
  • The locally partitioned index is in the same tablespace as the table instead of the tablespace defined on the index.
    • I would not normally keep indexes in the same tablespace as the table (the rationale is that in the case of having to recover only a tablespace with indexes then I could rebuild it instead of recovering it).
I also note that the table is not altered NOLOGGING, yet the indexes are still made NOPARALLEL.  The default degree of parallelism on a partitioned table is equal to the number of partitions, so it will cause parallel query to be invoked on the table access.
  • I strongly recommend against generally allowing parallel query in all SQLs that reference a partitioned table in an OLTP system, which is what PeopleSoft is.  There are occasions where parallel query is the right thing to do, and in those cases I would use a hint, SQL profile or SQL patch (a sketch for checking and resetting the table's degree follows).
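A sketch for checking, and if necessary resetting, the table's default degree of parallelism:

SELECT table_name, degree FROM user_tables WHERE table_name = 'PSWORKLIST'
/
ALTER TABLE psworklist NOPARALLEL
/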
If I leave the Partitioning Utility component and then go back to a record where I have previously created partition DDL, then the partition DDL is still there, but all the other information that I typed in has disappeared.



If you trace the SQL generated by this component while entering partition details and generating partition DDL, then the only two tables that are updated at all are PS_PTTBLPARTDDL and PS_PTIDXPARTDDL.  They are both keyed on RECNAME and PLATFORMID and have just one other column, a CLOB to hold the DDL.
  • The partition information disappears because there is nowhere to hold it persistently, and the component cannot extract it from the DDL.  It was being entered into a derived work record.
    • So it is not going to be much help when I want to adjust partitioning in a table that is already partitioned.  For example, over time, I might want to add new partitions, compress static partitions, or purge old ones.
  • It is also clear that there is no intention to support different partitioning strategies for different indexes on the same table.  There are certainly cases where a table will have one or more locally partitioned indexes and some global indexes that may or may not be partitioned.
  • Even these two tables are not fully integrated into Application Designer.  There is a throwaway line in Appendix E of the Data Management Guide - Administering Databases on Oracle: "Record and index partitioning is not migrated as part of the IDE project. If you want to migrate the partitioning metadata along with the record, you will need to…" copy it yourself, and it goes on to recommend creating a Data Migration Project in the Data Migration Workbench.
Sample 2: Import Existing Partitioning
Sticking with PSWORKLIST, I have partitioned it exactly the way I want.  The partition DDL was generated by my own partitioning utility.  I have added INSTSTATUS to index B.
CREATE TABLE sysadm.psworklist
(busprocname VARCHAR2(30) NOT NULL

,descr254_mixed VARCHAR2(254) NOT NULL
)
TABLESPACE PTTBL
PCTFREE 10 PCTUSED 80
PARTITION BY RANGE(INSTSTATUS)
(PARTITION psworklist_select_open VALUES LESS THAN ('2')
,PARTITION psworklist_worked_canc VALUES LESS THAN (MAXVALUE) PCTFREE 1 PCTUSED 90
)
ENABLE ROW MOVEMENT
PARALLEL
NOLOGGING
/

ALTER TABLE sysadm.psworklist LOGGING NOPARALLEL MONITORING
/

CREATE INDEX sysadm.ps0psworklist ON sysadm.psworklist
(transactionid
,busprocname
,activityname
,eventname
,worklistname
,instanceid
)
TABLESPACE PSINDEX
PCTFREE 10
PARALLEL
NOLOGGING
/

CREATE INDEX sysadm.psbpsworklist ON sysadm.psworklist
(oprid
,inststatus
)
LOCAL
(PARTITION psworklistbselect_open
,PARTITION psworklistbworked_canc PCTFREE 1
)
TABLESPACE PSINDEX
PCTFREE 10
PARALLEL
NOLOGGING
/
ALTER INDEX sysadm.psbpsworklist LOGGING
/
ALTER INDEX sysadm.psbpsworklist NOPARALLEL
/

The DDL in the Maintain Partitioning box in Application Designer is extracted from the data dictionary using the Oracle supplied DBMS_METADATA package.  Application Designer has done this since PeopleTools 8.51 for index build scripts, but now you can see the DDL directly in the tool.
When I generate an alter table script I still get two create index commands for the partitioned index.  The second one comes from the generated partition DDL and is not correct because it still doesn't have a column list.
CREATE   INDEX PSBPSWORKLIST ON PSWORKLIST (OPRID,
INSTSTATUS)
PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "PSINDEX" LOCAL
(PARTITION "PSWORKLISTBSELECT_OPEN"
PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "PSINDEX" ,
PARTITION "PSWORKLISTBWORKED_CANC"
PCTFREE 1 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "PSINDEX" )
/
ALTER INDEX PSBPSWORKLIST NOPARALLEL LOGGING
/
CREATE INDEX PSBPSWORKLIST ON PSWORKLIST ('') LOCAL TABLESPACE
PTTBL
/

Example 3 - GP_RSLT_ACUM
I have now chosen to partition one of the Global Payroll result tables.  This is often the largest table in a payroll system.  I have seen more than 1 billion rows in this table at one customer.  In a Global Payroll system, I usually:
  • range partition payroll tables on EMPLID to match the streamed processing (in GP streaming means concurrently running several Cobol or Application Engine programs to process different ranges of employees).  So there is a 1:1 relationship between payroll processes and physical partitions
  • the largest result tables are sub-partitioned on CAL_RUN_ID so each payroll period is in a separate physical partition.  Later I can archive historical payroll data by partition.
Here, I have swapped the partitioning over.  I have partitioned by CAL_RUN_ID and sub-partitioned by EMPLID.  I explain why below.


    And this is the table DDL that the utility generated.
    PARTITION BY RANGE (CAL_RUN_ID) 
    SUBPARTITION BY RANGE (EMPLID)
    SUBPARTITION TEMPLATE
    (
    SUBPARTITION SUB1 VALUES LESS THAN ('K9999999999'),
    SUBPARTITION SUB2 VALUES LESS THAN ('KE999999999'),
    SUBPARTITION SUB3 VALUES LESS THAN ('KM999999999'),
    SUBPARTITION SUB4 VALUES LESS THAN ('KT999999999') ,
    SUBPARTITION PE_MAXVALUE VALUES LESS THAN (MAXVALUE)
    )
    (
    PARTITION STRM1 VALUES LESS THAN ('K9999999999') TABLESPACE GPPART1,
    PARTITION STRM2 VALUES LESS THAN ('KE999999999') TABLESPACE GPPART2,
    PARTITION STRM3 VALUES LESS THAN ('KM999999999') TABLESPACE GPPART3,
    PARTITION STRM4 VALUES LESS THAN ('KT999999999') TABLESPACE GPPART4,
    PARTITION PE_MAXVALUE VALUES LESS THAN (MAXVALUE) TABLESPACE GPPART4
    )
    PCTFREE 20 ENABLE ROW MOVEMENT

    • Use of the sub-partition template clause simplifies the SQL.  There is certainly a lot less of it.  However, it means you get all the sub-partitions within all partitions.  That might not be what you want.  In this demo database both employees and calendars are prefixed by something that corresponds to legislature, so some sub-partitions will be empty.  They won't take up any physical space, due to deferred segment creation, but it would be better not to build them at all (see the query sketch after this list).
    • I can specify the tablespace on the sub-partitions in the utility component, but there is no tablespace on the sub-partition template in the DDL.  I care more about putting different payroll periods into different tablespaces than different ranges of employees (so I can compress and purge data later), so I swapped the partition key columns and have range partitioned on CAL_RUN_ID and sub-partitioned on EMPLID.
    In Global Payroll, partitioning is required to support streamed processing.  Tables are range partitioned on EMPLID to match the stream definitions.  In UK payroll, there are 45 tables that are updated or heavily referenced by streamed payroll processes that should all have similar range partitioning on EMPLID. 
    In PeopleTools 8.54, it is not possible to define a single partitioning strategy and consistently apply it to several tables.  Even if the data entered into the partition utility component was retained, I would have to enter it again for each table.
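    As a sketch, this query shows which sub-partition segments were actually created (SEGMENT_CREATED appears in the catalogue views from 11gR2 onwards; the PS_GP_RSLT_ACUM table name follows the usual PeopleSoft record naming convention):

    SELECT partition_name, subpartition_name, segment_created
    FROM user_tab_subpartitions
    WHERE table_name = 'PS_GP_RSLT_ACUM'
    ORDER BY partition_name, subpartition_position
    /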
    Conclusion
    When I heard that PeopleTools would have native support for partitioning, if only in Oracle, I was hopeful that we would get something that would bring the process of migrating and building partitioned tables in line with normal tables.  Unfortunately, I have to say that I don't think the partitioning support that I have seen so far is particularly useful.
    • There is no point typing in a lot of partition data into a utility component that doesn't retain the data.
    • As with materialized views, table partitioning is something on which DBAs will have to advise and will probably implement and maintain.  This component doesn't really help them do anything they can't already do with a text editor!
    • Even the minimal partition data that the utility component does retain is not migrated between environments by Application Designer when you migrate the record.
    Again, I think the problems stem from PeopleTools development trying to minimize the level of alteration to the Application Designer.  The partitioning utility component looks good because it sets out a lot of the partition attributes and metadata that you do need to consider, but there is no data structure behind that to hold it.
    I would like to see PeopleTools tables to hold partitioning metadata for tables and indexes, and for Application Designer to build DDL scripts to create and alter partitioned tables, to add partitions to existing tables, and then to migrate those definitions between environments.
    One positive that I can take from this is that Oracle has now clearly stated that it is reasonable to introduce partitioning into your PeopleSoft application without invalidating your support.  The position hasn't actually changed, but now there is clarity.
    ©David Kurtz, Go-Faster Consultancy Ltd.

    The Database Protection Series- Vulnerability Assessments

    Chris Foot - Fri, 2015-02-20 09:07

    This is the fourth article of a series that focuses on database security.  In my introduction, I provide an overview of the database protection process and what is to be discussed in future installments.   In last month’s article, we finished our discussion of the most common threats and vulnerabilities.  In this latest installment, we’ll review the database vulnerability assessment process.  We’ll begin by learning how to perform an initial database vulnerability assessment.   In addition, we’ll discuss the importance of performing assessments on a regular basis to ensure that no new security vulnerabilities are introduced into our environment.

    Vulnerability Assessment Overview

    The vulnerability assessment provides a detailed security analysis of the database systems being protected.  The systems' release and patch levels are identified and compared to vendor security patch distributions, and the assessment also measures how well industry and internal security best practices are being followed.  The types of vulnerabilities run the spectrum, from weak and default passwords to unpatched (and often well-known) database software weaknesses.

    Ranking the vulnerabilities allows the highest priority issues to be addressed more quickly than their less important counterparts.  After the vulnerabilities are addressed, the configuration is used as a template for future database implementations.  The configuration assessment template document should be reviewed monthly to ensure that it protects against newly identified database system and administrative process vulnerabilities.
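    As a simple example of the kind of check these assessments automate: recent Oracle releases (11g onwards) ship a catalog view that flags accounts still using a known default password.  A minimal sketch:

    SELECT username FROM sys.dba_users_with_defpwd ORDER BY username;

    A scanner product runs thousands of checks of this kind, across platforms, and keeps them current.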

    Third-Party Vulnerability Scanners

    You will notice that I often refer to the McAfee database security protection products in my blogs.  If you are truly serious about protecting your database data, you’ll quickly find that partnering with a security vendor is an absolute requirement and not “something nice to have.”   Since RDX offers breach protection as a service, we needed to choose a leading vendor that has a strong offering in the database security space.

    To better understand the benefits that these products provide, I’ll use McAfee’s vulnerability scanner as an example.   This isn’t a sales pitch for the product or the vendor; it is the product that RDX chose and, as a result, the one whose features I have the most experience with.   There are numerous scanning products available on the market to choose from.

    The McAfee Vulnerability Manager is a security software product that uses a library of predefined vulnerabilities to scan the database being evaluated.   McAfee’s security labs, consisting of 400 researchers, have created over 4,700 vulnerability checks that evaluate the potential risks generated from a myriad of threat vectors.

    Here’s a laundry list of the benefits that McAfee provides to RDX.   As stated previously, the intent is to highlight the capabilities these types of products provide and why they are so important to achieving our goal of safeguarding our sensitive database data stores.

    • Scanner rules are automatically updated regularly to identify new vulnerabilities
    • Provides templates for PCI DSS, SOX, HIPAA and other regulations, as well as user defined scans for DBAs, developers, and security teams
    • 4,800 vulnerability checks including:
      Auditing, OS Tests, Backdoor Detection, PCI DSS Checks, CIS Benchmarking, Password Discovery, DB Configuration, Patch Checks, Credit Discovery, STIG Benchmarks, Custom Checks, Unused Features, Data Discovery, SQL Injection, Default Password, Buffer Overflows, Weak Passwords, Insecure Code, Shared Passwords, Code Weaknesses, Release/Patch Versions, Encryption Discovery
    • Provide actionable intelligence on how to address risks, including fix scripts whenever possible
    • Ability to discover databases and tables containing sensitive data automatically
    • Conducts port scans to identify database version and patch status
    • Supports major database platforms including Oracle, SQL Server, DB2, MySQL
    • Automates discovery of databases on the network
    • Locates and identifies tables containing sensitive information
    • Presents findings in preconfigured reports for various compliance standards
    • Reports order items by priority

    After reviewing the product’s features, it is obvious that scanning tools, as a class of products, offer a wealth of benefits.  McAfee’s vulnerability scanner provides templates for several industry regulatory requirements.  This allows RDX to scan customers’ environments to quickly demonstrate proof of compliance for PCI DSS, SOX, HIPAA, GLBA, etc.  The recommendations are prioritized based on vulnerability and consist of the vulnerability definition, ranking and instructions to correct.  As stated at the beginning of this article, ranking the vulnerabilities allows the highest priority issues to be quickly addressed.

    Ongoing Vulnerability Scanning

    Once the database vulnerabilities have been identified and addressed, the challenge is to ensure that the internal support team’s future administrative activities do not introduce any additional security vulnerabilities into the environment.  Subsequent scans, run on a scheduled basis, ensure that no new security vulnerabilities are introduced into the environment.

    Leveraging Database Security Best Practices to Reduce Vulnerabilities

    What happens if your organization doesn’t have the funds to purchase a vulnerability scanning product?   There are numerous discussion forums and code depots on the web, so you can find homegrown scripts that may help. However, you are certainly exposing your environment to unidentified security vulnerabilities if you depend solely on these publicly provided solutions.

    A much better alternative is to utilize database administration security best practices, which include database hardening procedures, to reduce the number of potential vulnerabilities.  I’ll cover administrative best practices in my next article.  We’ll take an in-depth look at the benefits of database hardening and its direct impact on reducing vulnerabilities.

    Thanks for reading!

    The post The Database Protection Series- Vulnerability Assessments appeared first on Remote DBA Experts.

    4 To-dos before migrating from Windows Server 2003 [VIDEO]

    Chris Foot - Fri, 2015-02-20 08:24

    Transcript

    Hi, welcome to RDX! Microsoft support for Windows Server 2003 ends in mid-July. So, what do you need to do to prepare?

    First, assess your hardware. How many machines are running 2003? What are the component details of those servers?

    Second, create a performance portfolio of servers operating on 2003. How heavily were these assets utilized? Were any machines used less than 50 percent throughout their lifecycles?

    Third, get rid of the systems your business doesn’t require. Unless you’re planning on expanding, there’s no point in trying to find use for servers that aren’t needed.

    Fourth, develop a list of solutions and services that can be provisioned onto existing servers. The more workloads that can be consolidated onto individual systems, the leaner your business will be.

    Thanks for watching! If you need guidance migrating from Windows Server 2003, check out our OS support page to find out how we can help.

    The post 4 To-dos before migrating from Windows Server 2003 [VIDEO] appeared first on Remote DBA Experts.

    Partner News: Portal Architects Release New Connector to Enable Hybrid Storage

    WebCenter Team - Fri, 2015-02-20 06:00

    Portal Architects, Inc., the creators of SkySync, announced this week the release of a new connector for Oracle WebCenter Content. The new SkySync connector, along with the existing SkySync Oracle Documents Cloud connector, enables organizations to tightly integrate any existing on-premises or cloud content storage systems with Oracle WebCenter Content or Oracle Documents Cloud Service quickly.

    SkySync provides bi-directional, fully-synchronized integration between on-premises Oracle WebCenter Content and Oracle Documents Cloud Service, giving organizations the ability to leverage both an on-premises content management solution and an enterprise-grade document sync and sharing solution in the cloud. That means no content islands or silos, comprehensive security, and the ability to leverage your existing investments in content management infrastructure.

    Here is a quick video from Portal Architects to show how it all comes together. 

    Connections Types in SQLcl

    Barry McGillin - Fri, 2015-02-20 05:07

    We support many ways to connect in SQLcl, including lots from SQL*Plus which we need to support to make sure all your SQL*Plus scripts work exactly the same way using SQLcl as with SQL*Plus.
    I've added several examples below of how to connect with SQLcl.  If there is one you want to see added that is not here, let me know and I'll add it to the list.  So far, we have:
    • EZConnect
    • TWO_TASK
    • TNS_ADMIN
    • LDAP
    At any time when connected you can use the command 'SHOW JDBC' to display what the connection is and how we are connected.  Here are some details of the types above.

    EZCONNECT
    The easy connect naming method eliminates the need for service name lookup in the tnsnames.ora files for TCP/IP environments.  It extends the functionality of the host naming method by enabling clients to connect to a database server with an optional port and service name in addition to the host name of the database:
     $sql barry/oracle@localhost:1521/orcl  
    SQLcl: Release 4.1.0 Beta on Fri Feb 20 10:15:12 2015
    Copyright (c) 1982, 2015, Oracle. All rights reserved.
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    SQL>

    TWO_TASK
    The TWO_TASK (on UNIX) or LOCAL (on Windows) environment variable can be set to a connection identifier. This removes the need to explicitly enter the connection identifier whenever a connection  is made in SQL*Plus or SQL*Plus Instant Client. 
    In SQLcl, we can set this up as a JDBC-style connection like this:

    $export TWO_TASK=localhost:1521/orcl  
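    With that set, the connection identifier can be omitted entirely; a sketch, assuming the same demo account as in the EZConnect example above:

     $sql barry/oracle  

    SQLcl will resolve the host, port and service from TWO_TASK.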




    TNS_ADMIN


    Local Naming resolves a net service name using a tnsnames.ora file stored on the client.  We can set the location of that file with the TNS_ADMIN variable.

     $export TNS_ADMIN=~/admin  

    An example tnsnames.ora entry is shown below.

     $cat tnsnames.ora   
    BLOG =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = tcp)(HOST = localhost)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = orcl)))

    We can then use the entry to connect to the database.

     $sql barry/oracle@BLOG  
    SQLcl: Release 4.1.0 Beta on Fri Feb 20 10:29:14 2015
    Copyright (c) 1982, 2015, Oracle. All rights reserved.
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    SQL>

    LDAP

    We've already written about LDAP connections here.  Here's a quick review.

      set LDAPCON jdbc:oracle:thin:@ldap://scl58261.us.oracle.com:389/#ENTRY#,cn=OracleContext,dc=ldapcdc,dc=lcom   


     $export LDAPCON=jdbc:oracle:thin:@ldap://scl58261.us.oracle.com:389/#ENTRY#,cn=OracleContext,dc=ldapcdc,dc=lcom   
    $sql /nolog
    SQLcl: Release 4.1.0 Beta on Fri Feb 20 10:37:02 2015
    Copyright (c) 1982, 2015, Oracle. All rights reserved.
    SQL> connect barry/oracle@orclservice_test(Emily's Desktop)
    Connected
    SQL>

    If we have more types to add, then they will appear here.  Let us know what you want to see.

    Oracle MAF - assigning value to a backing bean property from javascript

    Communicating between layers in MAF is quite easy with the provided APIs. For calling a Java method from JavaScript we have the invokeMethod js function, which is quite straightforward: ...

    We share our skills to maximize your revenue!
    Categories: DBA Blogs

    Screaming at Each Other

    Scott Spendolini - Thu, 2015-02-19 20:20
    Every time I attend a conference, the Twitter traffic about said conference is obviously higher.  It starts a couple weeks or even months before, builds steadily as the conference approaches, and then hits a crescendo during the conference.  For the past few conferences, I’ve started my sessions by asking who in the audience uses Twitter.  Time and time again, I only get about 10-20% of the participants say that they do.  That means that up to 90% of the participants don’t.  That’s a lot of people.  My informal surveys also indicate a clear generation gap.  Of those that do use Twitter, they tend to be around 40 years old or younger.  There are of course exceptions to this rule, but by and large this is the evidence that I have seen.

    I actually took about 10 minutes before my session today to attempt to find out why most people don’t care about Twitter.  The answer was very clear and consistent: there’s too much crap on there.  And they are correct.  I’d guess that almost 100% of all Tweets are useless or at least irrelevant to an Oracle professional.
    I then took a few minutes to explain the basics of how it worked - hash tags, followers, re-tweets and the like.  Lots of questions and even more misconceptions.  “So does someone own a hash tag?” and “Can I block someone that I don’t care for” were some of the questions that I addressed.  
    After a few more questions, I started to explain how it could benefit them as Oracle professionals.  I showed them that most of the Oracle APEX team had accounts.  I also highlighted some of the Oracle ACEs.  I even showed them the RMOUG hash tag and all of the tweets associated with it.  Light bulbs were starting to turn on.
    But enough talking.  It was time for a demo.  To prove that people are actually listening, I simply tweeted this:

    "Please reply if you follow #orclapex - want to see how many people will in the next 30 mins. Thanks!" — Scott Spendolini (@sspendol) February 19, 2015

    Over the next 30 minutes, I had 10 people reply.  At the end of the session, I went through the replies, and said what I knew about those who did reply: Oracle Product Manager, Oracle Evangelist, Oracle ACE, APEX expert, etc.  The crowd was stunned.  This proved that Twitter as a medium to communicate with Oracle experts was, in fact, real.
    More questions.  “Can I Tweet to my power company if I have an issue with them?” and “Do people use profanity on Twitter?” were some of the others.  People were clearly engaged and interested.  Mission accomplished.
    The bigger issue here is that I strongly feel that the vast majority of the Oracle community is NOT on Twitter.  And that is a problem, because so much energy is spent tweeting about user groups and conferences.  It's like we’re just screaming at each other, and not at those who need to listen.  
    We can fix this.  I encourage everyone who presents at a conference to take 5 minutes at the beginning or end of their session to talk about the benefits of Twitter.  Demonstrate that if you follow Oracle experts, the content that will be displayed is not about Katy Perry, but rather about new features, blog posts or other useful tidbits that can help people with their jobs. Take the time to show them how to sign up, how to search for content, and who to follow.  I think that if we all put forth a bit of effort, we can recruit many of those to join the ranks of Twitter for all the right reasons, and greatly increase the size of the Oracle community that’s connected via this medium.

    Running Carsten Czarski's node-oracledb WebSocket Example

    Christopher Jones - Thu, 2015-02-19 17:40

    My colleague Carsten Czarski recently presented on the node-oracledb driver for Node.js. One of his demos used WebSockets. It was a live demo, not captured in slides. I thought I'd explain how I got it to run in my Oracle Linux 64 bit environment.

    • Download and extract the Node 0.10.36 bundle from here. (At time of writing, the node-oracledb driver requires Node.js 0.10). Add the bin directory to your PATH, for example:

      $ export PATH=/opt/node-v0.10.36-linux-x64/bin:$PATH
      
    • Download and install the 'basic' and 'devel' Instant Client RPMs from OTN:

      # rpm -ivh oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
      # rpm -ivh oracle-instantclient12.1-devel-12.1.0.2.0-1.x86_64.rpm
      
    • Download Carsten's demo code from here and extract it:

      $ cd /home/cjones
      $ mkdir wsdemo
      $ cd wsdemo
      $ mv $HOME/Downloads/nodejs-beispielprogramme.zip .
      $ unzip nodejs-beispielprogramme.zip
      
    • Create a new package.json file:

      {
          "name": "ccwebsockets",
          "version": "1.0.0",
          "description": "Carsten's WebSocket Demo application using node-oracledb 0.3.1.",
          "scripts": {
      	"start": "node 05-websockets.js"
          },
          "dependencies": {
      	"oracledb": "oracle/node-oracledb#619e9a8fa6625a2c5ca3e1a2ba10dbdaab5ae900",
      	"websocket": "^1.0",
      	"express": "^4.11"
          }
      }
      
    • Edit 05-websockets.js and change the database credentials at line 111. The schema needs to have the EMP table.

      oracledb.createPool(
        {
          user          : "scott",
          password      : "tiger",
          connectString : "localhost/pdborcl",
          poolMin       : 5,
          poolMax       : 10
        },
      
    • Also in 05-websockets.js, change the path name at line 65 to your current directory name:

      filename = path.join("/home/cjones/wsdemo", uri);
      
    • Use npm to automatically install the node-oracledb driver and the "websocket" and "express" dependencies listed in package.json:

      $ npm install
      
    • To run the demo, use the package.json script "start" target to load 05-websockets.js:

      $ npm start
      

      The server will start:

      > ccwebsockets@1.0.0 start /home/cjones/wsdemo
      > node 05-websockets.js
      
      Websocket Control Server listening at http://0.0.0.0:9000
      Database connection pool established
      
    • Open a couple of browser windows to http://127.0.0.1:9000/html/websocket.html. These are the clients listening for messages.

      The output is the starting point of the demo. Let's send a message to those clients.

    • Open a third browser window for the URL http://127.0.0.1:9000/update/CLARK. The two listening windows will be updated with the "message" containing the query result payload. My screenshot shows this, and also has evidence that I had previously visited http://127.0.0.1:9000/update/KING :

    You might have noticed the screen shots were made on OS X. If you are not on Linux, refer to INSTALL to see how to install Node.js and node-oracledb. The package.json file I created will download node-oracledb 0.3.1 so you don't need to manually get it from GitHub. You will have to set OCI_LIB_DIR and OCI_INC_DIR during installation, and then set LD_LIBRARY_PATH, DYLD_LIBRARY_PATH or PATH when you want to run node.
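    For reference, a sketch of the environment settings for the Instant Client RPMs used above (the paths assume the 12.1 client64 RPM layout, so adjust them for your installation):

      $ export OCI_LIB_DIR=/usr/lib/oracle/12.1/client64/lib
      $ export OCI_INC_DIR=/usr/include/oracle/12.1/client64
      $ npm install
      $ export LD_LIBRARY_PATH=/usr/lib/oracle/12.1/client64/lib
      $ npm start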

    You can follow Carsten at @cczarski.

    Code Insight on SQLcl

    Barry McGillin - Thu, 2015-02-19 17:29
    Here's a little preview of the code insight we have in SQLcl.  These changes are part of EA2, which is coming out very soon.  This also shows the buffer and cursor management which was introduced in SQLcl.


    This allows you to move around the buffer easily and add and change text as you would in a normal text editor, not a console window like this.

    We're also adding hotkeys to run the buffer from anywhere or to jump out of the buffer to do something else without losing the contents of the buffer.

    Stay tuned for this soon.
    B