
Feed aggregator

Auditing Enhancements (Audit Policies and Unified Audit Trail) in Oracle Database 12c

Tim Hall - Mon, 2015-06-29 00:12

A little over a year ago I was at the BGOUG Spring Conference and I watched a session by Maja Veselica about auditing in Oracle Database 12c. At the time I noted that I really needed to take a look at this new functionality, as it was quite different to what had come before. Fast forward a year and I’ve finally got around to doing just that. :)

I’ve tried to keep the article quite light and fluffy. The Oracle documentation on this subject is really pretty good, so you should definitely invest some time reading it, but if you need a quick overview to get you started, my article might help. :)
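
If you just want a flavour of the new syntax before reading further, here is a minimal sketch of a unified audit policy (the policy name is made up, and this is an illustration rather than an excerpt from the article):

-- Create and enable a policy that audits failed logons only.
CREATE AUDIT POLICY demo_logon_policy ACTIONS LOGON;
AUDIT POLICY demo_logon_policy WHENEVER NOT SUCCESSFUL;

-- Audited events then appear in the unified audit trail.
SELECT event_timestamp, dbusername, action_name
FROM   unified_audit_trail
ORDER  BY event_timestamp;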

My 12c learning experience continues…

Cheers

Tim…


Prior Learning Assessments Done Right

Michael Feldstein - Sun, 2015-06-28 21:53

By Michael Feldstein

This post has nothing to do with educational technology but everything to do with the kind of humane and truly personal education that we should be talking about when we throw around phrases like “personalized education.” Prior Learning Assessments (PLAs) go hand-in-glove with the trendy Competency-Based Education (CBE). The basic idea is that you test students on what they have learned in their own lives and give them credit toward their degrees based on what they already know. But it is often executed in a fairly mechanical way. Students are tested against the precise curriculum or competencies that a particular school has chosen for a particular class. Not too long ago, I heard somebody say, “We don’t need more college-ready students; we need more student-ready colleges.” In a logical and just world, we would start with what the student knows, rather than with what one professor or group of professors decided one semester would be “the curriculum,” and we would give the student credit for whatever college-level knowledge she has.

It turns out that’s exactly what Empire State College (ESC) does. When we visited the college for an e-Literate TV case study, we learned quite a bit about this program and, in particular, about their PLA program for women of color.

But before we get into that, it’s worth backing up and looking at the larger context of ESC as an institution. Founded in 1971, the school was focused from the very beginning on “personalized learning”—but personalized in a sense that liberal intellectuals from the 1960s and 1970s would recognize and celebrate. Here’s Alan Mandell, who was one of the pioneering members of the faculty at ESC, on why the school has “mentors” rather than “professors”:

Alan Mandell: Every single person is called a mentor.

It’s valuable because of an assumption that is pretty much a kind of critique of the hierarchical model of teaching and learning that was the norm and remains the norm where there is a very, very clear sense of a professor professing to a student who is kind of taking in what one has to say.

Part of the idea of Empire State, and other institutions, more and more, is that there was something radically wrong with that. A, that students had something to teach us, as faculty, and that faculty had to learn to engage students in a more meaningful way to respond to their personal, academic, professional interests. It was part of the time. It was a notion of a kind of equality.

This was really interesting to me actually because I came here, and I was 25 years old. Every single student was older than I was, so the idea of learning from somebody else was actually not very difficult at all. It was just taken for granted. People would come with long professional lives, doing really interesting things, and I was a graduate student.

I feel, after many years, that this is still very much the case—that this is a more equal situation of faculty serving as guides to students who bring in much to the teaching and learning situation.

Unlike some of the recent adoptions of PLA, which are tied to CBE and the idea of getting students through their degree programs quickly, Empire State College approaches prior learning assessment in very much the spirit that Alan describes above. Here’s Associate Dean Cathy Leaker talking about their approach:

Cathy Leaker: What makes Empire State College unique, even in the prior learning assessment field, is that many institutions that do prior learning assessment do what’s called a “course match.” In other words, a student would have to demonstrate—for example, if they wanted to claim credit for Introduction to Psychology, they would look at the learning objectives of the Introduction to Psychology course, and they would match their learning to that. We are much more open-ended, and as an institution, we really believe that learning happens everywhere, all the time. So, we try to look at learning organically, and we don’t assume that we already know exactly what might be required.

One of my colleagues, Elana Michelson, works on prior learning assessment. She started working in South Africa where they were—there it’s called “recognition for prior learning.” And she gives the example of some of the people who were involved in bringing down Apartheid, and how they, sort of as an institution working with the government, thought it might be ridiculous to ask those students to demonstrate problem solving skills, right? How the institution might look at problem-solving skills, and then if there was a strict match, they would say, “Well, wait a second. You don’t have it,” and yet, they’re activists that brought down the government and changed the world.

Those are some examples of why we really think we need to look at learning organically.

Students like Melinda come to us, talk about their learning, and then we try to help them identify it, come up with a name for it, and determine an amount of credit before submitting it for evaluation.

This is not personalized in the sense of trying to figure out which institution-defined competencies you can check off on your way to an institution-defined collection of competencies that they call a “degree.” Rather, it’s an effort to have credentialed experts look at what you’ve done and what you know to find existing strengths that deserve to be recognized and credentialed. The Apartheid example is a particularly great one because it shows that traditional academic institutions may be poorly equipped to recognize and certify real-world demonstrations of competencies, particularly among people who come from disadvantaged or “marked” backgrounds. Here’s ESC faculty member Frances Boyce talking about why the school recognized a need to develop a particular PLA program for women of color:

Frances Boyce: Our project, Women of Color and Prior Learning Assessment, is based on a 2010 study done by Rebecca Klein-Collins and Richard Olson, “Fueling the Race to Success,” which found that students who do prior learning assessments are two and a half times more likely to graduate. When you start to unpack that data and you look at the graduation rates for students of color, for African American students the graduation rate increases fourfold. For Latino students it increases eightfold. Then, when you look at it in terms of gender, a woman who gets one to six credits in prior learning assessment will graduate more quickly than her male counterpart given the same amount of credit.

That seemed very important to us, and we decided, “Well, let’s see what we could do to improve the uptake rate for women of color.” So, we designed four workshops to help women of color, not only identify their learning—the value of their learning—but identify what they bring with them to the institution.

What’s going on here? Why is PLA more impactful than average for women and people of color? In addition to the fact that our institutions are not always prepared to recognize real-world knowledge and skills, as in the Apartheid example, people in non-privileged positions in our society are tacitly taught that college is not “for them.” That they don’t have what it takes to succeed there. By recognizing that they have, in fact, already acquired college-level skills and knowledge, PLA helps them get past the insults to their self-image and dignity and helps them to envision themselves as successful college graduates. Listen to ESC student Melinda Wills-Stallings’ story:

Michael Feldstein: I’m wondering if you can tell me, do you remember a particular moment, early on, when the lightbulb went off and you said to yourself, “Oh, that thing that’s part of my life counts”?

Melinda Wills-Stallings: I think when I was talking to my sons about the importance of their college education and how they couldn’t be successful without it and them saying to me, “But, Mom, you are successful. You run a school. You run a business.” To be told on days that I wasn’t there, the business wasn’t running properly or to be told by parents, “Oh, my, God. We’re so glad you’re back because we couldn’t get a bill, we couldn’t get a statement,” or, “No one knew how to get the payroll done.”

That’s when I knew, OK, but being told by an employer who said I wasn’t needed and I wasn’t relied on, I came to realize that it flipped on me. And I realized that’s what I had been told to keep me in my place, to keep me from aspiring to do the things that I knew that I was doing or I could do.

The lightbulb for me was when we were doing the interviews and Women of Color PLA, and Frances said to me, “That’s your navigational capital.” We would do these roundtables where you would interview with one mentor, and then you would go to another table. Then I went to another table, and she said, “Well, what do you hope to do with your college degree?” And I said, “I hope to pay it forward: to go continue doing what I love to do, but to come back to other women with like circumstances and inspire them and encourage them and support them to also getting their college degrees and always to be better today than I was yesterday, so that’s your aspirational capital.” And I went, “Oh, OK.” So, I have aspirational capital also, and then go to the next table and then I was like, I couldn’t wait to get to the next table because every table I went to, I walked away with one or two prior learning assessments.

And then to go home and to be able to put it into four- or five-page papers to submit that essay and to have it recognized as learning.

I was scared an awful lot of times from coming back to school because I felt, after I graduated high school and started college and decided I wanted to get married and have a family, I had missed the window to come back and get my college education. The light bulb was, “It’s never too late,” and that’s what I tell women who ask me, and I talk to them all the time about our school and our program. Like, “It’s never too late. You can always come back and get it done.”

Goals and dreams don’t have caps on them even though where I was, my employer had put a cap on where I could go on my salary and my position. Your goals and dreams don’t have a cap on it, so I think that was the light bulb for me—that it wasn’t too late.

It’s impossible to hear Melinda speak about her journey and not feel inspired. She built up the courage to walk into the doors of the college, despite being told repeatedly by her employer that she was not worthy. The PLA process quickly affirmed for her that she had done the right thing. At the same time, I recognize that traditionalists may feel uncomfortable with all this talk of “navigational capital” and “aspirational capital” and so on. Is there a danger of giving away degrees like candy and thus devaluing them? First, I don’t think there’s anything wrong with giving a person a degree certification if they have become genuine experts in a college-appropriate subject through their life experience. In some ways, we are all the Scarecrow, the Tin Man, or the Cowardly Lion, waiting for some wizard to magically convey upon us a symbol that confers legitimacy upon our hard-won skills and attributes and thus somehow makes them more real. But also, a funny thing happens when you treat a formal education as a tool for helping an individual reach her goals rather than a set of boxes that must be checked. Students start thinking about the work that education entails as something that is integral to them achieving those goals rather than a set of obstacles they have to get around in order to get the piece of paper that is the “real” value of college. Listen to ESC student Jessi Colón, a professional dancer who chose not to get all the credits she could have gotten for her dance knowledge because she wanted to focus on what she needed to learn for her next career working in animal welfare:

Jessi Colón: It was a little bit tricky, especially because I had really come here with the intention of maximizing and capitalizing on all this experience that I had. Part of the prior learning assessment and degree planning process is looking at other schools that may have somewhat relevant programs and trying to match what your learning is to those. As I was looking at other programs outside of New York or at other small, rural schools that do these little animal programs, I found that there were a lot of classes that I really wanted to take.

One of the really amazing things about Empire State is that they can also give you individualized courses, and I did a lot of those. So, once I saw these at other schools, I was like, “Man, I really want to take a class in animal-assisted therapy, and would I like to really, really indulge myself and do that or should I write another essay on jazz dance composition?” I knew that one would be more of a walk in the park than the other, but I was really excited about my degree and having this really personal degree allowed me to get excited about it. So, it made sense, though hard to let go of that prior learning in order to opt for the classes.

I could’ve written 20 different dance essays, but I wanted to really take a lot of classes. So, I filled that with taking more classes relevant to my degree, and then ended up only writing, I think, one or two dance-relevant essays.

It turns out that if you start from the assumption that the education they are coming for—not the certification, but the learning process itself—can and should have intrinsic value to them as tools toward pursuing their own ambitions, then people step up. They aspire to be more. They take on the work. If the education is designed to help them by recognizing how far they have come before they walk in the door and focusing on what they need to learn in order to do whatever it is they aspire to do after they leave, then students often come to see that gaming the system is just cheating themselves.

There are many ways to make schooling more personal but, in my opinion, what we see here is one of the deepest and most profound. This is what a student-ready college looks like. And in order to achieve it, there must be an institutional commitment to it that precedes the adoption of any educational technology. The software is just an enabler. If a college community collectively commits to true personalization, then technology can help with that. If the community does not make such a commitment, then “personalized learning” software might help achieve other educational ends, but it will not personalize education in the sense that we see here.

I’m going to write a follow-up post about how ESC is using that personalized learning software in their context, but you don’t have to wait to find out; you can just watch the second episode of the case study. While you’re at it, you should go back and watch the full ETV episode from which the above clips were excerpted. In addition to watching more great interview content, you can find a bunch of great related links to content that will let you dig deeper into many of the topics covered in the discussions.

The post Prior Learning Assessments Done Right appeared first on e-Literate.

Video Tutorial: XPLAN_ASH Active Session History - Part 6

Randolf Geist - Sun, 2015-06-28 15:30
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality, continuing the actual walk-through of the script output.

More parts to follow.

Oracle Log Writer and Write-Ahead-Logging

Yann Neuhaus - Sun, 2015-06-28 11:29

I posted a tweet with a link to a very old document - 20 years old - about 'internals of recovery'. It's a gem. All the complexity of the ACID mechanisms of Oracle is explained in a very simple way. It was written for Oracle 7.2 but it's incredible to see how much the basics are still relevant today. Of course, there is a reason for that: the mechanisms of recovery are critical and must be stable. There is one more reason in my opinion: the Oracle RDBMS software was very well designed, so the basic structures designed 20 years ago are still able to cope with new features, and to scale with very large databases, through the versions and the years.

It's 20 years old but it's still the best written document I've read about how Oracle works http://t.co/4CAI4Q5MIm http://t.co/mmgA50JzMQ

— Franck Pachot (@FranckPachot) June 26, 2015

If you check the conversation that followed, a doubt has been raised about the following sentence:

According to write-ahead log protocol, before DBWR can write out a cache buffer containing a modified datablock, LGWR must write out the redo log buffer containing redo records describing changes to that datablock.

There are two ways to clear up that kind of doubt: read and test. And we need both of them because:

  • documentation may have bugs
  • software may have bugs

So you can be sure about a behaviour only when both documentation and testing validate your assumption.

Documentation

The first documentation I find about it is another gem describing how Oracle works: Jonathan Lewis 'Oracle Core (Apress)'. And it's clearly stated that:

One of the most important features of the Oracle code is that the database writer will not write a changed block to disk before the log writer has written the redo that describes how the block was changed. This write-ahead logging strategy is critical to the whole recovery mechanism.

Then there is of course the Oracle Documentation:

Before DBW can write a dirty buffer, the database must write to disk the redo records associated with changes to the buffer (the write-ahead protocol). If DBW discovers that some redo records have not been written, it signals LGWR to write the records to disk, and waits for LGWR to complete before writing the data buffers to disk.

Test case

Ok, that should be enough. But I want to do a simple testcase in order to see if anything has changed in the latest version (12.1.0.2). My idea is to check two things:

  • whether a checkpoint requires some work to be done by the log writer
  • whether a change is written to the redo log after a checkpoint, without waiting for the usual 3-second log writer write

I create a table:

19:07:21 SQL> create table DEMO as select '--VAL--1--'||to_char(current_timestamp,'hh24missffff') val from dual;

Table created.

19:07:21 SQL> select * from DEMO;

VAL
----------------------------------
--VAL--1--190721367902367902

 

I start with a new logfile:

19:07:21 SQL> alter system switch logfile;
System altered.

And I retrieve the log writer process id for future use:

19:07:21 SQL> column spid new_value pid
19:07:21 SQL> select spid,pname from v$process where pname='LGWR';

SPID PNAME
------------------------ -----
12402 LGWR

19:07:21 SQL> host ps -fp &pid
UID PID PPID C STIME TTY TIME CMD
oracle 12402 1 0 Jun25 ? 00:00:46 ora_lgwr_DEMO14

update and commit

Here is a scenario where I update and commit:

19:07:21 SQL> update DEMO set val='--VAL--2--'||to_char(current_timestamp,'hh24missffff');

1 row updated.

19:07:21 SQL> select * from DEMO;

VAL
----------------------------------
--VAL--2--190721443102443102

19:07:21 SQL> commit;

Commit complete.

I want to see if a checkpoint has something to wait from the log writer, so I freeze the log writer:

19:07:21 SQL> host kill -sigstop &pid

and I checkpoint:

19:07:21 SQL> alter system checkpoint;

System altered.

No problem. The checkpoint did not require anything from the log writer in that case. Note that the redo related to the dirty buffers had already been written to disk at commit (and the log writer was running at that time).
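
As a side check (not part of the original test), a rough way to see that the commit already forced a redo write is to look at the redo statistics before and after the commit:

-- both counters should increase across the commit
select name, value
from   v$sysstat
where  name in ('redo synch writes', 'redo blocks written');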

I un-freeze it for the next test:

19:07:21 SQL> host kill -sigcont &pid

update without commit

Now I'm doing the same but without commit. My goal is to see if uncommitted dirty blocks need their redo to be written to disk.

19:07:51 SQL> select * from DEMO;

VAL
----------------------------------
--VAL--2--190721443102443102

19:07:51 SQL> host kill -sigstop &pid

19:07:51 SQL> update DEMO set val='--VAL--3--'||to_char(current_timestamp,'hh24missffff');

1 row updated.

19:07:51 SQL> alter system checkpoint;

Here it hangs. Look at the wait events:

[screenshot: wait events]

My checkpoint is waiting on 'rdbms ipc reply' until the log writer is woken up. 


$ kill -sigcont 12402

System altered.

19:09:37 SQL> select * from DEMO;

VAL
----------------------------------
--VAL--3--190751477395477395

The checkpoint is done.

 

Note that if I run the same but wait 3 seconds after the update (because I know that the log writer writes redo at least every 3 seconds even when not asked to do it):

21:33:35 SQL> update DEMO set val='--VAL--3--'||to_char(current_timestamp,'hh24missffff');

1 row updated.

21:33:35 SQL> host sleep 3

21:33:38 SQL> host kill -sigstop &pid

21:33:38 SQL> alter system checkpoint;

System altered.

21:33:38 SQL>

The checkpoint is not waiting because all the redo that covers the dirty buffers is already written.

I've also checked that immediately after the checkpoint (without stopping the log writer here) the uncommitted change is written to the redo log files:

21:56:38 SQL> select group#,v$log.status,member from v$log join v$logfile using(group#) where v$log.status='CURRENT';

GROUP# STATUS MEMBER
---------- ---------------- ----------------------------------------
2 CURRENT /u01/DEMO/oradata/DEMO14/redo02.log


21:56:38 SQL> update DEMO set val='--VAL--2--'||to_char(current_timestamp,'hh24missffff');

1 row updated.

21:56:38 SQL> select * from DEMO;

VAL
----------------------------------
--VAL--2--215638557183557183

21:56:38 SQL> alter system checkpoint;

System altered.

21:56:38 SQL> host strings &redo | grep "VAL--"
--VAL--1--215638376899376899
--VAL--2--2156385571

A simple grep reveals that redo has been written (I've no other activity in the database - so no concurrent commits here).

Conclusion

Even if some mechanisms have been improved for performance (see Jonathan Lewis's book for them), the fundamentals have not changed.

I've said that there are two ways to validate an assumption: documentation and test.
But there is a third one: understanding.

When you think about it, if you write uncommitted changes to the files, then you must be able to roll them back in case of recovery. Where is the rollback information? In the undo blocks. Are the undo blocks written to disk when the data blocks are written to disk? You don't know. Then where do you find the undo information in case of recovery? The redo generated by the transaction contains change vectors for data blocks and for undo blocks. So if you are sure that all redo is written before the block containing uncommitted changes, then you are sure to be able to roll back those uncommitted changes.

Note that this occurs only for modifications through the buffer cache. Direct-path inserts do not need to be covered by redo to be undone. It's the change of the high water mark that will be undone, and that change is done in the buffer cache, protected by redo.

QS15: Measurement with Meaning

Oracle AppsLab - Sun, 2015-06-28 10:02

Walking into something as a newcomer is always an adventure of reality interacting with expectations. Though I wasn’t quite sure what to expect at the Quantified Self conference, it wasn’t what I expected. But in a good way.


Tweet-painting robot at QS15

The conference was structured around three main activities: talks given on the main stage, breakout sessions, which took place at different smaller areas during the talks, and break times, where one might check out the vendors, grab a snack, or chat with fellow attendees.

The talks, about ten minutes each, were mostly about the speaker’s successes in changing some aspect of their life via quantifying and analyzing it. This is partly what I wasn’t expecting—the goal-focused and very positive nature of (most) everyone’s projects.

True, some of the presenters might be tallied on the obsessive side of the spectrum, but by and large, it was all about improving your life, and not recording everything as a method of self-preservation.

On this last point, one presenter even provided this quote from Nabokov, which generated a touch of controversy: “the collecting of daily details … is always a poor method of self-preservation.”

One important theme I saw, however, is the role of measuring itself—that the very act of quantifying your behaviors, whether it’s diet, exercise, TV watching, or your productivity, can change your behavior for the better.

Granted, there can also be profound personal insights from analyzing the data, especially when combining multiple sources, but it’s possible some of these benefits come from simply tracking. Especially when it’s done manually, which takes a great deal of persistence, with many people petering out after a few weeks at the most.

This presents an interesting question about technology’s increasing proficiency at passive tracking, and the aim to provide insights automatically. For instance, the Jawbone UP platform’s Smart Coach is supposed to look at your exercise and activity data with your sleep data and give you advice about how to get better sleep.

If someone had tracked this manually, and done the analysis themselves, they may not only be a lot more familiar with the facts about their own sleep and exercise, but any insights derived might be more likely to be absorbed and translate to genuine change.

When insights are automatically provided will they lead to just as much adoption?

Probably not, but they could reach a lot more people who may not be able to keep up with measuring. So it’s probably still a good thing in the end.

The other important theme was something that I’ve also been encountering in other areas of my work—the importance of good questions.

For most of the QS projects, this took the form of achieving a personal goal, but sometimes it was simply a specific inquiry into a realm of one’s life. Just looking at data can be interesting, but without a good question motivating an analysis, it’s often not very useful.

In the worst case, you can find spurious connections and correlations within a large set of data that may get you off in the wrong direction.

And while at the beginning of the conference it was made clear that QS15 was not a tech conference, there was plenty of cool technology in the main hall to check out and discuss.

There are too many to cover in much detail, but here are a few that intrigued me:

  • Spire, a breath tracking device that says it can measure focus by analyzing your breathing pattern. If someone is interested in examining their productivity, this could be a promising device to check out. Also, it can let you know when you need a deep breath, which has various physiological and emotional benefits.
  • Faurecia manufactures seats for automobiles, and they were showing off a prototype that uses piezoelectric bands within the chair itself to measure heart rate and breathing patterns. This is great because it can do this through your clothing, detect when you’re falling asleep, and possibly institute some countermeasures. The data could also sync up with your phone, say through Apple’s Healthkit, if you want to add it to your logs.
  • Oura is an activity and sleep tracker that uses a ring form factor, which for some people may be easier to sleep with than a wrist band. Their focus is on sleep and measuring how restorative your rest is. I look forward to seeing how this one develops.

The conference had a lot to offer—some inspiration, some cool technologies, surprisingly good lunches, and quite a bit to think about.

Retrogaming on an Arduino/OLED "console"

Paul Gallagher - Sun, 2015-06-28 03:24
(blogarhythm ~ invaders must die - The Prodigy)
Tiny 128x64 monochrome OLED screens are cheap and easy to come by, and quite popular for adding visual display to a microcontroller project.

My first experiments in driving them with raw SPI commands had me feeling distinctly old school, as the last time I remember programming a bitmap screen display was probably about 30 years ago!

So while in a retro mood, what better than to attempt an arcade classic? At first I wasn't sure it was going to be possible to make a playable game due to the limited Arduino memory and relative slow screen communication protocol.

But after a few tweaks of the low-level SPI implementation, I surprised myself at how well it can run. There were even enough clock cycles left to throw in a soundtrack and effects.

Here's a quick video on YouTube of the latest version. ArdWinVaders! .. in full lo-rez monochrome glory, packed into 14kb and speeding along at 8MHz.



Full source and schematics are in the LittleArduinoProjects collection on Github.

OAM Training (4th July) : EBS & AD Integration : 11gR2 PS3 Launch

Online Apps DBA - Sun, 2015-06-28 01:43

We announced OAM Training on the 4th of July (only 3 seats left) and since our announcement a lot of you have asked what integrations we are going to cover. Looking at the kind of queries we received, I thought it was worth posting here. We are going to cover:

  • Oracle E-Business Suite (R12 – 12.1) integration with Oracle Access Manager
  • Microsoft Active Directory (AD)/Windows Native Authentication (WNA) integration with Oracle Access Manager (OAM) for Zero Single Sign-On.

Register here for Oracle Access Manager Training (100 USD off if you register before 1st July, last 3 seats before we close registration)

 

Oracle announced OAM 11gR2 PS3 in May 2015; register here for a Technical Update on OAM 11gR2 PS3.


The post OAM Training (4th July) : EBS & AD Integration : 11gR2 PS3 Launch appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Oracle Database Cloud Service - My first trial

Yann Neuhaus - Sat, 2015-06-27 12:31

The cloud has been announced, and I want to try it.

From the cloud.oracle.com/database website, there is a trial only for the 'Database Schema Service', so I asked for it, received an e-mail with connection info, and it works:

[screenshot]

 

Good. The password was temporary, so I have to change it, and set answers to 3 of the 4 security questions in case I forget my password.

Java error: my user does not exist:

[screenshot]

Ok, I was too quick after receiving the e-mail... Let's wait 5 minutes and come back:

 

[screenshot]

 

What? Same as the old password?

Let's try another one:

 

[screenshot]

 

Ok. Now I understand. When my user did not exist yet, the password had been changed anyway... Good thing I remember it.

I put the first password I tried as the old password:

 

[screenshot]

Ok. But I know my previous password, so let's keep it.

I come back to the URL and connect:

 

[screenshot]

 

It seems that I don't remember it... so let's go through the 'Forgot Password' screen. I entered my mother's maiden name, my pet name, my city of birth... well, all that top-secret information that everybody has on their Facebook front page ;)

And enter a new password:

 

[screenshot]

Bad luck again...

Going back to the e-mail, I see something strange: there are two spaces in front of my username:

 

[screenshot]

Ok, I try everything: my username with and without the spaces in front, the 3 passwords I tried to change previously... same error.

Let's start from the beginning. Click on the first URL in the mail.

Bingo. I don't know how, but I'm logged in. Here is the first screen of my first database in the cloud:

[screenshot]

 

Wait a minute... is this new? An online APEX workspace that has been available for years is now 'The' Oracle Cloud Database Schema Service, available as a 30-day trial?

Without any hesitation, I'll do my first 'Hello World' in a database in the Cloud:

[screenshot]

I hope we will be able to get a trial account for the 'real' Database as a Service in the Cloud. I've always loved being able to download and try Oracle products for free. I don't think we are ready to pay to test it. Prices here.

More About Me at QS15

Oracle AppsLab - Sat, 2015-06-27 10:30

I always thought of myself as a control freak, Type A, self-aware (flaws and all) person but then I attended the Quantified Self Conference last week in San Francisco.


Image from QS15

There is so much more one can do to learn about one’s self. The possibilities are endless on what I can quantify (measure about myself) and there are so many people capturing many surprising things.

Quantified Self, if you haven’t heard, is “a collaboration of users and tool makers who share an interest in self knowledge through self-tracking,” as described by Gary Wolf and Kevin Kelly. I’ve also been an admirer of Nicholas Felton, who has beautiful visualizations of his data.

The two-day conference consisted of morning and afternoon plenary sessions, and in between, the day was filled with ten-minute talks on the main stage (where practitioners share their own QS work, tools, and personal data), with breakout sessions for group discussions and office hours for hands-on help happening concurrently. There were plenty of topics for a newbie QS-er like me or a longtime enthusiast.

My conference experience in numbers:

Videos and presentations should be posted in the coming weeks but until then, here is a summary from Gary Wolf.

Beyond the numbers, I was surprised, inspired and learned a few lessons. It is amazing what quantified self-ers are capturing, the extent and effort they put in, and their life-changing impacts. There is plenty of fitness, diet, and health tracking happening, but others are tracking things such as:

The list goes on but this sampling gives you a sense of the range of self tracking.

While lots of recording was being done with commonly available sensors, devices, and apps, there was also a lot of data being recorded manually through pen-and-paper journals and spreadsheets.

There are endless measures (and many low and high tech tools) but recording is not the end goal. The measures help inform our goals and the actions to achieve those goals. There were several talks about the importance of self-tracking to understand your numbers, your similarities and your differences to population normals.

In “Beyond Normal: A Conversation,” Dawn Nafus (@dawnnafus) and Anne Wright (@annerwright) discussed the importance of self-tracking to gain awareness on whether the standards, baselines, and conventions apply to you. Population normals are a good starting point but they shouldn’t define your target as you are unique and the normals may not be right for you (#resistemplotment).


Image from QS15

My takeaway: don’t worry about getting the perfect device or tool. Start with finding a goal or change that is important to you. Record, measure, and analyze – glean insights that move you along to being your best self. It is not about the Q but the S.

Native Network Encryption and SSL/TLS are not part of the Advanced Security Option

Tim Hall - Sat, 2015-06-27 06:09

I had a little surprise the other day. I was asked to set up an SSL/TLS connection to a database and I refused, saying it would break our license agreement as we don’t have the Advanced Security Option. I opened the 11gR2 licensing manual to include a link in my email response and found this.

“Network encryption (native network encryption and SSL/TLS) and strong authentication services (Kerberos, PKI, and RADIUS) are no longer part of Oracle Advanced Security and are available in all licensed editions of all supported releases of the Oracle database.”

I checked the 11gR1 and 10gR2 docs also. Sure enough, it was removed from the Advanced Security Option from 10gR2 onward (see the update below). Check out the 10g licensing doc here, specifically the last paragraph in that linked section.

The documentation on this configuration is split among a number of manuals, most of which still say it is part of the Advanced Security Option. That made me a little nervous, so I raised an SR with Oracle to confirm the licensing situation and file bug reports against the docs to correct the inconsistency. Their response was it is definitely free and the docs are being amended to bring them in line with the licensing manual. Happy days! :)

Lessons learned here are:

  • Skim through the licensing manual for every new release to see what bits are now free.
  • Don’t trust the technical docs for licensing information. Always cross check with the licensing manual and assume that’s got the correct information. If in doubt, raise an SR to check.

As far as the configuration is concerned, I had never written about this functionality before, so I thought I should do backfill articles on it.

The documentation for TCP/IP with SSL/TLS is rather convoluted, so you could be forgiven for thinking it was rocket science. Actually, it’s pretty simple to set up. It was only after I finished doing it I found a reference to the following MOS note.

It would have saved me a lot of bloody time if the documentation included this. I would never have bothered to write the article in the first place!
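
For comparison, the native network encryption side needs very little. A minimal server-side sqlnet.ora sketch might look like this (the values are illustrative, not a recommendation):

# Server side: insist on encrypted and checksummed connections.
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = required
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)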

For a lot of people, encrypting database connections is probably not that big a deal. If your databases and application servers are sitting behind a firewall in a “safe” part of your network, then why bother?

If there are direct database connections crossing network zones, that’s a different matter! Did anyone mention “cloud”? If you need to connect to your cloud databases from application servers or client tools sitting on-premises, I guess encrypted database connections are pretty high up your list of requirements, or at least they should be. Good job it is free now. :)

It seems I’m not the only person behind the times on this licensing change. The Amazon AWS RDS for Oracle documentation has made the same mistake. I’ve written to them to ask them to correct this page also. :)

Cheers

Tim…

Update: Simon, Jacco, Franck and Patrick all pointed out this licensing change was due to this security exploit. It was made public during 11.2, but the license change was made retrospectively back to 10.2. I don’t feel so bad about it now. :)

Update2: I’ve added a link to the Native Network Encryption stuff, based on the comment by Markus.


Apple Watch Impressions with Jeremy Ashley: Time for the Best User Experience in the Enterprise Cloud

Usable Apps - Sat, 2015-06-27 03:40

In part two of a three-part series, Ultan O'Broin (@usableapps) talks with Jeremy Ashley (@jrwashley) about his impressions of the Apple Watch and other insights during a day in the life of a Group Vice President in Oracle. Read part one.

"Perhaps it's an English thing,” says Oracle Applications User Experience Group Vice President, Jeremy Ashley, "but just being able to keep eye contact with someone when we're talking means I can pay closer attention to people."


Jeremy Ashley: Inspiring user experience leadership of strategy, science, and storytelling.

"A glance at my Apple Watch and I know immediately if something is important. I can decide if I need to respond or it can wait. I don't have to pull out my smartphone for that."

This story of combining the personal convenience of wearable technology with empathy for people is typical of the man who sets the vision for the Oracle Applications Cloud user experience (UX).

It’s just one of Jeremy's impressions of the iWatch, as it's known. Now that he's used the Apple Watch for a while since we first chatted, I wanted to find out about his experience and what it all means for enterprise UX.

iWatch iMpressions

"I just love the sheer build quality of the watch; so utterly Apple," Jeremy begins. His industrial design background surfaces, bringing together traditions of functionality, classic craftsmanship, and exuberance for innovation: "Sweet. I can even use it to tell the time!"

A bloke with an eye for pixel-level detail, Jeremy has explored how to get the best from the Apple Watch, right down to the exact precision needed for the force touch action on the built-in Maps app. He's crafted a mix of apps and favorite glances to suit his world, such as for battery life, his calendar, and stocks. He admires the simplicity and visualizations of the built-in Activity app too, swiping the watch face to see his latest progress as we talk in his office full of what's hot in technology and a selection of clocks and traditional woodworking tools.


Microtransactions at a glance from the wrist delight the wearer and make life—and work—more convenient.

"The watch really shows how the idea of context automates the routine and looks after the little things that make life easier and delight you in simple ways, such as not having to swipe a credit card to pay for coffee."

In the enterprise world, these kinds of little experiences, or "microtransactions" as Jeremy calls them, translate to wearer convenience when working. For example:

  • Automatically recording the time spent and location of a field service job
  • Accepting terms and conditions when attending a confidential demo meeting as you check in at reception
  • Adding data to the cloud, such as updating a win probability as you walk away from a sales engagement
  • Seeing at a glance that a supply chain fulfillment is complete

Oracle Glance and the Enterprise

Oracle's concept of glance is device agnostic and reflects a key UX strategy—mobility—or how we work flexibly today: across devices, pivoting around secure enterprise data in the Oracle Cloud.

"Smartwatches are like mobile dialog boxes," Jeremy explains. "They start that user conversation with the cloud in simple, 'in-the-moment,' deeply contextual ways. Glance and the cloud together automatically detect and deliver the who, what, and where of microtransactions, yet because it's all on a watch, the experience remains personal and familiar. That really resonates with wearers."


Jeremy Ashley: The smartwatch is a personal and familiar paradigm that also resonates in the enterprise.

Jeremy shared some thoughts on where such innovation is heading:

"The Apple Watch won't replace the smartphone, for now. We still need that identifier device—a kind of personal beacon or chip, if you like—that lets us make an elegant 'handoff' from a glance on our wrist to a scan for denser levels of information or to a commit to doing less frequent tasks on other devices. The watch just isn't designed for all that."


Apple Watch Activity glances for stand goal progress

A perfect example of Oracle Cloud UX strategy and design philosophy together. Jeremy glances back at his Activity app and sees his new stand goal progress. That standing desk is paying off!

But, innovating user experience in Oracle is an activity that definitely does not stand still. We'll explore how such innovation and design progress pays off for enterprise users in a future blog post.

Got Time Now?

Discover more:  

New PeopleTools Mobile Book

Jim Marion - Fri, 2015-06-26 18:44

My wife and I have been writing another book. We are reviewing proofs now, which means we are getting close to publication. I did a quick search on Amazon and see that Amazon is taking pre-orders: PeopleSoft PeopleTools: Mobile Applications Development. Publication date is currently set for October 16, 2015, which means it will publish just before OpenWorld. Fluid and MAP have been out for about a year. If you guessed that a new PeopleTools mobile book would cover these mobile technologies, you guessed correctly. But I saw no reason to stop there. After describing how to use Fluid and MAP, the book moves on to building responsive mobile applications using standard HTML5 development tools and libraries including jQuery Mobile and AngularJS. Just today I spoke with a customer still using PeopleTools 8.50. What are the odds that customer will be using PeopleTools 8.54 in the next year? The second section of this book, using HTML5, is perfect for a customer in this situation because it describes how to connect a modern single-page application to a PeopleSoft back end using iScripts and REST services (one chapter for each back-end solution). The book finishes with examples of building native and hybrid applications for PeopleSoft using the Android SDK, Apache Cordova (my personal favorite), and Oracle's Mobile Application Framework. Here is a rough outline:

Chapter 1 shows you how to prepare your workstation for mobile development. This includes configuring HTML5 browsers, developer tools, and emulators.

Chapter 2 digs into Fluid, showing two examples of creating Fluid pages. The first is a basic page whereas the second is a two-column responsive page. This chapter covers search pages; toolbar actions; and fluid field, page, and component features. The point of this chapter is to help the reader feel comfortable with Fluid. Fluid includes a lot of new features based on HTML5, CSS3, and JavaScript. I really want customers to understand, however, that they can build Fluid pages using core PeopleTools without any knowledge of the modern web concepts. Of course, you can build some really amazing solutions if you know HTML5, CSS3, and JavaScript.

Chapter 3 explains the new Mobile Application Platform (MAP): what it is, when to use it, and how to use it. A chapter wouldn't be complete without examples, so there are plenty of examples to help you start your next MAP project.

Chapter 4 segues into modern mobile development. The rest of the book takes the user interface outside of PeopleTools. Before moving away from Application Designer, however, we need a data model and a scenario. This chapter presents the scenario and lays the foundation for the rest of the chapters. In this chapter you will work with SQL and the Documents module.

Chapter 5 shows us how to create our first HTML5 front end. I wanted to make this chapter as simple as possible so I used jQuery Mobile. In this chapter the reader will write very basic HTML and have the opportunity to see how jQuery Mobile progressively enhances simple markup to create impressive mobile solutions.

Chapter 6 is the exact opposite of chapter 5. Here I wanted to demonstrate flexibility and performance. This chapter is intentionally designed to provide a challenge for developers. Readers tell me it is a good chapter, perhaps a little intimidating, but very worthwhile. In this chapter you will work with AngularJS, Topcoat, and FontAwesome.

Chapter 7 shows the reader how to build back-end services for Chapters 5 and 6 using iScripts.

Chapter 8 is the same as Chapter 7 but uses REST services instead of iScripts. If you are new to PeopleSoft REST services and want to learn how to configure REST services as well as how to work with Documents to serve JSON from Integration Broker, then you will find this chapter very valuable.

Chapter 9 shifts from HTML5 to native. In this chapter the reader will learn how to use the Android SDK to consume the services built in chapter 8. The point of this chapter is not to teach Android development but rather how to consume PeopleSoft services from Android.

Chapter 10 turns to a native application category described as hybrid applications. In this chapter the reader will learn how to convert the Chapter 6 prototype into an on-device application that has access to device-specific features such as the camera. In fact, the example shows how to use the Cordova API to take a selfie.

Chapter 11 brings us back to Oracle-specific technology by showing how to build a hybrid application using Oracle's Mobile Application Framework (MAF). I chose to spend a little more time in this chapter to teach some of the specifics of MAF. For example, I wasn't very excited about the default appearance of buttons on Android so I included steps showing how to extend the MAF skin.

Publication is still a few months away, but we are getting close. I'm really hoping to be able to give away copies during my OpenWorld session this year.

Release of Empire State College Case Study on e-Literate TV

Michael Feldstein - Fri, 2015-06-26 15:03

By Phil Hill

Today we are thrilled to release the fourth case study in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities.

We are adding two episodes from Empire State College (ESC), a school that was founded in 1971 as part of the State University of New York. Through a lot of one-on-one, student-faculty interactions, the school was designed to serve the needs of students who don’t do well at traditional colleges. What problems are they trying to solve? How do students view some of the changes? What role does the practice of granting prior-learning assessments (PLA) play in non-traditional students’ education?

You can see all the case studies (either 2 or 3 episodes per case study) at the series link, and you can access individual episodes below.

ESC Case Study: Personalized Prior Learning Assessments

ESC Case Study: Personalizing Personalization

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In the Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to accessing synchronized transcripts and accessing transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In The Telling’s help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We will release one more case study in early July, and we also have two episodes discussing the common themes we observed on the campuses. We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV.

Enjoy!

The post Release of Empire State College Case Study on e-Literate TV appeared first on e-Literate.

SQL*Plus tips #7: How to find the current script directory

XTended Oracle SQL - Fri, 2015-06-26 13:06

You know that if we want to execute another script from the current script directory, we can call it through @@, but sometimes we want to know the current path exactly, for example if we want to spool something into a file in the same directory.
Unfortunately we cannot use “spool @spoolfile”, but it is easy to find this path, because we know that SQL*Plus shows this path in the error when it can’t find @@filename.

So we can simply get this path from the error text:

rem Simple example how to get path (@@) of the current script.
rem This script will set "cur_path" variable, so we can use &cur_path later.
 
set termout off
spool _cur_path.remove
@@notfound
spool off;
 
var cur_path varchar2(100);
declare 
  v varchar2(100);
  m varchar2(100):='SP2-0310: unable to open file "';
begin v :=rtrim(ltrim( 
                        q'[
                            @_cur_path.remove
                        ]',' '||chr(10)),' '||chr(10));
  v:=substr(v,instr(v,m)+length(m));
  v:=substr(v,1,instr(v,'notfound.')-1);
  :cur_path:=v;
end;
/
set scan off;
ho (rm _cur_path.remove 2>&1  | echo .)
ho (del _cur_path.remove 2>&1 | echo .)
col cur_path new_val cur_path noprint;
select :cur_path cur_path from dual;
set scan on;
set termout on;
 
prompt Current path: &cur_path

I used here the trick of reading file content into a variable, which I already showed in “SQL*Plus tips. #1”.
UPDATE: I’ve replaced this script with a cross platform version.

Also I did it with sed and rtrim+ltrim, because 1) I have sed even on Windows; and 2) I'm too lazy to write a big PL/SQL script that will support 9i-12c, i.e. without regexp_substr/regexp_replace, etc.
But of course you can rewrite it without depending on sed, if you use windows without cygwin.

PS. Note that “host pwd” returns only the directory where SQL*Plus was started, not the executed script's directory.
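
As a quick usage sketch (the file name is made up): once "cur_path" is set, you can spool a log file next to the running script. The extra period terminates the substitution variable name:

-- hypothetical usage of &cur_path to spool into the script directory
spool &cur_path.install_log.txt
select * from dual;
spool off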

Download latest version

Categories: Development

Quickly create a hundred databases and users

Yann Neuhaus - Fri, 2015-06-26 13:00
Do you need a hundred databases and users for training etc. in PostgreSQL?
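
One possible sketch (the names are made up, and this is not necessarily the approach from the post): generate the DDL with SQL itself, spool it to a file, then run it from psql:

-- generate one CREATE USER / CREATE DATABASE pair per student
\t on
\a
\o create_training.sql
SELECT format('CREATE USER student%1$s PASSWORD %2$L; CREATE DATABASE db%1$s OWNER student%1$s;',
              i, 'train' || i)
FROM generate_series(1, 100) AS i;
\o
\t off
\a
\i create_training.sql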

Swiss Postgres Conference 2015

Yann Neuhaus - Fri, 2015-06-26 11:00

On the 26th of June I had the chance to attend the second Swiss Postgres Conference at the HSR Rapperswil. It was packed with interesting sessions.

Log Buffer #429: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-06-26 06:47

This Log Buffer Edition gathers a wide sample of blogs and then purifies the best ones from Oracle, SQL Server and MySQL.

Oracle:

  • If you take a look at the “alter user” command in the old 9i documentation, you’ll see this: DEFAULT ROLE Clause.
  • There’s been an interesting recent discussion on the OTN Database forum regarding “Index blank blocks after a large update that was rolled back.”
  • 12c Parallel Execution New Features: 1 SLAVE distribution
  • Index Tree Dumps in Oracle 12c Database (New Age)
  • Is it possible to cause tables to be stale with only tiny amounts of change?

SQL Server:

  • Making Data Analytics Simpler: SQL Server and R
  • Challenges with integrating MySQL data in an ETL regime and the amazing FMTONLY trick!
  • Azure Stream Analytics aims to extract knowledge structures from continuous ordered streams of data by real-time analysis.
  • Grant User Access to All SQL Server Databases
  • SQL SERVER – How Do We Find Deadlocks?

MySQL:

  • Efficient Use of Indexes in MySQL
  • SHOW ENGINE INNODB MUTEX is back!
  • Business-critical MySQL with DR in vCloud Air
  • Become a MySQL DBA blog series – Common operations – Schema Changes.
  • Building a Better CREATE USER Command

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Epoch

Dominic Brooks - Fri, 2015-06-26 05:15

Note to self because it’s just one of those date/timezone-related topics which just doesn’t seem to stick…

Epoch/Unix time – See https://en.wikipedia.org/wiki/Unix_time

Unix time (also known as POSIX time or erroneously as Epoch time) is a system for describing instants in time, defined as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970,

Firstly, when converting from an Oracle date or timestamp, we need to work from UTC, not local time.

select systimestamp
,      ((extract( day    from systimestamp - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*24*60*60)
      + (extract( hour   from systimestamp - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*60*60)
      + (extract( minute from systimestamp - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*60)
      + (round(extract( second from systimestamp - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY')))))*1000
        now_epoch
from dual;

SYSTIMESTAMP                         NOW_EPOCH
----------------------------------- -------------
26-JUN-15 11.57.09.634813000 +01:00 1435319830000 

If we plug that epoch into somewhere like http://www.epochconverter.com/, we can see it’s wrong – it’s 1 hour (3,600,000 ms) ahead of where it should be:

Assuming that this timestamp is in milliseconds:
GMT: Fri, 26 Jun 2015 11:57:10 GMT
Your time zone: 26 June 2015 12:57:10 GMT+1:00 DST

That’s because we need to work in UTC using SYS_EXTRACT_UTC, i.e.

with now as
(select systimestamp                  now_ts
 ,      sys_extract_utc(systimestamp) now_utc
 ,      ((extract( day    from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*24*60*60)
       + (extract( hour   from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*60*60)
       + (extract( minute from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*60)
       + (round(extract( second from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY')))))*1000
         now_epoch
 from dual)
select *
from   now;

NOW_TS                              NOW_UTC             NOW_EPOCH
----------------------------------- ------------------ -------------
26-JUN-15 12.03.35.231688000 +01:00 26-JUN-15 11.03.35 1435316626000 

Better!

This came about because there is a table storing epoch/unix time values originating from Java code; the developer said that the conversion was losing 1 hour and speculated that the DB might be “unsafe” when dealing with epoch time.

So, let’s convert this number back to a date or a timestamp and crush that notion.

with now as
(select systimestamp                  now_ts
 ,      sys_extract_utc(systimestamp) now_utc
 ,      ((extract( day    from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*24*60*60)
       + (extract( hour   from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*60*60)
       + (extract( minute from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*60)
       + (round(extract( second from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY')))))*1000
         now_epoch
 from dual)
select now_ts
,      now_utc
,      now_epoch
,      TO_TIMESTAMP('01-01-1970','DD-MM-YYYY') + NUMTODSINTERVAL(now_epoch/1000,'SECOND') 
       epoch_back_to_ts
from   now;

NOW_TS                              NOW_UTC             NOW_EPOCH    EPOCH_BACK_TO_TS 
----------------------------------- ------------------ ------------- ------------------
26-JUN-15 12.09.45.671605000 +01:00 26-JUN-15 11.09.45 1435316986000 26-JUN-15 11.09.46 

Our conversion back is still in UTC, and there are several ways we might want to convert it back to a local or named time zone:

with now as
(select systimestamp                  now_ts
 ,      sys_extract_utc(systimestamp) now_utc
 ,      ((extract( day    from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*24*60*60)
       + (extract( hour   from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*60*60)
       + (extract( minute from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY'))*60)
       + (round(extract( second from sys_extract_utc(systimestamp) - TO_TIMESTAMP('01/01/1970', 'MM/DD/YYYY')))))*1000
         now_epoch
 from dual)
select now_ts
,      now_utc
,      now_epoch
,      TO_TIMESTAMP('01-01-1970','DD-MM-YYYY') + NUMTODSINTERVAL(now_epoch/1000,'SECOND') 
       epoch_back_to_utc
,      CAST(
       FROM_TZ(TO_TIMESTAMP('01-01-1970','DD-MM-YYYY') + NUMTODSINTERVAL(now_epoch/1000,'SECOND'),'UTC')
            AT TIME ZONE 'Europe/London' AS TIMESTAMP) 
       back_to_name
,      CAST(
       FROM_TZ(TO_TIMESTAMP('01-01-1970','DD-MM-YYYY') + NUMTODSINTERVAL(now_epoch/1000,'SECOND'),'UTC')
            AT LOCAL AS TIMESTAMP) 
       back_to_local
,      CAST(
       FROM_TZ(TO_TIMESTAMP('01-01-1970','DD-MM-YYYY') + NUMTODSINTERVAL(now_epoch/1000,'SECOND'),'UTC')
            AT TIME ZONE DBTIMEZONE AS TIMESTAMP) 
       back_to_dblocal
FROM   now;

NOW_TS                              NOW_UTC             NOW_EPOCH     EPOCH_BACK_TO_UTC  BACK_TO_NAME       BACK_TO_LOCAL      BACK_TO_DBLOCAL  
----------------------------------- ------------------ -------------- ------------------ ------------------ ------------------ ------------------
26-JUN-15 12.12.23.936868000 +01:00 26-JUN-15 11.12.23 1435317144000  26-JUN-15 11.12.24 26-JUN-15 12.12.24 26-JUN-15 12.12.24 26-JUN-15 12.12.24 
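
If this conversion is needed in more than one place, it is easy to wrap in a small helper. A minimal sketch (the function name epoch_ms_to_utc is mine, not from the original post), reusing the same arithmetic as above:

-- epoch milliseconds to UTC timestamp, same expression as in the queries above
create or replace function epoch_ms_to_utc (p_epoch_ms in number)
return timestamp
as
begin
  return to_timestamp('01-01-1970','DD-MM-YYYY')
         + numtodsinterval(p_epoch_ms/1000, 'SECOND');
end;
/

select epoch_ms_to_utc(1435316986000) epoch_back_to_utc from dual;

When a local representation is needed, wrap the result in FROM_TZ(..., 'UTC') AT TIME ZONE ... exactly as shown above.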

OTN Virtual Technology Summit - Spotlight on Java Track

OTN TechBlog - Thu, 2015-06-25 14:19

The OTN Virtual Technology Summit is a quarterly series of interactive online events featuring hands-on sessions by expert presenters drawn from the community. The events are free, but registration is required. Each event has four tracks: Java, Database, Systems, and Middleware. Registration gets you access to all four tracks, along with on-demand access to all sessions after the event so you can binge on all that technical expertise. 

Here's the skinny on the Java track for the next event.

Java Sessions:

Docker for Java Developers
By Roland Huss
Docker, the OS-level virtualisation platform, is taking the IT world by storm. In this session, we will see what Docker offers us Java developers. It is now possible to create truly isolated, self-contained and robust integration tests in which external dependencies are realised as Docker containers. Docker also changes the way we ship applications: we deploy not only application artifacts like WARs or EARs but also their execution contexts. Besides elaborating on these concepts and more, this presentation will focus on how Docker can best be integrated into the Java build process by introducing a dedicated Docker Maven plugin, which is shown in a live demo.

Pi on Wheels - Make Your Own Robot
By Roland Huss
The Pi on Wheels is an affordable open-source DIY robot that is ideal for learning Java-related technologies in the context of the Internet of Things. In this session we will talk about how 3D printing works and how it can be utilized to build robots. The most fascinating aspect of 3D printing is that it is astonishingly easy to customize the robot, letting you build something completely new and different. We provide a Java-based IDE that allows you to control and program the robot; in addition, it can be used to programmatically design 3D geometries.

Shakespeare Plays Scrabble
By Jose Paumard
This session will show how lambdas and Streams can be used to solve a toy problem based on Scrabble. We are going to solve this problem with the Scrabble dictionary, the list of words used by Shakespeare, and the Stream API. The three main steps shown will be mapping, filtering and reduction. The mapping step converts a stream of a given type into a stream of another type. Then the filtering step is used to filter out the words not allowed by the Scrabble dictionary. Finally, the reduction can be as simple as computing a max over a given stream, but it can also be used to compute more complex structures. We will use these tools to extract the three best words Shakespeare could have played.

OTN Wants You!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are a member of the Oracle Technology Network Community you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it: Oracle Community - Rewards & Recognition FAQ.

RAC buffer states: XCUR, SCUR, PI, CI

Yann Neuhaus - Thu, 2015-06-25 13:43

In RAC, blocks are copied across instances by the Global Cache Service. In single instance, we have only two statuses: CR for consistent read clones, where undo has been applied, and CUR for the current version that can be modified (then becoming a dirty block). It's a bit more complex in RAC. Here is a brief example to show the buffer statuses in the Global Cache.

SCUR: shared current

I connect to one instance (I have a few singleton services: service ONE is on instance 3 and service TWO is on instance 1)

SQL> connect demo/demo@//192.168.78.252/ONE.racattack
Connected.
and I query a row by ROWID in order to read only one block
SQL> select rowid,DEMO.* from DEMO where rowid='&rowid1';
old   1: select rowid,DEMO.* from DEMO where rowid='&rowid1'
new   1: select rowid,DEMO.* from DEMO where rowid='AAAXqxAALAAACUkAAD'

ROWID                      ID          N
------------------ ---------- ----------
AAAXqxAALAAACUkAAD         10         10
Here is the status of the buffer in the buffer cache:
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         3          1 scur       00000000B9FEA060 N N N N N N
The block has been read from disk by my instance. Without modification it is in SCUR status: it's the current version of the block and can be shared.

SCUR copies

Now connecting to another instance

SQL> connect demo/demo@//192.168.78.252/TWO.racattack
Connected.
and reading the same block
SQL> select rowid,DEMO.* from DEMO where rowid='&rowid1';
old   1: select rowid,DEMO.* from DEMO where rowid='&rowid1'
new   1: select rowid,DEMO.* from DEMO where rowid='AAAXqxAALAAACUkAAD'

ROWID                      ID          N
------------------ ---------- ----------
AAAXqxAALAAACUkAAD         10         10
let's see what I have in my Global Cache:
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id,lock_element_addr;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         1          1 scur       00000000B0FAADC0 N N N N N N
         3          1 scur       00000000B9FEA060 N N N N N N
non-modified blocks can be shared: I have a copy on each instance.

XCUR: exclusive current

I'll start a new case, so I flush the buffer cache

connecting to the first instance

SQL> connect demo/demo@//192.168.78.252/ONE.racattack
Connected.
I'm now doing a modification with a select for update (which writes the lock in the block, so it's a modification)
SQL> select rowid,DEMO.* from DEMO where rowid='&rowid1' for update;
old   1: select rowid,DEMO.* from DEMO where rowid='&rowid1' for update
new   1: select rowid,DEMO.* from DEMO where rowid='AAAXqxAALAAACUkAAD' for update

ROWID                      ID          N
------------------ ---------- ----------
AAAXqxAALAAACUkAAD         10         10
now the status in the buffer cache is different:
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id,lock_element_addr;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         3          1 cr         00               N N N N N N
         3          1 xcur       00000000B9FEA060 Y N N N N N
So I have two buffers for the same block. The buffer that was read from disk is no longer current because it holds the rows as they were before the modification; it stays in consistent read (CR) status. The modified one is now the current one but cannot be shared: it's the XCUR buffer where modifications will be done.

CR consistent read

Now I'll read it from the second instance

SQL> connect demo/demo@//192.168.78.252/TWO.racattack
Connected.
SQL> select rowid,DEMO.* from DEMO where rowid='&rowid1';
old   1: select rowid,DEMO.* from DEMO where rowid='&rowid1'
new   1: select rowid,DEMO.* from DEMO where rowid='AAAXqxAALAAACUkAAD'

ROWID                      ID          N
------------------ ---------- ----------
AAAXqxAALAAACUkAAD         10         10
the block is read and I have another CR buffer:
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id,lock_element_addr;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         1          1 cr         00               N N N N N N
         3          1 cr         00               N N N N N N
         3          1 xcur       00000000B9FEA060 Y N N N N N
the CR buffer is at another SCN. A block can have several CR buffers (by default up to 6 per instance).
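
As a quick way to see those clones (a query of mine, not from the original demo), you can count CR buffers per block across all instances:

-- blocks that currently have more than one CR clone in an instance
select inst_id, file#, block#, count(*) cr_clones
from   gv$bh
where  status = 'cr'
group by inst_id, file#, block#
having count(*) > 1
order by cr_clones desc;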

PI: past image

Let's do a modification from the other instance

SQL> connect demo/demo@//192.168.78.252/TWO.racattack
Connected.
SQL> select rowid,DEMO.* from DEMO where rowid='&rowid1' for update;
old   1: select rowid,DEMO.* from DEMO where rowid='&rowid1' for update
new   1: select rowid,DEMO.* from DEMO where rowid='AAAXqxAALAAACUkAAD' for update

ROWID                      ID          N
------------------ ---------- ----------
AAAXqxAALAAACUkAAD         10         10
My modification must be done on the current version, which must be shipped to my instance
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id,lock_element_addr;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         1          1 cr         00               N N N N N N
         1          1 cr         00               N N N N N N
         1          1 xcur       00000000B0FAADC0 Y N N N N N
         3          1 cr         00               N N N N N N
         3          1 pi         00000000B9FEA060 Y N N N N N
and the previous current version remains as a PI (past image). It cannot be used for consistent reads, but it is kept for recovery: if the current block is lost, redo can be applied to the past image to recover it. See Jonathan Lewis's explanation.

Checkpoint

As the past images are there in case of recovery, they are not needed once an instance has checkpointed the current block.

SQL> connect sys/oracle@//192.168.78.252/ONE.racattack as sysdba
Connected.
SQL> alter system checkpoint;
System altered.
after the checkpoint on the instance that has the XCUR, there is no dirty buffer in any instance:
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id,lock_element_addr;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         1          1 cr         00               N N N N N N
         1          1 cr         00               N N N N N N
         1          1 xcur       00000000B0FAADC0 N N N N N N
         3          1 cr         00               N N N N N N
         3          1 cr         00               N N N N N N
the PI became a consistent read (CR) buffer.

Summary

Here are the states we have seen:

XCUR: current version of the block, held with an exclusive lock

SCUR: current version of the block that can be shared because no modifications were done

CR: only valid for consistent read, after applying the necessary undo to get it back to the required SCN

PI: past image of a modified current block, kept until the latest version is checkpointed

and the other possible states:

FREE: the buffer is not currently in use

READ: the block is being read from disk

MREC: the block is being recovered for media recovery

IREC: the block is being recovered for crash recovery
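
To get an overview of how the buffers of an object are spread across these states in a running RAC, a simple aggregate on gv$bh does the trick (a sketch reusing the query from the demos above):

-- buffer status distribution per instance, ignoring free buffers
select inst_id, status, count(*) buffers
from   gv$bh
where  status != 'free'
group by inst_id, status
order by inst_id, status;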