Feed aggregator

The Importance Of Student Control Of Learning, Especially For Working Adults

Michael Feldstein - Tue, 2015-07-07 13:32

By Phil Hill

When giving keynotes at conferences over the past two years, I have observed that some of the best non-verbal feedback occurs when pointing out that personalized and adaptive learning does not equal black-box algorithms choosing content for students. Yes, there are plenty of approaches pitching that solution (Knewton in its early state being the best-known if not most-current example), but there are other approaches designed to give faculty or instructional designers control over learning paths or even to give students control. There seems to be a sense of relief, particularly from faculty members, when discussing the latter approach.

In the Empire State College case study on e-Literate TV, I found the conversation Michael had with [faculty member] Maya Richardson to be a great example of not just giving faculty insight into student learning but also giving students control over their own learning. As Maya explains, this is particularly important for the working adult population going back to school. The software used in this pedagogical approach is CogBooks.

Michael Feldstein: While so-called personalized learning programs are sometimes criticized for moving students lockstep through a linear process, Maya emphasizes the choice and control that students have regarding how they go through the content.

Maya Richardson: What it is—it’s a concept mapping, so they take concepts here, concepts here, and then there’s a split-off, and those concepts then split off and then split off and split off. And then, depending on the student, now students can go, “OK, I understood that concept. I already know that concept, so I don’t need to go to that one right now. I can skip and go here.” This is where the individualized and personalized learning comes in—like a smorgasbord, you pick and choose what you want to learn.

And then you come in; you do the discussion, and you either have more to add to it and a greater enrichment of the experience for yourself but also for your classmates. Then there are those who go, “OK, I need to go through each one of these, step by step, and learn each one, and then move down to learning these and then these and then these and then these,” and then at the end, they’ve gotten so much more out of it.

Maya then goes on to describe the visibility this pedagogical approach gives her – not just into which concepts the student has mastered but also into the learning process and choices that the student makes.

Maya Richardson: It’s that kind of opportunity that I can now watch and go, “OK, so you’re the kind of learner that I can just basically let you go and do what you need to do. I am not going to be interrupting your learning path because you have a very positive learning path. I can watch you do this. It’s a great pattern. You’re going for it,” and I’m just going, “Wonderful. Just come in, do the discussion, do your test,” and I’m like, “A-student, perfect, great, way to go.” Then I see the ones that are sort of sporadic. They come in, they touch and go, and I go, “OK, let me see how you’re doing.”

There’s a lot more in this conversation, but I want to skip ahead a minute or so to this key point about student control, or agency.

Michael Feldstein: Maya and her colleagues are thoughtful about how this kind of software fits with the holistic approach that ESC takes towards education.

Maya Richardson: The personalized learning part of it is taking ownership. I think it motivates. As an adult learner, it’s really important to find that you have some control over—when I go in, I know what I want to learn. I hope I know what I want to learn, and I hope I learn it at the end.

There are disciplines and contexts where having adaptive algorithms choose appropriate content makes sense, but I find that too often this is the assumption for all of personalized learning. This example from Empire State College illuminates the growing importance of student control, especially for the growing population of working adults.

The post The Importance Of Student Control Of Learning, Especially For Working Adults appeared first on e-Literate.

OTN Virtual Technology Summit – Spotlight on Operating Systems, Virtualization Technologies and Hardware

OTN TechBlog - Tue, 2015-07-07 12:56


The Virtual Technology Summit is a series of interactive online events with hands-on sessions and presenters answering technical questions. The events are sponsored by the Oracle Technology Network (OTN). These are free events but you must register.

Operating Systems, Virtualization Technologies and Hardware Sessions:

Designing a Multi-Layered Security Strategy - Security is a concern of every IT manager, and it is clear that perimeter defense (trying to keep hackers out of your network) is not enough. At some point someone with bad intentions will penetrate your network, and to prevent significant damage it is necessary to have multiple layers of defense. Hear about Oracle’s defense in depth for data centers, including some new and unique security features built into the new SPARC M7 processor.

How to Increase Application Security & Reliability with Software in Silicon Technology - Learn about Software in Silicon Application Data Integrity (ADI) and how you can use this revolutionary technology to catch memory access errors in production code. Also explore key features for developers that make it simple to create secure and reliable high-performance applications.

Eliminate Cloud Security Threats with Oracle Systems - Learn about the security threats to your public and private clouds and gain insight into how the Oracle Security Architecture helps reduce risk. This webcast will provide detailed information on the top 20 cloud security threats and how different parts of the Oracle systems stack help eliminate each threat.

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are a member of the Oracle Technology Network Community you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it in our FAQ: Oracle Community – Rewards & Recognition FAQ.

Lots of OAUX Updates

Oracle AppsLab - Tue, 2015-07-07 11:35

While I spent June wrapping up conference season at OHUG and Kscope, Ultan (@ultan), Misha (@mishavaughan) and company (@usableapps) have been busily publishing content.

This here is a wrap-up of that content, but let’s be honest. If you like OAUX content, you really should follow the official OAUX blogs: Usable Apps in the Cloud, VoX, and User Experience Assistance: Cloud Design & Development.

Oh and follow @usableapps too. That’s done, so let’s get recapping.

Strategy Anyone?

Over on VoX, you can read all about Oracle’s Cloud Application user experience strategy in three short posts.

In the first part, read about how we apply Simplicity, Mobility, Extensibility to Cloud Applications. In part two, read about big-picture innovation and how it drives our Glance, Scan, Commit design philosophy. Finally, in the big finish, read about how we apply all this to designing and building experiences for our cloud users.

As a bonus, our team, our projects and our strategic approach to emerging technologies are mentioned in each post. So, yay us.


More Apple Watch

You’ve read our takes, and Ultan’s, on the Apple Watch, and now our GVP, Jeremy Ashley (@jrwashley) has shared his impressions. Good stuff in there, check it out if you’re looking for reasons to buy a smartwatch.


Not convinced of the value? Longtime friend of the ‘Lab, David Haimes (@dhaimes) might have what you need to go from cynic to believer.

We Heart APIs

Channeling his inner Mark (@mvilrokx), Ultan has a two-minute tech tip for Bob Rhubart of OTN (@OTNArchbeat) about APIs, how valuable they are, and how good ones make all the difference.

We love us some APIs, especially the good ones. Developers are users too.

Speaking of APIs and developers, check out two videos that tie developer use cases with PaaS4SaaS.

Big Finish, ERP Cloud and Cake

And finally, let’s finish with some ERP Cloud goodness, a post on UX, ROI and cake and a post on cake starring David.


Told you they’ve been busy.

Oracle Process Cloud Service - SOAP Web Service Integration

Andrejus Baranovski - Tue, 2015-07-07 09:03
Oracle Process Cloud (https://cloud.oracle.com/process) allows you to invoke SOAP services, along with REST services. You can define a library of SOAP services in an Oracle Process Cloud application and use them within the process. This helps to integrate with business logic running on premise.

I have implemented a regular SOAP service in ADF BC, just to test how invoking it from Process Cloud works:


The SOAP service method accepts an employee ID and checks the status. The ADF BC application is deployed on a standalone WLS. Make sure to select the correct deployment profile when deploying ADF BC with SOAP on a standalone server; otherwise the SOAP service reference will be missing in the deployed application. You should use the ADF BC service deployment profile:


Once the SOAP service is deployed, you can access the WSDL file through Enterprise Manager. Oracle Process Cloud currently doesn't support direct SOAP references; instead, you must provide a copy of the WSDL file for Web Service registration:


Download the WSDL file from that URL (here is the sample WSDL I was using for my test - HrModuleService.wsdl). This is how it looks; it contains the SOAP service description:
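
If you prefer the command line, grabbing that WSDL copy is a one-liner. A quick sketch with curl (the host, port and context root below are made-up placeholders - substitute the endpoint of your own deployed service):

curl -o HrModuleService.wsdl "http://yourwlshost:7001/your-context-root/HrModuleService?WSDL"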


Here you can see the Oracle Process Cloud interface. It looks quite user-friendly and functional, which is what I like about it. On the left side, there is a menu with different objects - Processes, Web Forms, Web Services, etc.:


Adding a new SOAP service reference is fairly easy: provide a name and upload the WSDL file:


Once the service is registered, we can go to the process diagram and select the Service component from the palette:


Drag and drop it onto the diagram; the Implementation section will open, where you should specify the SOAP service reference to be invoked:


Select Service Call from the list; this allows you to invoke a SOAP Web Service:


Once the Service Call option is selected, you will be able to select a Web Service reference. I have defined only one service; it is listed in the popup - select it:


The Operation field lists all available methods from the service; select one of the operations to be invoked:


Data associations are available to define the correct mapping of input/output values for the SOAP Web Service call; this can be done through the Process Cloud composer window:


If you are interested in Human Task UI implementation in Oracle Process Cloud, read my previous post about REST service integration: Oracle Process Cloud Service - Consuming ADF BC REST Service in Web Form.

Oracle Announces Plans for Visual Studio 2015 Support

Christian Shay - Tue, 2015-07-07 07:36

Oracle plans to offer a new version of the Oracle Developer Tools for Visual Studio integrated with Microsoft Visual Studio 2015. This new version is planned to be available within one month of Visual Studio 2015's Release to Manufacturing (RTM) date. As with earlier releases of Visual Studio, Oracle has partnered closely with Microsoft as part of the Visual Studio Industry Partner Program to make this release possible. Keep an eye on the OTN .NET Developer Center or follow us on Twitter for the upcoming release announcements.

CBO series

Jonathan Lewis - Tue, 2015-07-07 06:32

Update 9th July 2015:  part 4 now published.

I’ve changed the catalogue from a post to a page so that it gets a static address: https://jonathanlewis.wordpress.com/cbo-series/

I’ll leave this posting here for a while, but will probably remember to remove it some time in the future.


Pythian Introduces Cassandra Consulting & Operational Support

Pythian Group - Tue, 2015-07-07 06:00

Over the past ten years, the technology industry has begun to adopt a whole new way of thinking about the database and data storage. This is largely the result of the fast moving, high-volume and non-traditional types of data that are being generated to support both internal business processes and real-time web applications. Using a NoSQL database enables businesses to quickly and cost-effectively process large amounts of data, whether the data is hosted in the enterprise or on the cloud.

Pythian’s new Cassandra services address the needs of customers deploying Apache Cassandra. Pythian offers highly knowledgeable and experienced Cassandra experts who can guide you to success with your Cassandra deployment by filling critical skills and capacity gaps, getting your Cassandra instance up and running quickly, and ensuring that it performs optimally as you move forward.

If you’re thinking of implementing Cassandra, or already have, watch our webinar, Getting Started With Cassandra, which covers key topics for starting out, such as when you should use Cassandra, potential challenges, real world Cassandra applications and benefits, and more.

Learn more about Pythian’s Cassandra Services.

The post Pythian Introduces Cassandra Consulting & Operational Support appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Oracle Support Essentials Blog

Joshua Solomin - Tue, 2015-07-07 01:25



Do not miss this opportunity to attend a live Training Event for My Oracle Support or Cloud Support Portal.

Do you want to be more effective in using the Cloud Support Portal or My Oracle Support? Then the My Oracle Support Essentials Live Webcast Series is for you. It covers the basics, such as an overview of the Cloud Support Portal, how to use My Oracle Support, and working effectively with Oracle Support. It also covers more detailed, feature-based topics, such as how CUAs can group users and assets to improve their efficiency when using My Oracle Support.

The benefit of the My Oracle Support Essentials series is the ability to engage directly with Support subject matter experts by asking questions in the live session. You can also download and review PDF files of the session’s materials, or make use of the accompanying Oracle Support Training How-To video series (Document 603505.1) to re-watch specific examples in more detail.

You can verify your knowledge after completing the Oracle Support Essentials series by undertaking the Oracle Support Accreditation Level 1 for My Oracle Support Users (Document 1579751.1).

CALL TO ACTION:

Never miss out on a session: View the schedule (Document 1676694.1) and enroll for topics that interest you. Stay informed of new topics via an email notification process by clicking the star icon to mark this document as a favourite in My Oracle Support. Complete the notification process by turning on Hot Topics: for example, click My Account > Hot Topics E-Mail and select notifications for your favourite documents.

Influence future topics by voting and supplying feedback via the community poll. If your area of interest is not in the current schedule or listed in the poll, then add your topic as a comment and describe features or tasks you would like included in this series.

 




Kerberos SSO with Liferay 6.1 v2

Yann Neuhaus - Mon, 2015-07-06 23:00


A little bit less than one year ago, I wrote a blog about how to set up Kerberos SSO on Liferay using Apache httpd as a front-end, Tomcat as a back-end, and mod_auth_kerb and mod_jk to transfer the information between these two components. Unfortunately, after some months, it seems that something changed either on the Liferay side or on the Apache side, because the exact same configuration wasn't working anymore. I solved this issue (or at least applied a workaround) two or three months ago because I needed a working environment for some tests, but until now I didn't have the time to share my findings.


That's why in this blog I will try to explain what was needed in my case to avoid an issue with the configuration I shared in my previous blog. I decided not to update my previous blog but rather to create a new one because my previous blog may still be working as expected with certain versions of Liferay!


I. The issue


In my case, after some months where everything was fine, the SSO suddenly stopped working. Access to the Liferay welcome page took a looooooong time or even never ended (or was rejected by Apache httpd, depending on the configuration).


With the proper logging information or debugging steps, if you face the same issue, you should be able to see that Liferay in fact no longer understands the information that Apache httpd provides, and the result is an almost infinite loop in the login process.


From a debugging point of view, my previous blog post prevented an administrator from accessing Liferay without a valid Kerberos ticket. That's why in this solution I also incorporated some lines of code to prevent such cases and to allow users to still access Liferay even without a valid ticket. This can easily be removed for strict access requirements. Please be aware that this guest access is, by default, only valid for requests addressed directly to Tomcat (not the ones coming from Apache httpd)!


II. A possible solution


So the first thing that changed is the "KerberosAutoLogin" itself. I changed a few lines to get something cleaner, and I added some others to handle "guest" access in case a valid Kerberos ticket isn't found (lines 64 to 77). The result is the following:

(Screenshots: KerberosAutoLogin1.png, KerberosAutoLogin2.png)


Please find at the end of this blog the skeleton of the HOOK I built for my dev environment. This is basically the same as in my previous blog, except that I changed the code for you.


In this Java Class, I used the "LogFactoryUtil" for the logging to get something more standard but that requires you to configure the log level that you want. If you just want these logs for the debug phase, then you may want to replace all "logger.debug" or "logger.warn" with a simple "System.out.println". That should redirect all elements that you want to log to the default catalina.out log file.



The second thing to do is to modify the Java Class named "AutoLoginFilter". This Class is a kind of dispatcher for the login requests coming to Liferay. From what I saw, there is a little issue with this class that prevents our KerberosAutoLogin code from being executed properly or at all... Indeed, in our case the "REMOTE_USER" is set by Apache, and when this method is executed, our KerberosAutoLogin, which is one of the "_autoLogins" (see the code below), isn't called because the "remoteUser" variable isn't null!


To enable our KerberosAutoLogin to be executed even if the remoteUser isn't null, you can modify the default code. You can find the Java Class at the following URL: the code is on grepcode. Please be aware that the code of this Java Class is version dependent, which means that the code for Liferay 6.0 may not be exactly the same as for Liferay 6.1... So be careful ;).


I highlighted the lines I added for this specific case. If you take a look at the code, I also added a variable "REALM" to compare the Kerberos REALM that came from Apache httpd to the one that should be used (your Active Directory or KDC). This check can be used to prevent a user from a different REALM from logging in to your Liferay. If you don't want or don't need this additional security, you can just remove this variable and also remove the "if" test that uses it (remove lines 25, 65 and 88).

(Screenshots: AutoLoginFilter1.png, AutoLoginFilter2.png)


Please find at the end of this blog the skeleton of the EXT I built for my dev environment. Please also be aware that the KerberosAutoLogin is defined in a hook (custom-hook if you followed my previous blog) but you can't do the same for the AutoLoginFilter. Indeed, you will have to create your own EXT using the SDK, add this class to it and then deploy it to be able to see the updated version of the class loaded.


I only printed the method "processFilter" here because it's the only thing that needs to be modified, at least in my Liferay version 6.1.1! If you compare this method to the one on grepcode you will see that I compacted the code quite a lot to take less space, but I didn't change the code that isn't highlighted.


Once these two Java Classes are modified, you should be able to log in properly to Liferay using Kerberos tickets. If not, then enable the logging and that should help you to find out where your issue is. Of course you can still post a comment below and I will do my best to help you. I have also already seen some issues that were related to the new version of Apache... You may want to try the same thing with Apache 2.2 instead of 2.4! From my side, this was the first thing I tried to get the SSO working again and I still use Apache httpd 2.2 (instead of 2.4 as described in my previous blog). Another important thing to check is that your browser is properly set up for the SSO!


Skeleton of the EXT
Skeleton of the HOOK

 

Enterprise Software's Life Lessons

Floyd Teter - Mon, 2015-07-06 13:13
Well, I ain't always right but I've never been wrong.
Seldom turns out the way it does in a song.
Once in a while you get shown the light
In the strangest of places if you look at it right.
                               - From Jerry Garcia's "Scarlet Begonias"

Somebody asked the other day what I've learned from 25-plus years of working with, implementing, and developing enterprise software applications.  Kind of a life's lessons thing.  A pretty innocent question, but one that got me thinking.  And the more I thought, the longer the list became. I've been shown the light several times, usually in the strangest places and when I least expected it.  So the list grew to the point that I thought it might be worth sharing.  The list is not organized into subject areas, importance, thought streams, or sand piles.  I just wrote 'em down as I thought of them.

One caveat:  you may get the impression that my perspective here is a little negative...dark even.  Nothing could be further from the truth.  I love my job, I love being in this industry, and I love what I do and who I do it with...every day.  But, like much of what we learn, most of these ideas came from painful experiences.  I just wrote 'em the way I learned 'em.


So, without further delay:


  1. The Prime Directive: at no time shall anyone interfere with the natural progression of a project by introducing unnecessarily advanced technologies or over-engineered features.
  2. Nobody buys enterprise applications for the technology.  They buy for desired outcomes…end states.  Everything else is white noise.  The software vendor that best understands those desired outcomes and how to achieve them usually wins the business.  Enterprise software is a set of tools, not a finished house.
  3. If you’re in product management or product development, the most important word in the English language is “No”.
  4. If project success is measured by achieving the originally desired outcomes on time and within budget, enterprise applications projects (both developing and implementing) only succeed 39% of the time.  This stuff is hard.
  5. In the world of SaaS providers, 4 measures matter:  A) subscription revenue growth; B) subscription recurring revenue (often referred to in the negative sense as “churn”); C) percentage of subscription customers who have “gone live”; D) the time from original subscription to “go live” date.
  6. If you’re a consultant, your client will “vote you off the island” more often than not…usually at the next speed bump encountered.  It’s similar to being the manager of a sports team…the team manager’s reign usually ends with termination because something isn’t going right.  Don’t worry about it.  Be as good as you can be at doing what you do, be honest and transparent, and understand that it’s just the nature of the business.  So travel light.  You can’t please all of the people all of the time.
  7. In the SaaS market, good product gets you a seat at the table.  But good service keeps you there.
  8. Usable tools to complement enterprise applications are just as important as the applications themselves.
  9. There will always be glitches at launch that did not appear in testing.  Keep cool, tune out the screaming, and work the issue.
  10. When estimating a project for internal or external customers, the required deadline is never negotiable.  Someone in leadership stuck that stick in the sand long before you learned about the need.  It’s not changing until it slips.
  11. When estimating a project for internal or external customers, the required budget is never negotiable.  Someone in leadership stuck that stick in the sand long before you learned about the need.  It’s not changing until the need for change is manifested.
  12. Fast, inexpensive, restrained (in scope) and elegant (simple but effective) projects with small teams are much more likely to succeed than projects with big scope, long schedules, big budgets and big teams.
  13. If you find yourself speaking more than 20 percent of the time in any given exchange, you may want to consider shutting your yap and listening for a few minutes.
  14. If all of your stakeholders can’t describe your project in 30 seconds or less, you have organizational change management issues to resolve before you can hope to deliver.
  15. Whenever your budget burn rate varies more than 15 percent (plus or minus), you have a problem that must be corrected before it grows worse.
  16. A great user experience is no longer a competitive advantage; it’s simply a requirement for entering the market.
  17. There is a “love cycle” in enterprise software implementation projects.  Early on, it’s a love fest.  The love fest shortly becomes an “us against them, but we’re stuck with each other” atmosphere.  Finally comes the “thank goodness this thing is over.”  The same cycle always applies, whether the project is successful or not.  It’s just a symptom of the stress that comes from taking the risks represented by the project itself.
  18. Packaged integrations out of the box don’t work.  They’ll need revising or rebuilding. Just build it into the plan up front.
  19. Be prepared.
  20. Always, always, always follow the money.
  21. Customizing packaged software is the most difficult and expensive choice.  Be sure you understand what you can accomplish within the functionality of the software before you decide to customize.
  22. Burn your calories where it matters.  Moving buttons on a screen does not change delivery of a desired outcome.  Tweaking a business process might.
  23. Ease of use trumps depth of features every time. Complexity is not a sign of sophistication.  The most important product and project decisions often revolve around what to leave out…minimalism generally breeds success.
  24. When you reveal innovation, you have about a 24-month window before a competitor does it better, faster or cheaper.
  25. When you feel like you’re out of the communication loop, it’s probably too late to do better with communicating - the vote of no confidence has already taken place and you’re on the way out.  So, right from the beginning of any effort, keep in mind that you can never over-communicate.  Listen and talk - it’s the key to gaining the confidence of others.
  26. The customer is not always right.  Nevertheless, listen to, empathize with, and guide your customers (in that order)…regardless.
  27. Reporting and business intelligence are best planned earlier rather than later.  And information without context is just data.
  28. Mobile matters:  if you can’t offer productivity from a phone, it doesn’t much matter what else you offer.
  29. A project leader’s influence is inversely proportional to the size of the budget.
  30. If you’re fast because you’re quick, that’s good.  If you’re fast because you hurry, that’s bad.  The former indicates efficiency, while the latter just breeds mistakes.  As famed college basketball coach John Wooden often said:  "Be quick, but don’t hurry."
So, how about you?  Have some pearls of wisdom to share with the class?  Comment away!

Keep your Database Tidy – making sure a file exists before DBMS_DATAPUMP makes a mess

The Anti-Kyte - Mon, 2015-07-06 13:06

There are times when I wonder whether DBMS_DATAPUMP isn’t modelled on your average teenager’s bedroom floor.
If you’ve ever tried to start an import by specifying a file that doesn’t exist (or that DBMS_DATAPUMP can’t see) you’ll know what I mean.
The job fails, which is fair enough. However, DBMS_DATAPUMP then goes into a huff and refuses to “clean up its room”.
Deb has suggested that this sort of thing is also applicable to husbands.
Not that I have any idea of whose husband she’s talking about.
Anyway, you may consider it preferable to check that the export file you want to import from actually exists in the appropriate directory before risking the wrath of the temperamental datapump API.
This apparently simple check can get a bit interesting, especially if you’re on a Linux server…

For what follows, I’ll be using the DATA_PUMP_DIR directory object. To check where this is pointing to…

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

DIRECTORY_PATH
--------------------------------------------------------------------------------
/u01/app/oracle/admin/XE/dpdump/

The owner of the functions I’ll be creating will need to have READ privileges granted directly to them on this directory…

select privilege
from user_tab_privs
where table_name = 'DATA_PUMP_DIR'
/

PRIVILEGE
----------------------------------------
READ

SQL> 

If the user does not have this privilege then you can grant it (connecting as sysdba) with the following:

grant read on directory data_pump_dir to user
/

…where user is the name of the schema in which you are going to create the function.

Now to create a file in this directory so that we can test for its existence…

sudo su oracle
[sudo] password for mike: 

touch /u01/app/oracle/admin/XE/dpdump/test.txt
ls -l  /u01/app/oracle/admin/XE/dpdump/test.txt
-rw-r--r-- 1 oracle dba 0 Jul  3 18:08 /u01/app/oracle/admin/XE/dpdump/test.txt

In order to check for the existence of this file from within PL/SQL, we have a couple of options…

UTL_FILE

The UTL_FILE.FGETATTR procedure retrieves details of a file, including whether or not it exists…

set serveroutput on size unlimited
declare

    l_filename varchar2(4000) := 'test.txt';
    l_exists boolean;
    l_length number;
    l_bsize number;
begin
    utl_file.fgetattr
    (
        location => 'DATA_PUMP_DIR',
        filename => l_filename, 
        fexists => l_exists,
        file_length => l_length,
        block_size => l_bsize
    );
    if l_exists then
        dbms_output.put_line( l_filename ||' exists in DATA_PUMP_DIR : ');
        dbms_output.put_line( 'Length : '||l_length);
        dbms_output.put_line( 'Block Size : '||l_bsize);
    else
        dbms_output.put_line('File does not exist in DATA_PUMP_DIR');
    end if;
end;
/

Run this and we get :

test.txt exists in DATA_PUMP_DIR :
Length : 0
Block Size : 4096

PL/SQL procedure successfully completed.

That’s handy. Let’s put it into a function…

create or replace function file_exists_fn
(
    i_dir in all_directories.directory_name%type,
    i_filename in varchar2
)
    return varchar2
is

    l_exists boolean;
    l_length number;
    l_block_size number;
    
    l_return varchar2(4000);
    
begin
    utl_file.fgetattr
    (
        location => upper(i_dir),
        filename => i_filename,
        fexists => l_exists,
        file_length => l_length,
        block_size => l_block_size
    );
    if l_exists then
        l_return := i_filename||' in '||upper(i_dir)||' - Length : '||l_length||' - Block Size : '||l_block_size;
    else
        l_return := i_filename||' does not exist in '||upper(i_dir);
    end if;
    
    return l_return;
end;
/

Now let’s see what happens with a Symbolic Link…

touch /home/mike/symlink.txt

sudo su oracle
[sudo] password for mike: 

ln -s /home/mike/symlink.txt /u01/app/oracle/admin/XE/dpdump/symlink.txt
ls -l /u01/app/oracle/admin/XE/dpdump/symlink.txt
lrwxrwxrwx 1 oracle dba 22 Jul  3 18:29 /u01/app/oracle/admin/XE/dpdump/symlink.txt -> /home/mike/symlink.txt

If we now call our function to find symlink.txt in DATA_PUMP_DIR…

select file_exists_fn('DATA_PUMP_DIR', 'symlink.txt')
from dual
/

FILE_EXISTS_FN('DATA_PUMP_DIR','SYMLINK.TXT')
--------------------------------------------------------------------------------
symlink.txt does not exist in DATA_PUMP_DIR

SQL> 

It is at this point that I realise that I really should have read the manual, which states that, for UTL_FILE: “neither hard nor symbolic links are supported.”

So, if we’re to handle links, a different approach is required…

The DBMS_LOB approach

The DBMS_LOB package has a FILEEXISTS function which looks like it could come in handy here…

set serveroutput on size unlimited
declare

    l_filename varchar2(4000) := 'symlink.txt';
    l_loc bfile;
begin
    l_loc := bfilename('DATA_PUMP_DIR', l_filename);
    if dbms_lob.fileexists(l_loc) = 1 then
        dbms_output.put_line( l_filename||' exists');
    else
        dbms_output.put_line('File not found');
    end if;
end;
/

symlink.txt exists 

PL/SQL procedure successfully completed.

That’s better. After amending the function…

create or replace function file_exists_fn
(
    i_dir in all_directories.directory_name%type,
    i_filename in varchar2
)
    return varchar2
is

    l_loc bfile;
    l_return varchar2(4000);

begin
    l_loc := bfilename(upper(i_dir), i_filename);
    if dbms_lob.fileexists(l_loc) = 1 then
        l_return :=  i_filename||' exists in '||upper(i_dir);
    else
        l_return := 'File '||i_filename||' not found';
    end if;
    return l_return;
end;
/

…we can see that this also works just fine for conventional files…

select file_exists_fn('DATA_PUMP_DIR', 'test.txt')
from dual
/

FILE_EXISTS_FN('DATA_PUMP_DIR','TEST.TXT')
--------------------------------------------------------------------------------
test.txt exists in DATA_PUMP_DIR

SQL> 

Let’s check that it works for hard links as well…

touch /home/mike/hardlink.txt
chmod a+rw /home/mike/hardlink.txt
sudo su oracle
[sudo] password for mike: 

cd /u01/app/oracle/admin/XE/dpdump/
ln /home/mike/hardlink.txt hardlink.txt
ls -l hardlink.txt
-rw-rw-rw- 2 mike mike 0 Jul  3 18:50 hardlink.txt

And the test….

select file_exists_fn('DATA_PUMP_DIR', 'hardlink.txt')
from dual
/

FILE_EXISTS_FN('DATA_PUMP_DIR','HARDLINK.TXT')
--------------------------------------------------------------------------------
hardlink.txt exists in DATA_PUMP_DIR

SQL> 

So, if you want to minimise the prospect of muttering “I’m not your mother, you know!” to your database, then the DBMS_LOB approach would seem to be the way to go.
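
As an illustration, here is a minimal sketch of how the function might guard a DataPump import. The dump file name and the FULL import mode are made up for the example, so adjust them to your own job:

declare
    l_handle number;
begin
    -- don't let DBMS_DATAPUMP loose unless the dump file is actually there
    if file_exists_fn('DATA_PUMP_DIR', 'my_export.dmp') like '%not found%' then
        raise_application_error(-20900, 'Dump file missing - not starting the DataPump job');
    end if;

    l_handle := dbms_datapump.open( operation => 'IMPORT', job_mode => 'FULL');
    dbms_datapump.add_file
    (
        handle => l_handle,
        filename => 'my_export.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file
    );
    dbms_datapump.start_job(l_handle);
    dbms_datapump.detach(l_handle);
end;
/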


Filed under: Linux, Oracle, PL/SQL Tagged: dbms_lob.fileexists, hard link, symbolic link, utl_file.fgetattr

Kscope15 Impressions

Oracle AppsLab - Mon, 2015-07-06 10:34

As per Jake’s post, we got to spend a few days in Florida to support the Scavenger Hunt that we created for the Kscope15 conference.  Since it ran pretty smoothly, we were able to attend a few sessions and mingle with the attendees and speakers. Here are my impressions of the event.


This was my first time at Kscope.  Jake hyped it up as a not-to-miss conference for Oracle developers, and despite my high expectations of the event, it did not disappoint.  The actual conference started Sunday, but we arrived Saturday to set up everything for the Scavenger Hunt, dot a few i’s and cross some t’s.

We also ran a quick training session for the organizers helping with the administration of the Scavenger Hunt, and later that night we started actually registering players for the hunt.  We signed up about 100 people on the first evening.  Registration continued Sunday morning and we picked up about 50 more players for a grand total of 150, not bad for our first Scavenger Hunt.


The number of sessions was a bit overwhelming, so I decided to focus on the Database Development and Application Express tracks and picked a few sessions from those.  The first one I attended was called “JSON and Oracle: A Powerful Combination,” where Dan McGhan (@dmcghan) from Oracle explained how to produce JSON from data in an Oracle Database, how to consume JSON in the Oracle Database, and even how to use it in queries.

It turns out that Oracle 12.1.0.2 has some new, really cool features to work with JSON so be sure to check those out.  Interestingly, our Scavenger Hunt backend is using some of these techniques, and we got some great tips from Dan on how to improve what we were doing. So thanks for that Dan!

Next I went to “A Primer on Web Components in APEX” presented by my countryman Dimitri Gielis (@dgielis).  In this session, Dimitri demonstrated how you can easily integrate Web Components into an APEX application.  He showed an impressive demo of a camera component that took a picture right from the web application and stored it on the database.  He also demoed a component that integrated voice control into an APEX application, this allowed him to “ask” the database for a row and it would retrieve that row and show it on the screen, very cool stuff.

That night also featured the infamous “APEX Open Mic” where anybody can walk up to the mic and get five minutes to show off what they’ve built with APEX, no judging, no winners or losers, just sharing with the community. And I must say, some really impressive applications were shown, not the least of which was one by Ed Jones (@edhjones) from Oracle, who managed to create a Minecraft-like game based on Oracle Social Network (OSN) data where treasure chests in the game represent OSN conversations. Opening the chest opens the conversation in OSN. Be sure to check out his video!

The next day, I attended two more sessions, one by our very own Noel Portugal (@noelportugal) and our Group Vice President, Jeremy Ashley (@jrwashley). I am sure they will tell you all about it through this channel or another, so I am leaving that one for them.


The other session was called “An Introduction to JavaScript Apps on the Oracle Database,” presented by Dan McGhan.  Dan demonstrated how you can use Node.js to enhance your APEX application with, among other things, WebSocket functionality, something not natively offered by APEX.  Here I also learned that Oracle 12c has a feature that allows you to “listen” for particular changes in the database and then broadcast these changes to interested parties (Node.js and then WebSockets in this case); this is for sure something that we are going to be using in some of our future demos.

The third day was Hands-On day and I attended two more sessions, first “Intro to Oracle REST Data Services” by Kris Rice (@krisrice) from Oracle, and then “Soup-to-Nuts of Building APEX Applications” by David Peake (@orcl_dpeake) from Oracle.

In the first one we were introduced to ORDS, a feature that allows you to create REST services straight on top of the Database, no middle tier required!  I’ve seen this before in MySQL, but I did not know you could also do this with an Oracle DB. Again, this is a super powerful feature that we will be using for sure in future projects.

The second, two-hour session was a walk-through of a full-fledged APEX application from start to finish by the always entertaining David Peake.  I must admit that by that time I was pretty much done, and I left the session halfway through building my application. However, Raymond (@yuhuaxie) managed to sit through the whole thing, so maybe he can give some comments on this session.

All I can say is that APEX 5.0 made it extremely easy to get started and build a nice Web Application.

And that was KScope15 in a nutshell for me.  It was an awesome, exhausting experience, and I hope I can be there again in 2016.

Cheers,

Mark.

Don’t call it test

Laurent Schneider - Mon, 2015-07-06 09:00

There are quite a few names to avoid in your scripts. Even if they are not reserved words, keep away!

I’ll start with test


cd $HOME/bin
vi test
  echo hello world
chmod +x test
./test
  hello world

The problem is that it may break your other scripts


$ ssh localhost test 1 = 2 && echo WHAT???
hello world
WHAT???

And it may break sooner or later, depending on your OS / version / default shell / default path / others.

There are quite a few filenames you should not use, like test, date, time, hostname, mail, view, touch, sort and make. The command type lists some of those as a reserved word, shell builtin, tracked alias, or shell keyword. But again, it is not consistent across Unix flavors.


$ uname -sr; type date
SunOS 5.10
date is /usr/bin/date
$ uname -sr; type date
Linux 2.6
date is a tracked alias for /bin/date

Your sysadmin may also alias things for colors and safety in the common profile: for instance vi, ls, rm. But if it annoys you, then use \vi instead of vi.
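
To spot a clash before it bites, type -a shows every resolution of a name rather than just the first one. On a typical Linux bash setup with a rogue script in $HOME/bin, you might see something like:

$ type -a test
test is a shell builtin
test is /home/user/bin/test
test is /usr/bin/test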

So, What Kind of Person Are You?

Pythian Group - Mon, 2015-07-06 08:48

It’s my not-so-humble opinion that it takes a special kind of ‘someone’ to work in a successful and innovative collective such as Pythian.  We’re a diverse team of thought-leaders, technology forecasters, technical prodigies and individual contributors.  When we look for people to join our global company we’re looking for people who want to see that their work really matters…that they matter.  We have truly discerning tastes when it comes to who gets to have “Pythian” in their email signature – you have to love data and value what it does for people.

Oh.  And you have to like people (we’re funny like that).

Our global team is brimming with top talent dedicated to building something larger than themselves.  We haven’t gotten this far by playing it safe.  We play it smart.  We’re strategic.  We have a vision to be the most trusted and admired technology services organization in the world….

….And we’re looking for fantastic people to join us.   In order to take your career to the next level at Pythian, you have to be able to:

Lend a hand – There are times when it’s easier to allocate blame but in the end, the problem still exists.  It may not be ‘your fault’ but it can become everyone’s problem.  In situations where there isn’t a lot of time for advanced planning it’s the people who take steps towards a solution that will make the greatest (and often most favorable) impact.

Play to your strengths – Maybe you’re a whiz with numbers or an I.T. genius.  Perhaps your level of organization is outstanding or you have incredible leadership skills. Play to what energizes you.  Cultivate it, infuse your work with it and success will follow.

Lean into the unknown – Opportunity is often found in the things we never knew existed.  Many of the talented people that I’ve come to know at Pythian can dive fearlessly into projects and own them.   If they don’t know how to do something, they learn it and they learn how to do it well.  That’s just the way it’s done.

Embrace diversity – We believe that every employee that works with us is deserving of dignity and respect.

Be approachable – Typically there’s a good mix of personalities in any successful company.  While introverts seem to be at the helm of hands-on I.T. work, extroverts also contribute significantly to getting things done.  Regardless of which way your personality leans, always be approachable.  A positive disposition is often contagious.

Put your best face forward – Remember that the skill and professionalism that you demonstrate every day will inevitably become your business card.  Maya Angelou once said, “People will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Do you think you can picture yourself here? Discover more about what it’s like to be part of the Pythian team.  You just might be our kind of people!

The post So, What Kind of Person Are You? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Possible Truncation Attack? logged in #em12c nodemanager.log file

DBASolved - Mon, 2015-07-06 08:38

Recently I’ve come across issues with restarting Oracle Enterprise Manager and seeing messages in the nodemanager.log. The message that I’m referring to is (followed by a java trace stack):

<Jul 2, 2015 11:46:11 PM> <WARNING> Uncaught exception in server handlerjavax.net.ssl.SSLException: 
Inbound closed before receiving peer's close_notify: possible truncation attack? 
javax.net.ssl.SSLException: 
Inbound closed before receiving peer's close_notify: possible truncation attack?

What is interesting about this message is the panic that may come over a person when they see the word “attack”. The first time I saw this message, I was working on a client site and I was distressed because I was worried about an “attack” on EM. After some investigation, I found that this message is a bit misleading. So, what was the cause of the message?

The “possible truncation attack” is due to the IP address of the host where the OMS runs having changed. Here in my test environment, I recently upgraded my wireless router, which affected my whole network. The router upgrade changed all the addresses on the network. When OEM was initially installed, the host had an address of 192.168.65.67; after the upgrade the address changed to 192.168.65.7. Not a big deal; how to fix it though?

In the case of my test lab, I needed to change the /etc/hosts file to ensure that the correct IP address was picked up. In the enterprise, your local DNS needs to be updated along with the /etc/hosts file. On startup, OEM uses the system resolver, which consults /etc/hosts and DNS when resolving host-to-IP mappings; the order of preference is controlled by the hosts line in /etc/nsswitch.conf.
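
For example (the hostname below is a made-up placeholder - yours will differ), the fix amounts to nothing more than correcting the stale entry:

# /etc/hosts - stale entry from before the router upgrade
192.168.65.67   oem12c.example.com   oem12c

# /etc/hosts - corrected to match the new address
192.168.65.7    oem12c.example.com   oem12c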

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

SQL vs. PL/SQL

Jonathan Lewis - Mon, 2015-07-06 03:23

Which piece of code will be faster (clue – the table in question has no indexes):

Option 1 – pure SQL


update join1 set
        data = data||'#'
where   key_1=500
and     key_2=23851
and     key_3=57012
and     key_4=521
and     key_6=1
and     key_7=23352
;

Option 2 – a silly PL/SQL row by row approach:


declare
        type rowid_type is table of urowid index by binary_integer;
        tab_rowid           rowid_type;  

        lv_rows_updated     number :=0;  

        cursor my_cursor is
                select  rowid rid
                from    join1
                where   key_1=500
                and     key_2=23851
                and     key_3=57012
                and     key_4=521
                and     key_6=1
                and     key_7=23352
        ;

begin
        open my_cursor;

        -- We know that the number of rows to be updated is very small
        fetch my_cursor bulk collect into tab_rowid limit 10000;

        forall lv_row in tab_rowid.first .. tab_rowid.last
             update join1 set data = data||'#' where  rowid = tab_rowid(lv_row);

        lv_rows_updated := sql%rowcount;
        close my_cursor;
end;
/

It’s a trick question, of course, and although the automatic response from any DBA-type is likely to be “the SQL”, the correct answer is (as so often) “it depends”.

This question appeared as a problem on the OTN database forum a few days ago. In its original form it asked why a select statement should be much faster than a select for update or an update – even though the volume identified and updated was very small (just one row in 100M). The note then went on to show that using PL/SQL to select the rowids of the target rows then doing the bulk update by rowid was faster than the basic SQL update. The answer didn’t spring to mind immediately; but fortunately someone asked for some run-time statistics (v$sesstat) and the supplied statistics told me what was going on.

Conveniently the OP gave us the code to recreate the test case – all 100M rows of it; I cut this back to 16M rows (ca. 1.5GB of disc space), and then ran the tests with my db_cache_size set to 256MB (another clue). I got similar results to the OP – not so dramatic, but the PL/SQL ran faster than the SQL and the difference was due to an obvious change in the CPU usage.

If you haven’t guessed from the clue in the 256MB db_cache_size (which means the table is more than 5 times the size of the cache), the answer is “serial direct path reads”. For a sufficiently large table (and that’s not easy to define – start here and follow a few links) it’s fairly common knowledge that from 11g a tablescan can use a serial direct path read, and that’s what the PL/SQL was doing to select the required rowids. However, here’s a detail that’s not often mentioned: an update has to take place in public where everyone can see it, so when Oracle executed the simple SQL update or select for update statement it had to scan the table through the buffer cache. Pulling all those blocks into the buffer cache, grabbing latches to link them to the right cache buffers chains, pinning them, then unpinning them uses a lot of CPU – which isn’t needed for the direct path read. The PL/SQL with its pure select used far less CPU than the basic SQL with its update/select for update, and because the OP had a very high-powered machine with plenty of CPU and loads of (disc-)caching effects all over the place the difference in CPU time was extremely visible as a fraction of the total DB time.
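
If you want to check this on your own system, a quick sanity test is to query your session's statistics after each run and see whether the direct read figures have moved. A sketch (statistic names may vary slightly by version):

select sn.name, ms.value
from   v$statname sn, v$mystat ms
where  ms.statistic# = sn.statistic#
and    sn.name in ('physical reads direct', 'table scans (direct read)')
;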

This was, inevitably, a very special case where a little detail became a significant fraction of the workload. The OP scanned 100M rows to update 1 row (in 7 – 13 seconds!). This doesn’t sound like a strategy you would normally want to adopt for frequent use; and for occasional use we might be happy to use the slower (13 second) approach to avoid the coding requirement of the fast (7 second) solution.

Footnote:

It’s worth pointing out that the PL/SQL strategy is not safe. In the few seconds between the select statement starting and the row being identified and updated by rowid it’s possible that another session could have updated (or deleted) the row. In the former case the update statement is now updating a row which doesn’t match the specification; in the latter case the code will raise an exception.

We can make the PL/SQL safer by including the original predicates in the update statement – but that still leaves the question of what the code should do if the select statement finds a row and the update fails to update it. Should it, perhaps, assume that there is still a row in the table that needs an update and re-run (using up all the time you saved by adopting a PL/SQL solution)?
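
For what it's worth, a sketch of that "safer" variant - repeating the original predicates alongside the rowid and then checking the rowcount - might look like this:

forall lv_row in tab_rowid.first .. tab_rowid.last
    update join1 set data = data||'#'
    where  rowid = tab_rowid(lv_row)
    and    key_1=500 and key_2=23851 and key_3=57012
    and    key_4=521 and key_6=1     and key_7=23352;

if sql%rowcount != tab_rowid.count then
    -- a row changed (or vanished) between the select and the update;
    -- decide here whether to re-run the whole exercise
    null;
end if;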

 

 

 


Why A Brand New Index Might Benefit From An Immediate Coalesce (One Slip)

Richard Foote - Mon, 2015-07-06 01:58
A recent question on the OTN Forums, Reg: Index – Gathering Statistics vs. Rebuild, got me thinking about a scenario not unlike the one raised in the question, where a newly populated index might immediately benefit from a coalesce. I’ve previously discussed some of the pertinent concepts, such as how index rebuilds can make indexes bigger, not smaller […]
Categories: DBA Blogs

Transport Tablespace using RMAN Backupsets in #Oracle 12c

The Oracle Instructor - Mon, 2015-07-06 01:29

Using backupsets for Transportable Tablespaces reduces the volume of data you need to ship to the destination database. See how that works:

RMAN TTS on the source database

The tablespace is made READ ONLY before the new BACKUP FOR TRANSPORT command is done. At this point, you can also convert the platform and the endian format if required. Then on the destination site:

RMAN TTS on the destination database

The FOREIGN keyword indicates that this doesn’t use a backup taken at the destination. Practical example:

 

[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jul 6 08:36:30 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select * from v$tablespace;

       TS# NAME                           INC BIG FLA ENC     CON_ID
---------- ------------------------------ --- --- --- --- ----------
         0 SYSTEM                         YES NO  YES              0
         1 SYSAUX                         YES NO  YES              0
         2 UNDOTBS1                       YES NO  YES              0
         3 TEMP                           NO  NO  YES              0
         4 USERS                          YES NO  YES              0
         5 TBS1                           YES NO  YES              0

6 rows selected.

SQL> select table_name,owner from dba_tables where tablespace_name='TBS1';

TABLE_NAME
-------------------- 
OWNER
--------------------
T
ADAM


SQL> alter tablespace tbs1 read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@uhesse ~]$ rman target sys/oracle@prima

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 6 08:37:28 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2113606181)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name PRIMA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/prima/system01.dbf
2    244      SYSAUX               NO      /u01/app/oracle/oradata/prima/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/prima/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/prima/users01.dbf
5    100      TBS1                 NO      /u01/app/oracle/oradata/prima/tbs1.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/prima/temp01.dbt

RMAN> host 'mkdir /tmp/stage';

host command complete

RMAN> configure device type disk backup type to compressed backupset;

old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters are successfully stored

RMAN> backup for transport format '/tmp/stage/tbs1.bkset'
      datapump format '/tmp/stage/tbs1.dmp'
      tablespace tbs1;

Starting backup at 06-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_PRIMA_yvym":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TRANSPORT_EXP_PRIMA_yvym" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_PRIMA_yvym is:
   EXPDP>   /u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_PRIMA_25997.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace TBS1:
   EXPDP>   /u01/app/oracle/oradata/prima/tbs1.dbf
   EXPDP> Job "SYS"."TRANSPORT_EXP_PRIMA_yvym" successfully completed at Mon Jul 6 08:39:50 2015 elapsed 0 00:00:26
Export completed

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/prima/tbs1.dbf
channel ORA_DISK_1: starting piece 1 at 06-JUL-15
channel ORA_DISK_1: finished piece 1 at 06-JUL-15
piece handle=/tmp/stage/tbs1.bkset tag=TAG20150706T083917 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_PRIMA_25997.dmp
channel ORA_DISK_1: starting piece 1 at 06-JUL-15
channel ORA_DISK_1: finished piece 1 at 06-JUL-15
piece handle=/tmp/stage/tbs1.dmp tag=TAG20150706T083917 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 06-JUL-15

RMAN> alter tablespace tbs1 read write;

Statement processed

RMAN> exit


Recovery Manager complete.
[oracle@uhesse ~]$ ls -rtl /tmp/stage
total 5608
-rw-r-----. 1 oracle oinstall 5578752 Jul  6 08:39 tbs1.bkset
-rw-r-----. 1 oracle oinstall  163840 Jul  6 08:39 tbs1.dmp
[oracle@uhesse ~]$ rman target sys/oracle@sekunda

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 6 08:40:49 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SEKUNDA (DBID=3356258651)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name SEKUNDA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/sekunda/system01.dbf
2    249      SYSAUX               NO      /u01/app/oracle/oradata/sekunda/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/sekunda/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/sekunda/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/sekunda/temp01.dbt

RMAN> restore foreign tablespace tbs1
      format '/u01/app/oracle/oradata/sekunda/tbs1.dbf'
      from backupset '/tmp/stage/tbs1.bkset'
      dump file from backupset '/tmp/stage/tbs1.dmp';

Starting restore at 06-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace TBS1
channel ORA_DISK_1: reading from backup piece /tmp/stage/tbs1.bkset
channel ORA_DISK_1: restoring foreign file 5 to /u01/app/oracle/oradata/sekunda/tbs1.dbf
channel ORA_DISK_1: foreign piece handle=/tmp/stage/tbs1.bkset
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_SEKUNDA_85631.dmp
channel ORA_DISK_1: reading from backup piece /tmp/stage/tbs1.dmp
channel ORA_DISK_1: foreign piece handle=/tmp/stage/tbs1.dmp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_SEKUNDA_ppol" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_SEKUNDA_ppol":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> Job "SYS"."TSPITR_IMP_SEKUNDA_ppol" successfully completed at Mon Jul 6 08:42:51 2015 elapsed 0 00:00:20
Import completed

Finished restore at 06-JUL-15

RMAN> report schema;

Report of database schema for database with db_unique_name SEKUNDA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/sekunda/system01.dbf
2    249      SYSAUX               NO      /u01/app/oracle/oradata/sekunda/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/sekunda/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/sekunda/users01.dbf
5    100      TBS1                 NO      /u01/app/oracle/oradata/sekunda/tbs1.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/sekunda/temp01.dbt

RMAN> alter tablespace tbs1 read write;

Statement processed

RMAN> select count(*) from adam.t;

  COUNT(*)
----------
   1000000
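
To recap the end-to-end procedure, here is the essential command sequence as a minimal sketch, using the same names and paths as in the listing above (as of 12c, the ALTER TABLESPACE statements can be run from SQL*Plus or directly from RMAN):

-- On the source database (PRIMA): freeze the tablespace, then create
-- the transportable backup set plus the Data Pump metadata dump.
ALTER TABLESPACE tbs1 READ ONLY;

BACKUP FOR TRANSPORT FORMAT '/tmp/stage/tbs1.bkset'
  DATAPUMP FORMAT '/tmp/stage/tbs1.dmp'
  TABLESPACE tbs1;

ALTER TABLESPACE tbs1 READ WRITE;

-- On the destination database (SEKUNDA): restore the foreign tablespace,
-- which also imports the metadata, then open it read write.
RESTORE FOREIGN TABLESPACE tbs1
  FORMAT '/u01/app/oracle/oradata/sekunda/tbs1.dbf'
  FROM BACKUPSET '/tmp/stage/tbs1.bkset'
  DUMP FILE FROM BACKUPSET '/tmp/stage/tbs1.dmp';

ALTER TABLESPACE tbs1 READ WRITE;

Note that the owning schema (ADAM here) must already exist in the destination database, or the metadata import will fail.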

Hope you find it useful :-)


Tagged: 12c New Features, PracticalGuide, RMAN
Categories: DBA Blogs

Upgrade Cloud Control 12cR4 to 12cR5

Tim Hall - Mon, 2015-07-06 00:32

A couple of weeks ago I wrote a post about doing a Cloud Control 12cR5 installation and said I would be testing the upgrade from 12cR4. I’ve now done that.

The upgrade documentation is quite extensive, and the prerequisites will differ depending on the database and Cloud Control versions you are starting from, so this is in no way a “recommended” upgrade procedure. Each upgrade needs to be approached on a case-by-case basis. It’s just meant to give a flavour of what is involved.

Suffice it to say, it worked fine for me. :)

Cheers

Tim…


Avoiding the Inbound Monkey

Anthony Shorten - Sun, 2015-07-05 23:32

This is the first in a series of articles offering implementation advice on optimizing the design and configuration of Oracle Utilities products.

Early in my career, my mentor at the time suggested that I expand my knowledge outside the technical area. The idea was that non-technical techniques would augment my technical knowledge. He suggested a series of books and articles that would expand my thinking. Today I treasure those books and articles and regularly reread them to reinforce my skills.

Recently I was chatting to customers about optimizing their interface designs using a technique typically called "Responsibility Led Design". The principle is basically that each participant in an interface has distinct responsibilities for the data interchanged, and it is important to make sure designs take this into account. This reminded me of one of my favorite books, "The One Minute Manager Meets The Monkey" by Ken Blanchard, William Oncken Jr. and Hal Burrows. I even have a copy of the audio version, which is both informative and very entertaining. The book was based on a very popular Harvard Business Review article entitled "Management Time: Who's Got the Monkey?" and expands on that original idea.

To paraphrase the article, a monkey is a task that is not your responsibility but that somehow gets assigned to you. The term for this is the monkey "jumping on your back", or simply a "monkey on your back". This epitomizes the concept of responsibility.

So what has this got to do with design, or even with Oracle Utilities products, you might ask?

One of the key design areas for any implementation is sending data INTO the Oracle Utilities products. These are inbound interfaces, for obvious reasons. In every interface there is a source application and a target application. The responsibility of the source application is to send valid data to the target application for processing. Now, one of the problems I see with implementations is when the source application sends invalid data to the target. There are two choices in this case:

  • Send back the invalid request - This means that if the data transferred from the source is invalid for the target, then the target should reject the data and ask the source to resend it. Most implementations use various techniques to achieve this. This keeps the target clean of invalid data and ensures the source corrects its data before sending it off again. This is what I call correct behavior (a sketch of this pattern follows the list).
  • Accept the invalid request (usually in a staging area) and correct it within the target for reprocessing - This means the data is accepted by the target, regardless of the error, and corrected within the target application to complete the processing.
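
As a concrete illustration of the first choice, here is a minimal PL/SQL sketch. The staging table (inbound_staging) and validation function (validate_payload) are hypothetical placeholders, not the actual Oracle Utilities schema or API:

-- Minimal sketch with hypothetical names: validate pending inbound records
-- and either release them for processing or reject them back to the source.
BEGIN
  FOR rec IN (SELECT id, payload
                FROM inbound_staging
               WHERE status = 'PENDING') LOOP
    IF validate_payload(rec.payload) THEN
      UPDATE inbound_staging SET status = 'READY'    WHERE id = rec.id;
    ELSE
      -- Reject: the source application must correct and resend the data;
      -- the monkey stays on the source's back.
      UPDATE inbound_staging SET status = 'REJECTED' WHERE id = rec.id;
    END IF;
  END LOOP;
  COMMIT;
END;
/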

More and more I am seeing implementations taking the latter design philosophy. This is not efficient, as the responsibility for data cleansing (the monkey in this context) has jumped onto the back of the target application. At this point, the source application has no responsibility for cleaning its own data and no real incentive to ever send clean data to the target, as the target now has the monkey firmly on its back. This has consequences for the target application, as it can increase the resources (human and non-human) needed to correct data errors from the source application. Some of the customers I chatted to found that, while the volume of these transactions was initially low, the same errors kept being sent, and over time the cumulative effect of the data cleansing on the target started to get out of control. Typically, at this point, customers ask for advice on how to reduce the impact.

In an Oracle Utilities product world, this exhibits itself as a large number of interface To Do entries to manage, as well as staging records and the additional storage they consume. The latter is quite important, as implementations typically forget to remove completed transactions from the staging area once they have been applied. The products ship special purge jobs to remove completed staged transactions, and we recently added Information Lifecycle Management (ILM) support for staging records; a sketch of such a cleanup appears below.
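
As an illustration only, using the same hypothetical inbound_staging table as above (the real product staging structures and columns differ), a periodic cleanup of applied transactions might look like this:

-- Hypothetical columns: status, completion_dttm.
-- Remove staged transactions that completed more than 30 days ago.
DELETE FROM inbound_staging
 WHERE status = 'COMPLETED'
   AND completion_dttm < SYSDATE - 30;

COMMIT;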

My advice to these customers is:

  • Make sure that you send invalid transactions back to the source application. This will ensure the source application maximizes the quality of its data and will hopefully prevent common transaction errors from recurring. In other words, the monkey does not jump from the source to the target.
  • If you choose to let the monkey jump onto the target's back, for any reason, then use staging tables and make sure they are cleaned up to minimize the impact. Monitor the error rates and volumes, and ensure the source application is informed so the error trends can be corrected (see the monitoring sketch below).
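
To make that monitoring concrete, a simple trend query over the same hypothetical staging table might be:

-- Count inbound errors by source system and error type so that recurring
-- errors can be fed back to the source application for correction.
SELECT source_system, error_code, COUNT(*) AS error_count
  FROM inbound_staging
 WHERE status = 'ERROR'
 GROUP BY source_system, error_code
 ORDER BY error_count DESC;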

In short, avoid the monkey in your inbound transactions. This will keep responsibilities where they belong and ensure that the resources you allocate to both your source and target applications are used efficiently.