Oracle Database 12c Release 12.1.0.2 – My First Observations. Licensed Features Usage Concerns – Part II.
In this post you’ll see that I provide a scenario of accidental paid-feature “use.” The key elements of the scenario are: 1) I enabled the feature (by “accident”), but 2) I didn’t actually use the feature because I neither created nor altered any tables.
In Part I of this series I aimed to bring to people’s attention what I see as a significant variation from the norm when it comes to Oracle licensed-option usage triggers and how to prevent them from being triggered. Oracle Database Enterprise Edition supports several separately licensed options such as Real Application Clusters, Partitioning, and so on. A feature like Real Application Clusters is very expensive, but if “accidental usage” of this feature is a worry on an administrator’s mind there is a simple remedy: unlink it. If the bits aren’t in the executable, you’re safe. Is that a convoluted procedure? No. An administrator simply executes make -f ins_rdbms.mk rac_off and then relinks the Oracle executable. Done.
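The unlink procedure mentioned above can be sketched as follows (a minimal sketch: the paths assume a standard Oracle home, and all instances sharing that home must be shut down first):

```shell
# Remove the RAC option from the Oracle binary, then relink it.
# Run as the oracle software owner with ORACLE_HOME set.
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off
make -f ins_rdbms.mk ioracle   # relinks the oracle executable
```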
What about other separately licensed options like Partitioning? As I learned from Paul Bullen, one can use the Oracle-supplied chopt command to remove any chance of using Partitioning if, in fact, one does not want to use Partitioning. I thought chopt might be the solution to the issue of possible, accidental usage of the In-Memory Column Store feature/option. However, I found that chopt, as of this point, does not offer the ability to neutralize the feature, as per the following screenshot.
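For reference, here is roughly how chopt is invoked; the option list shown in the comment is my recollection of the 12.1 tool and should be treated as an assumption:

```shell
# chopt toggles selected options in the Oracle home (instances must be down).
cd $ORACLE_HOME/bin
./chopt disable partitioning

# Usage: chopt {enable|disable} {dm|olap|partitioning|rat}
# Note there is no "inmemory" choice, so chopt cannot neutralize the
# In-Memory Column Store.
```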
There is, as yet, no way I know of to prevent accidental use of the In-Memory Column Store feature/option. Am I just making a mountain out of a molehill? I’ll let you be the judge. And if you side with the folks who do feel this is a mountainous molehill, you’d be in really good company.
Lest folks think that we Oaktable Network Members are a blind, mutual admiration society, allow me to share the rather sizzling feedback I got for raising awareness to this aspect of Oracle Database 12c:
No, I didn’t just want to dismiss this feedback. Instead, I pushed the belt-sander off of my face and read the words a couple of times. The author of this email asserted that I’m conveying misinformation (aka “BS”) and, to fortify that position, pointed out that one must:
- Set a database (instance initialization) parameter.
- Bounce the instance.
- Alter any object to use the feature. I’ll interpret that as a DDL action (e.g., ALTER TABLE, CREATE TABLE).
Even before I read this email I knew these assertions were false. We all make mistakes–this I know! I should point out that, unlike every release of Oracle from 5.1.17 to 11gR2, I was not invited to participate in the Beta for this feature. I think a lot of Oaktable Network members were in the program–perhaps even the author of the above email snippet–but I don’t know that for certain. Had I encountered this during a Beta test I would have raised it to the Beta manager as an issue and maybe, just maybe, the feature behavior might have changed before first customer ship. Why am I blabbering on about the Beta program? Because, evidently, even Oaktable Network members with pre-release experience with this feature do not know what I’m about to show in the remainder of this post.
What Is An Accident?
Better yet, what is an accident, and how full of “BS” must one be to fall prey? Maybe the remainder of the post will answer that rhetorical question. Whether or not it does, in fact, answer the question, I’ll be done with this blog series and move on to the exciting work of performance characterization of this new, incredibly important feature.
Anatomy of a “Stupid Accident.”
Consider a scenario. Let’s say a DBA likes to use the CREATE DATABASE statement to create a database. Imagine that! Let’s pretend for a moment that DBAs can be very busy and operate in chaotic conditions. In the fog of this chaos, a DBA could, conceivably, pull the wrong database instance initialization file (e.g., init.ora or SPFILE) and use it when creating a database. Let’s pretend for a moment I was that busy, overworked DBA, and I’ll show you what happened in the following steps:
- I executed sqlplus from the bash command prompt.
- I directed sqlplus to execute a SQL script called cr_db.sql. Many will recognize this as the simple little create script I supply with SLOB.
- The cr_db.sql script uses a local initialization parameter file called create.ora.
- sqlplus finished creating the database. NOTE: this procedure does not create a single user table.
- After the database was created I connected to the instance and forced the feature usage tracking views to be updated (thanks to Morgan’s Library for that know-how as well…remember, I’m a database platform engineer, not a DBA, so I learn all the time in that space).
- I executed a SQL script to report feature usage of only those features that match a predicate such as LIKE 'In-%'.
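The two scripts boil down to something like the following (a hedged sketch, not the exact features.sql; the dbms_feature_usage_internal call is the undocumented refresh trick referenced above):

```sql
-- Force the feature-usage tracking views to refresh now, rather than
-- waiting for the periodic sampling (run as SYS).
exec dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)

-- Report only the In-Memory related features.
select name, version, detected_usages, currently_used
from   dba_feature_usage_statistics
where  name like 'In-%'
order  by name;
```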
This screenshot shows that the list of three asserted must-happen steps (provided to me by a fellow Oaktable Network member) was not, in fact, the required recipe of doom. The output of the features.sql script proves that I didn’t even need to create a user table to trigger the feature.
The following screen shot shows what the cr_db.sql script does:
The following screenshot shows the scripts I used to update the feature usage tracking views and to report against same:
Stepping on a landmine doesn’t just happen. You have to sort of be on your feet and walking around for that to happen. In the same vein, triggering usage of the separately licensed Oracle Database 12c Release 12.1.0.2 In-Memory Column Store feature/option required me to be “on my feet and walking around” the landmine–as it were. Did I have to jump through hoops and be a raging, bumbling idiot to accidentally trigger usage of this feature? No. Nor, indeed, did I issue a single CREATE TABLE or ALTER TABLE DDL statement. What was my transgression? I simply grabbed the wrong database initialization parameter file from my repository–in the age-old I’m-only-human sort of way these things often happen.
To err to such a degree would certainly not be human, would it?
The following screenshot shows the parameter file I used to prove:
- You do not need to alter parameters and bounce an instance to trigger this feature usage in spite of BS-asserting feedback from experts.
- You don’t even have to create a single application table to trigger this feature usage.
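Since the screenshot of the parameter file cannot be reproduced here, the decisive content amounts to something like this (a hypothetical reconstruction; the values are illustrative, not the actual create.ora):

```ini
# Hypothetical create.ora fragment. The mere presence of a non-zero
# inmemory_size at instance startup is what triggers the feature usage --
# no DDL, no post-creation parameter change, no instance bounce required.
db_name = SLOB
inmemory_size = 300M
```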
This blog thread has made me feel a little like David Litchfield must surely have felt for challenging the Oracle9i-era claims that Oracle Database was impenetrable to database security breaches. We all know how erroneous those claims were. Unbreakable, can’t break it, can’t break in?
Folks, I know we all have our different reasons to be fans of Oracle technology–and, indeed, I am a fan. However, I’m not convinced that unconditional love of a supposedly omnipotent and omniscient god-like idol is all that healthy for the IT ecosystem. So, for that reason alone, I have presented these findings. I hope it makes at least a couple of DBAs aware of how this licensed feature differs from other high-dollar features like Real Application Clusters in exactly what it takes to “use” the feature–and, moreover, how to avoid stepping on a landmine, as it were.
…and now, I’m done with this series.
Filed under: oracle
In a previous post, I described Adaptive Plans. Although I prefer to show plans with the SQL Monitor active HTML format, I had to stick with dbms_xplan for that, because SQL Monitoring did not show all the information about adaptive plans.
This has been fixed in Patchset 1, and I have run the same query to show the new feature.
First, an adaptive plan can be in two states: 'resolving', when all alternatives are still possible, and 'resolved', when the final plan has been chosen. It is resolved once the first execution’s statistics collector has made the decision about the inflection point. We can see the state in the SQL Monitor header:
Here my plan is resolved because the first execution is finished.
The plan with rowsource statistics shows only the current plan, but the 'Plan Note' shows that it is an adaptive plan:
Now we have to go to the 'Plan' tab, which shows the equivalent of dbms_xplan.display_cursor:
Here the format is equivalent to format=>'adaptive'. It’s the 'Full' plan, where all branches are shown but the inactive parts are grayed out. We have here the Statistics Collector after reading DEPARTMENTS, and we have the inactive full-table-scan hash join of EMPLOYEES.
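For comparison, the same full adaptive plan can be pulled from dbms_xplan directly (a minimal sketch; the +ADAPTIVE format modifier is what exposes the inactive branches):

```sql
-- Display the last cursor executed in this session, including the
-- grayed-out (inactive) branches of the adaptive plan.
select *
from   table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST +ADAPTIVE'));
```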
Just choose the 'Final' Plan (or 'Current' if it is not yet resolved) to get only the active part:
I often prefer the tabular format to the graphical one:
We have all the information: the 7 rows from DEPARTMENTS have gone through the STATISTICS COLLECTOR, and the NESTED LOOPS join with index access has been chosen. Note that this is different from the previous post, where the HASH JOIN with full table scan was chosen because the 7 rows were higher than the inflection point.
In my current example, this is because I have system statistics that cost the full table scan higher:
DP: Found point of inflection for NLJ vs. HJ: card = 8.35
This is higher than my 7 rows from DEPARTMENTS.
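The “DP: Found point of inflection” line comes from an optimizer trace; assuming it was captured with the classic 10053 event, the steps look like this (a sketch, not necessarily how the trace above was actually produced):

```sql
-- Enable the optimizer trace, hard-parse the statement, then stop tracing.
-- The "DP:" lines appear in the generated trace file in the diag trace dir.
alter session set events '10053 trace name context forever, level 1';

select /*+ monitor */ distinct DEPARTMENT_NAME from DEPARTMENTS
join EMPLOYEES using(DEPARTMENT_ID)
where DEPARTMENT_NAME like '%ing' and SALARY>20000;

alter session set events '10053 trace name context off';
```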
Here is the whole sqlmon report: sqlmon.zip and how I got it:
alter session set current_schema=HR;
select /*+ monitor */ distinct DEPARTMENT_NAME from DEPARTMENTS
join EMPLOYEES using(DEPARTMENT_ID)
where DEPARTMENT_NAME like '%ing' and SALARY>20000;
alter session set events='emx_control compress_xml=none';
set pagesize 0 linesize 10000 trimspool on serveroutput off long 100000000 longc 100000000 echo off feedback off
select dbms_sqltune.report_sql_monitor(report_level=>'all',type=>'html') from dual;
Note that I used the script exposed here, and I used the emx_control event to get the uncompressed XML, which I got from Tyler Muth.
July 25, 2014
A Guest Post by Meg Bear, Oracle Vice President
Last week, Oracle Social proudly announced the launch of LinkedIn support features for the Oracle Social Relationship Management (SRM) platform, and joined LinkedIn’s Certified Company Page Partners Program. It’s undeniable how valuable LinkedIn is to companies, with its user base of more than 300 million from 200+ countries and territories. Investis, an international digital corporate communications company, says LinkedIn accounts for 64 percent of all visits to corporate websites from social media sites.
In response to the announcement, Alan Lepofsky, VP & Principal Analyst at Constellation Research said, “LinkedIn Company and Showcase pages are an important destination for brands, so at a time where LinkedIn is removing access to their API from almost all partners, this new feature is an excellent addition for Oracle customers.”
From a customer perspective, Oracle Social customers now have the ability to publish, engage, automate, and analyze LinkedIn activities within the SRM platform. Customers can leverage SRM’s proprietary features, including Dynamic Link Tracking and Smart Publishing, as well as existing SRM integrations with Oracle Marketing Cloud (Eloqua), for LinkedIn.
This new capability is in direct response to the field requesting LinkedIn support. Customers have enthusiastically welcomed this new feature by turning it on the same day as the release.
Social is Becoming a B2B Priority
From a business perspective, this integration fills an important need for B2B customers and companies, allowing them to execute a multichannel social strategy on the network that has distinguished itself as the social network for business. The LinkedIn release, coupled with integration with Oracle Marketing Cloud (Eloqua), provides B2B marketers the ability to promote Eloqua landing pages, generate leads, and benefit from digital body language via LinkedIn. I discussed in my Oracle Social Spotlight blog how LinkedIn support solidifies Oracle’s SRM platform “as the clear choice for B2B marketers.” This is the latest chapter in our integration and Oracle Social story.
For more details:
LinkedIn Sales Materials
Talent Management Excellence Essentials is a go-to e-publication that covers talent, performance, and compensation strategies.
Mark Bennett, Director of Product Strategy at Oracle, contributed an article entitled “Workforce Reputation Management: Is social media the vital skill you aren’t tracking?”
He starts with the premise that employees on social media present opportunities, as well as risks for organizations. When their presence is managed well, it can become a rich source of influence, collaboration, and brand recognition.
“Workforce reputation management technology helps organizations discover, mobilize and retain talent by providing insight into the social reputation and influence of the workforce while also helping organizations monitor employee social media policy compliance and mitigate social media risk.”
Bennett says it is a tool that can support HR by helping to harness social media. He cites using it to improve talent processes, such as hiring, training and development, and discovering hidden talent. It can aid in uncovering a more complete picture of an employee’s knowledge, skills and attributes. Workforce reputation technology provides a clearer picture of how a candidate or employee is viewed by peers and the communities he / she works across.
“Social media holds untapped insights about your talent, brand and business, and workforce reputation management can help unlock them,” says Bennett. He asks his readers to think about how much more productive and efficient their organizations could be with this valuable information.
Read the full article on page 15 of the publication. No need to register or log in.
Plenty of runway. “The company estimates there is a $74 billion addressable market for its cloud apps,” Hurd said.
Getting customers on board. It is anticipated that many of the thousands of Oracle on-premises customers will extend their installations by adding cloud modules. Others will migrate completely to the cloud.
Oracle’s cloud army. Hurd reported Oracle has thousands of SaaS sales reps in more than 60 countries and about 19,000 certified consultants.
Fusion taking hold? “The confidence our sales force has in our (Fusion) products now over the past couple of years has leaped exponentially.” That confidence is translating into sales for Fusion.
All ISVs are welcome. “Oracle has 19 data centers around the world. That coverage, combined with its unified technology stack makes a full-fledged move a no-brainer for ISVs.”
Ready for the world. Oracle supports its cloud apps in 33 languages with localizations for more than 100 countries. Hurd said Oracle will continue to expand its data center footprint.
That’s all great, but Hurd isn’t satisfied. “Make no mistake, we’re laser-focused on being No. 1 in the cloud.”
Read the full article, which includes Hurd’s presentation.
Jeb Dasteel, senior vice president and chief customer officer at Oracle, describes the value of user groups this way: “Over and over again, we see that user group members are by far the most active and satisfied customers in our entire customer base. One of the best things that any Oracle customer can do to maximize the value from the investment is to join a user group.”
Sixteen hundred members of OHUG gathered in Las Vegas June 9-13, 2014, to learn, network, and grow. This is a high-energy, relationship-rich conference.
We have two items of interest to share with you from OHUG:
1) Scan the digest of tweets (#ohugglobal) from the conference.
2) Read four mini interviews from Oracle HCM Cloud customers. The questions they were asked included: What solutions are you using? What benefits are you realizing? And what advice do you have for others looking to move their system to the cloud?
- WAXIE Sanitary Supply―Q&A with Melissa Halverson, Benefits & HRIS Manager
- eVerge Group―Q&A with Bob Moser, Senior Director, Oracle HCM Applications
- Sonepar―Q&A with Jim Duran, Training Manager at Sonepar
- National Marrow Donor Program―Q&A with Tiffany Lyons, Senior HR Technology & Process Analyst
How far flung are the people with whom you work? It’s pretty rare in our global workplace to find a team with all members sharing the same location. In this new environment, the talent pool can literally be the whole world, but of course that scenario comes with its own set of challenges, according to Oracle’s Zach Thomas, Vice President of HCM.
The challenges include: How to unify employees? How to attract the best new talent? How to manage the organization’s culture, values, and goals? How to stay compliant with laws and practices in various countries? And how to cultivate high performing teams?
“At the heart of the process is technology — specifically, consumer-designed, intuitive tools available in the cloud and accessible through a gamut of devices,” says Thomas in an article published by Talent Management Magazine entitled “The 7 Rules for Selecting Global HR Technology.”
Here are the seven tenets, minus the intelligent discourse for each that Thomas includes:
- Find, recruit and hire the best talent no matter the location.
- Know each employee like he or she works right next to you.
- Empower social collaboration.
- Prioritize great performance.
- Streamline global payroll.
- Deliver compensation and rewards equitably and competitively.
- Make security and compliance central to HR processes.
Organizations can follow these seven principles to effectively manage global workplaces, and as a result, says Thomas, give them “a competitive advantage by helping pull in the best talent, increase retention, streamline organizational processes, stimulate productive collaboration and avoid legal snafus.”
Read the full article.
A Guest Post by Oracle's Senior Director, Brian Dayton
Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that―a 724% return on investment (ROI) when it implemented these Oracle solutions.
Congratulations Apex IT!
Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent.
- Return on Investment – 724%
- Payback – 2 months
- Average annual benefit – $91,534
- Cost : Benefit Ratio – 1:48
In addition to the ROI and cost metrics the award calls out improvements in Apex IT’s business operations—across both Sales and Marketing teams:
- Improved ability to identify new opportunities and focus sales resources on higher-probability deals
- Reduced administration and manual lead tracking—resulting in more time selling and a net new client increase of 46%
- Increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud’s automation of campaign tracking and nurture programs
- Improved margins with more structured and disciplined sales processes—resulting in more effective deal negotiations
Please join us in congratulating Apex IT on this award and the business improvements it made.
Want More Details?
Don’t take our word for it. Read the full Apex IT ROI Case Study. You also can learn more about Apex IT’s business, including the company’s work with Oracle Sales and Marketing Cloud on behalf of its clients.
Cloud. Big data. Mobile. Social media.
These mega trends in technology have had a profound impact on our lives.
And now according to SVP Ravi Puri from North America Oracle Consulting Services, these trends are starting to converge and will affect us even more. His article, “Cloud, Analytics, Mobile, And Social: Convergence Will Bring Even More Disruption” appeared in Forbes on June 6.
For example, mobile and social are causing huge changes in the business world. Big data and cloud are coming together to help us with deep analytical insights. And much more.
These convergences are causing another wave of disruption, which can drive all kinds of improvements in such things as customer satisfaction, competitive advantage, and growth.
But, according to Puri, companies need to be prepared.
In this article, Puri urges companies to get out in front of the new innovations. He gives good direction on how to do so to accelerate time to value and minimize risk.
The post is a good thought leadership piece to pass on to your customers.
A Guest Post by Kathy Miedema, Oracle Market Research Analyst
Vice President Jeremy Ashley and his Oracle Applications UX team are continually asking: “What’s the best way to deliver simplicity, mobility, and extensibility so users can be even more productive?”
But didn’t we just release the simplified UI for Release 8? Isn’t that UI all about productivity?
Of course it is, but Ashley’s team is making regular, incremental improvements to the user experience.
“Our mobile devices and a need for simplicity are behind this continued evolution of the Oracle user experience design philosophy,” he explains. Ashley describes the newest approach to design as “glance, scan, commit.” And at the heart of that philosophy is the infolet.
In this post, Kathy Miedema, Oracle market research analyst, pulls back the curtain to reveal what’s going on in the Oracle UX research lab. It’s new and exciting. It also reflects the deep knowledge and investment we’re making in our user experience, which will keep us ahead of the competition.
Strategic design philosophy pushes Oracle cloud user experience to lofty new heights.
The rhythm of blog posts regarding database technology has remained consistent throughout the week. A few of those posts have been plucked by this Log Buffer Edition for your pleasure.
Sayan has shared a Standalone sqlplus script for plans comparing.
Gartner Analysis: PeopleSoft Update Manager Delivers Significant Improvements to the Upgrade Tools and Processes.
Timely blackouts, of course, are essential to keeping the numbers up and (more importantly) preventing Target Down notifications from being sent out.
Are you experiencing analytics pain points?
Bug with xmltable, xmlnamespaces and xquery_string specified using bind variable.
SQL Server 2012 introduced columnstore indexes, which can immensely improve the performance of OLAP queries.
Restoring the SQL Server Master Database Even Without a Backup .
There are times when you need to write T-SQL code that creates specific T-SQL code and executes it. When you do this, you are creating dynamic T-SQL code.
A lot of numbers that we use every day, such as bank card numbers, identification numbers, and ISBN codes, have check digits.
SQL-only ETL using a bulk insert into a temporary table (SQL Spackle).
How MariaDB makes Stored Procedures usable.
DBaaS, OpenStack and Trove 101: Introduction to the basics.
MySQL Fabric is a tool included on MySQL Utilities that helps you to manage your MySQL instances.
Showing all available MySQL data types when creating a new table with MySQL for Excel.
Why TokuDB hates Transparent HugePages.
Today is our day. July 25, 2014 marks the 15th annual System Administrator Appreciation Day. On this day we pause and take a moment to forget the impossible tasks, nonexistent budgets, and often unrealistic timelines to say thank you to those people who keep everything working — system administrators.
So much of what has become a part of everyday life, from doing our jobs, to playing games online, shopping, and connecting with friends and family around the world, is only possible due in large part to the tireless efforts of the system administrators who are in the trenches every hour of every day of the year keeping the tubes clear and the packets flowing. The fact that technology has become so commonplace in our lives, and more often than not “just works,” has afforded us the luxury of forgetting (or not even knowing) the immense infrastructure complexity that the system administrator works with to deliver the services we have come to rely on.
SysAdmin Appreciation Day started 15 years ago thanks to Ted Kekatos. According to Wikipedia, “Kekatos was inspired to create the special day by a Hewlett-Packard magazine advertisement in which a system administrator is presented with flowers and fruit-baskets by grateful co-workers as thanks for installing new printers. Kekatos had just installed several of the same model printers at his workplace.” Ever since then, SysAdmin Appreciation Day has been celebrated on the last Friday in July.
At Pythian, I have the privilege of being part of the Enterprise Infrastructure Services group. We are a SysAdmin dream team of the best of the best, from around the globe. Day in and day out, our team is responsible for countless servers, networks, and services that millions of people use every day.
To all my colleagues and to anyone who considers themselves a SysAdmin, regardless of which flavour – thank you, and know that you are truly doing work that matters.
Sample application, tested with the iOS platform - MAFMobileLocalApp_v2.zip. The original version of this application brings a Locations screen, where both City Name and Street Address are displayed:
We are going to customise the original application with MDS, without changing source code directly. The customized application is based on two MDS tip layers - gold and silver level partners. Gold level partners are able to see State Name in addition to the Street Address:
While silver level partners are allowed to see only City Name:
To achieve MDS functionality for a MAF application, you must first define an MDS customisation class; this class must extend the standard MDS CustomizationClass and implement several methods. An important method is getName(): you must define the MDS customisation name, and the same name will be used for customisation (JDeveloper automatically reads this name during MAF application design time):
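A customization class along these lines can be sketched against the standard oracle.mds.cust API (a minimal sketch; the class name, layer name and hard-coded value are illustrative, not the ones from this application):

```java
import oracle.mds.core.MetadataObject;
import oracle.mds.cust.CacheHint;
import oracle.mds.cust.CustomizationClass;
import oracle.mds.cust.RestrictedSession;

// Hypothetical customization class; "PartnershipCC" and the layer name
// "partnership" are illustrative only.
public class PartnershipCC extends CustomizationClass {

    public CacheHint getCacheHint(RestrictedSession sess, MetadataObject mo) {
        // One customization shared by all users of this deployment.
        return CacheHint.ALL_USERS;
    }

    // The name JDeveloper reads at design time; it must match the layer
    // name declared in CustomizationLayerValues.xml.
    public String getName(RestrictedSession sess, MetadataObject mo) {
        return "partnership";
    }

    // The layer value in effect at runtime, e.g. "gold" or "silver";
    // a real class would typically look this up per user or tenant.
    public String[] getValue(RestrictedSession sess, MetadataObject mo) {
        return new String[] { "gold" };
    }
}
```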
The customization class must be set in the adf-config.xml MDS section, in order to be registered for customisation use:
Once the customisation class is registered, we can switch JDeveloper to the Customization Developer mode and start customizing the application:
Make sure the MDS layers are properly configured in the CustomizationLayerValues.xml file. The customization layer must be set to the same name as in the customization class above, and the layer values should specify the different layers supported for customization:
I have defined two layers - gold and silver partnership. An MDS layer can be selected for customisation; we start from Gold:
Locations page is updated to include State Name:
However, the actual change is stored not in the Locations page itself, but in a generated MDS file for the Locations page - it keeps the delta for the changes made in customization mode. This file is generated under the Gold Partnership profile folder:
The State Name addition required an update to the Page Definition file for Locations; there is an extra MDS file created with delta information for the Locations Page Definition:
Next we can customize for the Silver layer - change the layer value:
Here we should keep only City Name and remove Street Address:
This change is reflected in MDS delta for Locations page, stored under Silver Partnership level:
It is a well-known thing and you can even find it on MOS, but I have a slightly simpler script for it, so I want to show a little example.
First of all, we need to start the script on the local database:
SQL> @transactions/global.sql
Enter filters(empty for any)...
Sid :
Globalid mask :
Remote_db mask:

INST_ID SID SERIAL# USERNAME REMOTE_DB REMOTE_DBID TRANS_ID  DIRECTION   GLOBALID                                           EVENT
------- --- ------- -------- --------- ----------- --------- ----------- -------------------------------------------------- ---------------------------
      1 275    4469 XTENDER  BAIKAL     1742630060 8.20.7119 FROM REMOTE 4241494B414C2E63616336656437362E382E32302E37313139 SQL*Net message from client
Then we need to copy the GLOBALID of the session of interest and run the script on the database shown in column REMOTE_DBID, this time specifying the GLOBALID:
SQL> conn sys/syspass@baikal as sysdba
Connected.
======================================================================
======= Connected to SYS@BAIKAL(baikal)(BAIKAL)
======= SID        203
======= SERIAL#    38399
======= SPID       6536
======= DB_VERSION 188.8.131.52.0
======================================================================
SQL> @transactions/global.sql
Enter filters(empty for any)...
Sid :
Globalid mask : 4241494B414C2E63616336656437362E382E32302E37313139
Remote_db mask:

INST_ID SID SERIAL# USERNAME REMOTE_DB REMOTE_DBID TRANS_ID  DIRECTION GLOBALID                                           STATE
------- --- ------- -------- --------- ----------- --------- --------- -------------------------------------------------- --------------------------
      1   9   39637 XTENDER  BAIKAL     1742630060 8.20.7119 TO REMOTE 4241494B414C2E63616336656437362E382E32302E37313139 [ORACLE COORDINATED]ACTIVE
It’s quite simple and fast.
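The global.sql script itself is not reproduced here, but the MOS-documented idea it builds on can be sketched roughly like this (run as SYS; the x$k2gte column names are from memory, so treat the details as assumptions):

```sql
-- Map sessions to the global transactions they participate in.
-- x$k2gte is the fixed table behind distributed (2PC) transaction info:
-- k2gtdses links to the session, k2gtitid_ext carries the global tx id.
select s.sid, s.serial#, s.username, s.event,
       g.k2gtitid_ext as globalid
from   x$k2gte g
       join v$session s on s.saddr = g.k2gtdses;
```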
I have a couple of scripts for comparing plans:
But they have dependencies on other scripts, so I decided to create a standalone script for more convenient use without the need to download other scripts and to set up the sql*plus environment.
I’ve already tested it with Firefox, so you can try it now: http://github.com/xtender/xt_scripts/blob/master/plans/diff_plans_active_standalone.sql
SQL> @plans_active 0ws7ahf1d78qa
SQL> @diff_plans 0ws7ahf1d78qa
*** Diff plans by sql_id. Version with package XT_PLANS.
Usage: @plans/diff_plans2 sqlid [+awr] [-v$sql]

P_AWR           P_VSQL
--------------- ---------------
false           true
Strictly speaking, we can sometimes do it even more easily: it’s quite simple to compare plans without the first column, “ID”, so we can simply compare the output of “select .. from v$sql_plan/v$sql_plan_statistics_all/v$sql_plan_monitor” with any diff tool.
I am a simple person who likes simple things, especially RMAN backup implementation.
I have yet to understand why RMAN backup implementations do not use the configure command; if you have a good explanation, please share.
Examples for RMAN configure command
configure device type disk parallelism 2 backup type to compressed backupset;
configure channel device type disk format '/oradata/backup/%d_%I_%T_%U' maxopenfiles 1;
configure channel 1 device type disk format '/oradata/backup1/%d_%I_%T_%U' maxopenfiles 1;
configure archivelog deletion policy to backed up 2 times to disk;
configure backup optimization on;
Do you know if backup is using parallelism?
Where is the backup to?
Is the backup to tape?
RMAN> show all;

RMAN configuration parameters for database with db_unique_name SAN are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oradata/backup/%d_%F.ctl';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oradata/backup/%d_%I_%T_%U' MAXOPENFILES 1;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/oradata/backup1/%d_%I_%T_%U' MAXOPENFILES 1;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_san.f'; # default

RMAN>
Simple RMAN script.
set echo on;
connect target;
show all;
backup incremental level 0 check logical database filesperset 1 tag "fulldb"
plus archivelog filesperset 8 tag "archivelog";
Simple RMAN run.
$ rman @simple.rman

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Jul 24 11:12:19 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> set echo on;
2> connect target;
3> show all;
4> backup incremental level 0 check logical database filesperset 1 tag "fulldb"
5> plus archivelog filesperset 8 tag "archivelog";
echo set on

connected to target database: SAN (DBID=2792912513)

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name SAN are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oradata/backup/%d_%F.ctl';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oradata/backup/%d_%I_%T_%U' MAXOPENFILES 1;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/oradata/backup1/%d_%I_%T_%U' MAXOPENFILES 1;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_san.f'; # default

Starting backup at 2014-JUL-24 11:12:21
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=108 device type=DISK
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=326 RECID=337 STAMP=853758742
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:12:24
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:12:25
piece handle=/oradata/backup1/SAN_2792912513_20140724_8dpe6koo_1_1 tag=ARCHIVELOG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2014-JUL-24 11:12:25

Starting backup at 2014-JUL-24 11:12:25
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/oradata/SAN/datafile/o1_mf_undotbs1_9oqwsjk6_.dbf
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:12:26
channel ORA_DISK_2: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00008 name=/oradata/SAN/datafile/o1_mf_user_dat_9wvp8s78_.dbf
channel ORA_DISK_2: starting piece 1 at 2014-JUL-24 11:12:26
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:13:01
piece handle=/oradata/backup1/SAN_2792912513_20140724_8epe6koq_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/oradata/SAN/datafile/o1_mf_system_9oqwr5tm_.dbf
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:13:04
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:13:29
piece handle=/oradata/backup1/SAN_2792912513_20140724_8gpe6kpu_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/oradata/SAN/datafile/o1_mf_sysaux_9oqwrv2b_.dbf
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:13:30
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:13:45
piece handle=/oradata/backup1/SAN_2792912513_20140724_8hpe6kqp_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/oradata/SAN/datafile/o1_mf_ggs_data_9or2h3tw_.dbf
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:13:45
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:13:48
piece handle=/oradata/backup1/SAN_2792912513_20140724_8ipe6kr9_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/oradata/SAN/datafile/o1_mf_testing_9rgp1q31_.dbf
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:13:49
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:13:52
piece handle=/oradata/backup1/SAN_2792912513_20140724_8jpe6krc_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 2014-JUL-24 11:14:44
piece handle=/oradata/backup/SAN_2792912513_20140724_8fpe6koq_1_1 tag=FULLDB comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:02:18
Finished backup at 2014-JUL-24 11:14:44

Starting backup at 2014-JUL-24 11:14:44
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=327 RECID=338 STAMP=853758885
channel ORA_DISK_1: starting piece 1 at 2014-JUL-24 11:14:46
channel ORA_DISK_1: finished piece 1 at 2014-JUL-24 11:14:47
piece handle=/oradata/backup1/SAN_2792912513_20140724_8kpe6kt6_1_1 tag=ARCHIVELOG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2014-JUL-24 11:14:47

Starting Control File Autobackup at 2014-JUL-24 11:14:48
piece handle=/oradata/backup/SAN_c-2792912513-20140724-05.ctl comment=NONE
Finished Control File Autobackup at 2014-JUL-24 11:14:55

Recovery Manager complete.

-----

$ ls -l backup*
backup:
total 501172
-rw-r-----. 1 oracle oinstall 505167872 Jul 24 11:14 SAN_2792912513_20140724_8fpe6koq_1_1
-rw-r-----. 1 oracle oinstall   8028160 Jul 24 11:14 SAN_c-2792912513-20140724-05.ctl

backup1:
total 77108
-rw-r-----. 1 oracle oinstall    237056 Jul 24 11:12 SAN_2792912513_20140724_8dpe6koo_1_1
-rw-r-----. 1 oracle oinstall   1236992 Jul 24 11:12 SAN_2792912513_20140724_8epe6koq_1_1
-rw-r-----. 1 oracle oinstall  39452672 Jul 24 11:13 SAN_2792912513_20140724_8gpe6kpu_1_1
-rw-r-----. 1 oracle oinstall  34349056 Jul 24 11:13 SAN_2792912513_20140724_8hpe6kqp_1_1
-rw-r-----. 1 oracle oinstall   2539520 Jul 24 11:13 SAN_2792912513_20140724_8ipe6kr9_1_1
-rw-r-----. 1 oracle oinstall   1073152 Jul 24 11:13 SAN_2792912513_20140724_8jpe6krc_1_1
-rw-r-----. 1 oracle oinstall     67072 Jul 24 11:14 SAN_2792912513_20140724_8kpe6kt6_1_1
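The simple.rman script itself is not shown, but its contents can be reconstructed from the commands echoed at the top of the run. A sketch (the exact layout of the file is an assumption; the commands are taken verbatim from the echo):

```text
-- simple.rman: reconstructed from the echoed session above
set echo on;
connect target;
show all;
backup incremental level 0 check logical database filesperset 1 tag "fulldb"
  plus archivelog filesperset 8 tag "archivelog";
```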
If this does not hit the nail on the head, then I don’t know what will.
Imagine someone, maybe you or me, accidentally deleting archived logs.
RMAN> delete noprompt archivelog all;

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=108 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=20 device type=DISK
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/oradata/SAN/archivelog/arc_845895297_1_326.dbf thread=1 sequence=326
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/oradata/SAN/archivelog/arc_845895297_1_327.dbf thread=1 sequence=327

RMAN>

-----

RMAN> configure archivelog deletion policy to none;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DISK;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
new RMAN configuration parameters are successfully stored

RMAN> delete noprompt archivelog all;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=108 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=20 device type=DISK
List of Archived Log Copies for database with db_unique_name SAN
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
337     1    326     A 2014-JUL-24 11:04:17
        Name: /oradata/SAN/archivelog/arc_845895297_1_326.dbf

338     1    327     A 2014-JUL-24 11:12:21
        Name: /oradata/SAN/archivelog/arc_845895297_1_327.dbf

deleted archived log
archived log file name=/oradata/SAN/archivelog/arc_845895297_1_326.dbf RECID=337 STAMP=853758742
deleted archived log
archived log file name=/oradata/SAN/archivelog/arc_845895297_1_327.dbf RECID=338 STAMP=853758885
Deleted 2 objects

RMAN>
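If you try this experiment yourself, remember to put the guard rail back afterwards. Restoring the deletion policy that protected the logs in the first run would look like this (a sketch, matching the configuration shown above):

```text
-- re-establish the protective deletion policy after the experiment
RMAN> configure archivelog deletion policy to backed up 2 times to disk;
```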
Will you be using CONFIGURE for your next RMAN implementation?
Today I was asked about a strange problem: XMLTable does not return data if the XQuery is specified by a bind variable and the XML data uses XML namespaces:
SQL> var x_path varchar2(100);
SQL> var x_xml varchar2(4000);
SQL> col x format a100;
SQL> begin
  2    :x_path:='/table/tr/td';
  3    :x_xml :=q'[
  4  <table xmlns="http://www.w3.org/tr/html4/">
  5    <tr>
  6      <td>apples</td>
  7      <td>bananas</td>
  8    </tr>
  9  </table>
 10  ]';
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL> select
  2    i, x
  3  from xmltable( xmlnamespaces(default 'http://www.w3.org/tr/html4/'),
  4                 :x_path          -- bind variable
  5                 --'/table/tr/td' -- same value as in the variable "X_PATH"
  6                 passing xmltype(:x_xml)
  7                 columns i for ordinality,
  8                         x xmltype path '.'
  9       );

no rows selected
But if we comment out the bind variable and use the literal XQuery '/table/tr/td' instead, the query returns data:
SQL> select
  2    i, x
  3  from xmltable( xmlnamespaces(default 'http://www.w3.org/tr/html4/'),
  4                 --:x_path       -- bind variable
  5                 '/table/tr/td'  -- same value as in the variable "X_PATH"
  6                 passing xmltype(:x_xml)
  7                 columns i for ordinality,
  8                         x xmltype path '.'
  9       );

         I X
---------- -------------------------------------------------------------------
         1 <td xmlns="http://www.w3.org/tr/html4/">apples</td>
         2 <td xmlns="http://www.w3.org/tr/html4/">bananas</td>

2 rows selected.
The only workaround I found is using a namespace wildcard for each step in the XQuery: '/*:table/*:tr/*:td'.
SQL> exec :x_path:='/*:table/*:tr/*:td'

PL/SQL procedure successfully completed.

SQL> select
  2    i, x
  3  from xmltable( xmlnamespaces(default 'http://www.w3.org/tr/html4/'),
  4                 :x_path  -- bind variable
  5                 passing xmltype(:x_xml)
  6                 columns i for ordinality,
  7                         x xmltype path '.'
  8       );

         I X
---------- -------------------------------------------------------------------
         1 <td xmlns="http://www.w3.org/tr/html4/">apples</td>
         2 <td xmlns="http://www.w3.org/tr/html4/">bananas</td>

2 rows selected.
It’s quite an ugly solution, but I’m not sure whether there is a better one…
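One more avenue that might be worth testing, since Oracle's XQuery implementation accepts a prolog: declare the default element namespace inside the XQuery string itself, so the bind variable carries its own namespace declaration rather than relying on the XMLNAMESPACES clause. A sketch (I have not verified whether the bind-variable code path honors the prolog either):

```sql
-- hypothetical workaround: namespace declared in the XQuery prolog,
-- passed through the same bind variable as before
SQL> exec :x_path := 'declare default element namespace "http://www.w3.org/tr/html4/"; /table/tr/td'

SQL> select
  2    i, x
  3  from xmltable( :x_path
  4                 passing xmltype(:x_xml)
  5                 columns i for ordinality,
  6                         x xmltype path '.'
  7       );
```

If the prolog is honored here, this would avoid the wildcard rewrite; if not, the '/*:table/*:tr/*:td' form above remains the fallback.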
So, back in January, Noel (@noelportugal) took a team of developers to the AT&T Developer Summit Hackathon in Las Vegas.
Although they didn’t win, they built some very cool stuff, combining Google Glass, Philips Hue, the Internet of Things, and possibly a kitchen sink in there somewhere, into what can only be described as a smart holster. You know, for guns.
You read that right. This project was way out of our usual wheelhouse, which is what made it so much fun, or so I’m told.
Friend of the ‘Lab Martin Taylor was kind enough to produce, direct and edit the following video, in which Noel describes and demonstrates the holster’s capabilities.
Did you catch the bit at 3:06? That’s Raymond behind the mask.
Enjoy.

Possibly Related Posts:
- How to Make Your Own Raspberry Pi-Powered Smart Target
- Even More Internet of Things, SmartThings
- Oracle Social Network Developer Challenge Winners
- AppsLab at the Maker Faire
- Fourfecta! Java, IoT, Making and Raspi