Feed aggregator

How to measure Oracle index fragmentation

Yann Neuhaus - Mon, 2014-10-13 14:39

At Oracle Open World 2014 (or rather at Oaktable World), Chris Antognini presented 'Indexes: Structure, Splits and Free Space Management Internals'. It's nothing new, but it is still something that is not always well understood: how index space is managed, block splits, fragmentation, coalesce and rebuilds. Kyle Hailey has made a video of it available here.
For me, it is the occasion to share the script I use to check whether an index is fragmented or not.

First, forget about those 'analyze index validate structure' commands that lock the table, and about DEL_LEAF_ROWS, which counts only the deletion flags that are transient anyway. The problem is not the amount of free space. The problem is where that free space is. If you insert again into the same range of values, that space will be reused. Wasted space occurs only when a lot of rows were deleted in a range where you will not insert again. For example, when you purge old ORDERS, the index on ORDER_DATE - or on an ORDER_ID coming from a sequence - will be affected. Note that the problem occurs only for sparse purges, because completely emptied blocks are reclaimed when needed and can get rows from another range of values.

I have a script that shows the number of rows per block, as well as used and free space per block, and aggregates that by range of values.

First, let's create a table with a date and an index on it:

SQL> create table DEMOTABLE as select sysdate-900+rownum/1000 order_date,decode(mod(rownum,100),0,'N','Y') delivered , dbms_random.string('U',16) cust_id
  2  from (select * from dual connect by level<=1000000);

Table created.

SQL> create index DEMOINDEX on DEMOTABLE(order_date);

Index created.

My script shows 10 buckets with their begin and end values, and for each of them the average number of rows per block and the free space:

SQL> @index_fragmentation 

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
24-APR-12 -> 02-AUG-12        377        7163          11        266                        
03-AUG-12 -> 11-NOV-12        377        7163          11        266                        
11-NOV-12 -> 19-FEB-13        377        7163          11        266                        
19-FEB-13 -> 30-MAY-13        377        7163          11        265                        
30-MAY-13 -> 07-SEP-13        377        7163          11        265                        
07-SEP-13 -> 16-DEC-13        377        7163          11        265                        
16-DEC-13 -> 26-MAR-14        377        7163          11        265                        
26-MAR-14 -> 03-JUL-14        377        7163          11        265                        
04-JUL-14 -> 11-OCT-14        377        7163          11        265                        
12-OCT-14 -> 19-JAN-15        376        7150          11        265                        

Note that the script reads everything (it can do a sample, but here it is 100%) - not exactly the table though, only the index. It counts the index leaf blocks with the undocumented function sys_op_lbid(), which Oracle uses internally to estimate the clustering factor.
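To give an idea of the core of the technique - this is only a simplified sketch written for this post, not the actual script from the zip, and the object_id literal and the index hint are placeholders to replace with your own index - the rows-per-leaf-block part looks roughly like this:

select leaf_block, count(*) rows_per_block,
       min(order_date) min_value, max(order_date) max_value
from (
  -- 12345 stands for the OBJECT_ID of DEMOINDEX in DBA_OBJECTS
  select /*+ index(t DEMOINDEX) */
         sys_op_lbid(12345,'L',t.rowid) leaf_block, order_date
  from DEMOTABLE t
  where order_date is not null  -- only rows that have an index entry
  )
group by leaf_block
order by min(order_date);

The real script then groups those leaf blocks into the 10 value ranges and adds the used and free space estimates shown above.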

So here I have no fragmentation. All blocks have about 377 rows and no free space. This is because I inserted the rows in increasing order, so the so-called '90-10' block split occurred.

Let's see what I get if I delete most of the rows (the delivered ones) dated before 01-JAN-2014:

SQL> delete from DEMOTABLE where order_date < date '2014-01-01' and delivered='Y';

SQL> @index_fragmentation 

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
25-APR-12 -> 02-AUG-12          4          72          99        266 oooo                   
03-AUG-12 -> 11-NOV-12          4          72          99        266 oooo                   
11-NOV-12 -> 19-FEB-13          4          72          99        266 oooo                   
19-FEB-13 -> 30-MAY-13          4          72          99        265 oooo                   
30-MAY-13 -> 07-SEP-13          4          72          99        265 oooo                   
07-SEP-13 -> 16-DEC-13          4          72          99        265 oooo                   
16-DEC-13 -> 26-MAR-14          4          72          99        265 oooo                   
26-MAR-14 -> 03-JUL-14          4          72          99        265 oooo                   
04-JUL-14 -> 11-OCT-14         46         870          89        265 oooo                   
12-OCT-14 -> 19-JAN-15        376        7150          11        265                        

I have the same buckets and the same number of blocks. But the blocks in the range below 01-JAN-2014 now have only 4 rows each and a lot of free space. This is exactly what I want to detect: I can check whether that free space will be reused.

Here I know I will not enter any orders with a date in the past, so those blocks will never receive another insert. I can reclaim that free space with a COALESCE:

SQL> alter index DEMOINDEX coalesce;

Index altered.
SQL> @index_fragmentation 

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                      
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
25-APR-12 -> 03-OCT-14        358        6809          15         32                        
03-OCT-14 -> 15-OCT-14        377        7163          11         32                        
15-OCT-14 -> 27-OCT-14        377        7163          11         32                        
27-OCT-14 -> 08-NOV-14        377        7163          11         32                        
08-NOV-14 -> 20-NOV-14        377        7163          11         32                        
20-NOV-14 -> 02-DEC-14        377        7163          11         32                        
02-DEC-14 -> 14-DEC-14        377        7163          11         32                        
14-DEC-14 -> 26-DEC-14        377        7163          11         32                        
27-DEC-14 -> 07-JAN-15        377        7163          11         32                        
08-JAN-15 -> 19-JAN-15        371        7056          12         32                        

I still have 10 buckets because that is defined in my script, but each bucket now has fewer rows. I've defragmented the blocks and reclaimed the free blocks.

Time to share the script now. Here it is: index_fragmentation.zip

The script is quite ugly. It's SQL generated by PL/SQL. It's generated because it has to select the index columns. And because I don't want it to be too large, it is neither indented nor commented. However, if you run it with set serveroutput on you will see the generated query.

How to use it? Just change the owner, table name, and index name. It reads the whole index, so if you have a very large index you may want to change the sample size.

Color Analysis of Flags – Patterns and symbols – Visualizations and Dashboards

Nilesh Jethwa - Mon, 2014-10-13 13:54

Recently, here at InfoCaptor we started a small research on the subject of flags. We wanted to answer certain questions like what are the most frequently used colors across all country flags, what are the different patterns etc.

Read more Color and Pattern analysis on Flags of Countries – Simple visualization but interesting data


Oracle Critical Patch Update October 2014 - Massive Patch

Just when you thought the Oracle Database world was getting safer, Oracle will be releasing fixes for 32 database security bugs on Tuesday, October 14th. This is in stark contrast to the previous twenty-five quarters, where the high was 16 database bugs and the average was 8.2 database bugs per quarter. For the previous two years, the most database bugs fixed in a single quarter was six.

In addition to the 32 database security bugs, there are a total of 155 security bugs fixed in 44 different products.

Here is a brief analysis of the pre-release announcement for the upcoming October 2014 Oracle Critical Patch Update (CPU).

Oracle Database

  • There are 32 database vulnerabilities; only one is remotely exploitable without authentication, and four apply only to client-side installations.
  • Since at least one database vulnerability has a CVSS 2.0 metric of 9.0 (critical for a database vulnerability), this is a fairly important CPU due to severity and volume of fixes.
  • The remotely exploitable without authentication bug is likely in Application Express (APEX).  Any organizations running APEX externally on the Internet should look to apply the relevant patches immediately.  To patch APEX, the newest version must be installed, which requires appropriate testing and upgrading of applications.
  • There are four vulnerabilities applicable to client-side-only installations, and most of them are likely in JDBC.
  • Core RDBMS and PL/SQL are listed as patched components, so most likely there are critical security vulnerabilities in all database implementations.

Oracle Fusion Middleware

  • There are 17 new Oracle Fusion Middleware vulnerabilities, 13 of which are remotely exploitable without authentication and the highest CVSS score being 7.5.
  • Various Fusion Middleware products are listed as vulnerable, so you should carefully review this CPU to determine the exact impact to your environment.
  • The core WebLogic Server is listed as a patched component, therefore, most likely all Fusion Middleware customers will have to apply the patch.

Oracle E-Business Suite 11i and R12

  • There are nine new Oracle E-Business Suite 11i and R12 vulnerabilities, seven of which are remotely exploitable without authentication.  Many of these are in core Oracle EBS components such as Oracle Applications Framework (OAF) and Application Object Library (AOL/FND).  Even though the maximum CVSS score is 5.0, most of these vulnerabilities should be considered high risk.
  • All DMZ implementations of Oracle EBS should carefully review the CPU to determine if their environment is vulnerable.  As all Oracle EBS CPU patches are now cumulative, the CPU patch should be prioritized or mitigating controls, such as AppDefend, should be implemented.

Planning Impact

  • We anticipate this quarter's CPU to be higher risk than most and should be prioritized.  Based on the patched components, this may be a higher than average risk CPU for all Oracle database environments.
  • As with all previous CPUs, this quarter's security patches should be deemed critical and you should adhere to the established procedures and timing used for previous CPUs.
  • For Oracle E-Business Suite customers, DMZ implementations may have to apply this quarter's patch faster than previous quarters due to the number and severity of bugs.
Tags: Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Here We Grow Again

Oracle AppsLab - Mon, 2014-10-13 12:18

Cheesy title aside, the AppsLab (@theappslab) is growing again, and this time, we’re branching out into new territory.

As part of the Oracle Applications User Experience (@usableapps) team, we regularly work with interaction designers, information architects and researchers, all of whom are pivotal to ensuring that what we build is what users want.

Makes sense, right?

So, we’re joining forces with the Emerging Interactions team within OAUX to formalize a collaboration that has been ongoing for a while now. In fact, if you read here, you’ll already recognize some of the voices, specifically John Cartan and Joyce Ohgi, who have authored posts for us.

For privacy reasons (read, because Jake is lazy), I won’t name the entire team, but I’m encouraging them to add their thoughts to this space, which could use a little variety. Semi-related, Noel (@noelportugal) was on a mission earlier this week to add content here and even rebrand this old blog. That seems to have run its course quickly.

One final note, another author has also joined the fold, Mark Vilrokx (@mvilrokx); Mark brings a long and decorated history of development experience with him.

So, welcome everyone to the AppsLab team.

Memory

Jonathan Lewis - Mon, 2014-10-13 10:24

On a client site recently, experimenting with a T5-2 – fortunately a test system – we decided to restart an instance with a larger SGA. It had been 100GB, but with 1TB of memory and 256 threads (2 sockets, 16 cores per socket, 8 threads per core) it seemed reasonable to crank this up to 400GB for the work we wanted to do.

It took about 15 minutes for the instance to start; worse, it took 10 to 15 seconds for a command-line call to SQL*Plus on the server to get a response; worse still, if I ran a simple “ps -ef” to check what processes were running the output started to appear almost instantly but stopped after about 3 lines and hung for about 10 to 15 seconds before continuing. The fact that the first process name to show after the “hang” was one of the Oracle background processes was a bit of hint, though.

Using truss on both the SQL*Plus call and on the ps call, I found that almost all the elapsed time was spent in a call to shmat() (shared memory attach); a quick check with “ipcs -ma” told me (as you might guess) that the chunk of shared memory identified by truss was one of the chunks allocated to Oracle’s SGA. Using pmap on the pmon process to take a closer look at the memory, I found that it consisted of a few hundred pages sized at 256MB and a lot of pages sized at 8KB; this was a little strange since the alert log had told me that the instance was allocating about 1,600 memory pages of 256MB (i.e. 400GB) and 3 pages of 8KB – clearly a case of good intentions failing.

It wasn’t obvious what my next steps should be – so I bounced the case off the Oak Table … and got the advice to reboot the machine. (What! – it’s not my Windows PC, it’s a T5-2.) The suggestion worked: the instance came up in a few seconds, with shared memory consisting of a few 2GB pages, a fair number of 256MB pages, and a few pages of other sizes (including 8KB, 64KB and 2MB).

There was a little more to the advice than just rebooting, of course; and there was an explanation that fitted the circumstances. The machine was using ZFS and, in the absence of a set limit, the file system cache had at one point managed to acquire 896 GB of memory. In fact when we first tried to start the instance with a 400GB SGA Oracle couldn’t start up at all until the system administrator had reduced the filesystem cache and freed up most of the memory; even then so much of the memory had been allocated originally in 8KB pages that Oracle had made a complete mess of building a 400GB memory map.

I hadn’t passed all these details to the Oak Table but the justification for the suggested course of action (which came from Tanel Poder) sounded like a perfect description of what had been happening up to that point. In total his advice was:

  • limit the ZFS ARC cache (with two possible strategies suggested)
  • use sga_target instead of memory_target (to avoid a similar problem on memory resize operations) - a sketch of this change follows the list
  • start the instance immediately after the reboot
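For illustration only, here is a rough sketch of what the second bullet means in practice; the sizes are examples, not the client's actual settings or a recommendation:

alter system set memory_target=0 scope=spfile;
alter system set memory_max_target=0 scope=spfile;
alter system set sga_target=400G scope=spfile;
alter system set pga_aggregate_target=50G scope=spfile;
-- a restart is needed for the spfile-only changes to take effect
shutdown immediate
startup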

Maxim: Sometimes the solution you produce after careful analysis of the facts looks exactly like the solution you produce when you can’t think of anything else to do.


Digital Learning – LVC: It’s the attitude, stupid!

The Oracle Instructor - Mon, 2014-10-13 06:33

The single most important factor for successful digital learning is the attitude of both the instructor and the attendees towards the course format. Delivering countless Live Virtual Classes (LVCs) for Oracle University made me realize that. There are technical prerequisites, of course: a reliable and fast network connection and a good headset are mandatory, or else the participant is doomed from the start. Other prerequisites are the same as for traditional courses: good course material, a working lab environment for hands-on practices and, last but not least, knowledgeable instructors. For that part, notice that we have the very same courseware, lab environments and instructors for LVCs as for our classroom courses at Oracle University education centers. The major difference is in your head :-)

Delivering my first couple of LVCs, I felt quite uncomfortable with that new format. Accordingly, my performance was not as good as usual. Meanwhile, I consider the LVC format totally adequate for my courses, and that attitude enables me to deliver them with the same performance as my classroom courses. Actually, they are even better to some degree: I always struggle to produce clean sketches with readable handwriting on a whiteboard. Now look at this MS Paint sketch from one of my Data Guard LVCs:

Data Guard Real-Time Apply

Attendees get all my sketches by email afterwards if they like.

In short: Because I’m happy delivering through LVC today, I’m now able to do it with high quality. The attitude defines the outcome.

Did you ever have a teacher in school that you just disliked for some reason? It was hard to learn anything from that teacher, right? Even if that person was competent.

So this is also true on the side of the attendee: The attitude defines the outcome. If you take an LVC thinking “This cannot work!”, chances are that you are right just because of your mindset. When you attend an LVC with an open mind – even after some initial trouble because you need to familiarize yourself with the learning platform and the way things are presented there – it is much more likely that you will benefit from it. You may even like it better than classroom courses because you can attend from home without the time and expenses it takes to travel :-)

Some common objections against LVC I have heard from customers and my usual responses:

An LVC doesn't deliver the same amount of interaction as a classroom course!

That is not necessarily so: You are in a small group (mostly less than 10) that is constantly in an audio conference. Unmute yourself and say anything you like, just like in a classroom. Additionally, you have a chatbox available. This is sometimes extremely helpful, especially with non-native speakers in the class :-) You can easily exchange email addresses using the chatbox as well and stay in touch even after the LVC.

I have no appropriate working place to attend an LVC!

You have no appropriate working place at all, then, for something that requires a certain amount of concentration. Talk to your manager about it – maybe there is something like a quiet room available during the LVC.

I cannot keep up my attention when staring at the computer screen the whole day!

Of course not, that is why we have breaks and practices in between the lessons.

Finally, I would love to hear about your thoughts and experiences with online courses! What is your attitude towards Digital Learning?


Tagged: Digital Learning, LVC
Categories: DBA Blogs

PLS-00201 in stored procedures

Laurent Schneider - Mon, 2014-10-13 03:36

When you grant table access thru a role, you cannot use that role in a stored procedure or view.


create role r;

create user u1 identified by ***;
grant create procedure, create session to u1;

create user u2 identified by ***;
grant create procedure, create session, r to u2;

conn u1/***
create procedure u1.p1 is begin null; end; 
/

grant execute on u1.p1 to r;

conn u2/***

create procedure u2.p2 is begin u1.p1; end; 
/

sho err procedure u2.p2

Errors for PROCEDURE U2.P2:

L/COL ERROR
----- -------------------------------------------
1/26  PL/SQL: Statement ignored
1/26  PLS-201: identifier U1.P1 must be declared

However, if I run it in an anonymous block, it works:


declare
  procedure p2 is 
  begin 
    u1.p1; 
  end; 
begin
  p2; 
end; 
/

PL/SQL procedure successfully completed.

But this only works when my role is active. If my role is no longer active, then it obviously fails.


set role none;

declare 
  procedure p2 is 
  begin 
    u1.p1; 
  end; 
begin 
  p2; 
end; 
/
ERROR at line 1:
ORA-06550: line 4, column 5:
PLS-00201: identifier 'U1.P1' must be declared
ORA-06550: line 4, column 5:
PL/SQL: Statement ignored

It is all written in the doc,

All roles are disabled in any named PL/SQL block (stored procedure, function, or trigger) that executes with definer’s rights

I knew the behavior but not the reason behind it. Thanks to Bryn for bringing me so much knowledge on PL/SQL.
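By the way, the usual way around this is a direct grant to the user: direct object privileges, unlike roles, remain visible inside named PL/SQL units. A minimal sketch of that fix:

conn u1/***
grant execute on u1.p1 to u2;

conn u2/***
alter procedure u2.p2 compile;
sho err procedure u2.p2

With the direct grant in place, U2.P2 compiles without the PLS-00201 error.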

#OOW14, #DB12c, #BikeB4OOW(!) at 14:45 today, #CON8269: "Doing PL/SQL from SQL: Correctness and Performance" http://t.co/A9BKqrKygv

— Bryn Llewellyn (@BrynLite) September 29, 2014

Context for Cloudera

DBMS2 - Mon, 2014-10-13 02:02

Hadoop World/Strata is this week, so of course my clients at Cloudera will have a bunch of announcements. Without front-running those, I think it might be interesting to review the current state of the Cloudera product line. Details may be found on the Cloudera product comparison page. Examining those details helps, I think, with understanding where Cloudera does and doesn’t place sales and marketing focus, which given Cloudera’s Hadoop market stature is in my opinion an interesting thing to analyze.

So far as I can tell (and there may be some errors in this, as Cloudera is not always accurate in explaining the fine details):

  • CDH (Cloudera Distribution … Hadoop) contains a lot of Apache open source code.
  • Cloudera has a much longer list of Apache projects that it thinks comprise “Core Hadoop” than, say, Hortonworks does.
    • Specifically, that list currently is: Hadoop, Flume, HCatalog, Hive, Hue, Mahout, Oozie, Pig, Sentry, Sqoop, Whirr, ZooKeeper.
    • In addition to those projects, CDH also includes HBase, Impala, Spark and Cloudera Search.
  • Cloudera Manager is closed-source code, much of which is free to use. (I.e., “free like beer” but not “free like speech”.)
  • Cloudera Navigator is closed-source code that you have to pay for (free trials and the like excepted).
  • Cloudera Express is Cloudera’s favorite free subscription offering. It combines CDH with the free part of Cloudera Manager. Note: Cloudera Express was previously called Cloudera Standard, and that terminology is still reflected in parts of Cloudera’s website.
  • Cloudera Enterprise is the umbrella name for Cloudera’s three favorite paid offerings.
  • Cloudera Enterprise Basic Edition contains:
    • All the code in CDH and Cloudera Manager, and I guess Accumulo code as well.
    • Commercial licenses for all that code.
    • A license key to use the entirety of Cloudera Manager, not just the free part.
    • Support for the “Core Hadoop” part of CDH.
    • Support for Cloudera Manager. Note: Cloudera is lazy about saying this explicitly, but it seems obvious.
    • The code for Cloudera Navigator, but that’s moot, as the corresponding license key for Cloudera Navigator is not part of the package.
  • Cloudera Enterprise Data Hub Edition contains:
    • Everything in Cloudera Basic Edition.
    • A license key for Cloudera Navigator.
    • Support for all of HBase, Accumulo, Impala, Spark, Cloudera Search and Cloudera Navigator.
  • Cloudera Enterprise Flex Edition contains everything in Cloudera Basic Edition, plus support for one of the extras in Data Hub Edition.

In analyzing all this, I’m focused on two particular aspects:

  • The “zero, one, many” system for defining the editions of Cloudera Enterprise.
  • The use of “Data Hub” as a general marketing term.

Given its role as a highly influential yet still small “platform” vendor in a competitive open source market, Cloudera even more than most vendors faces the dilemma:

  • Cloudera wants customers to adopt its views as to which Hadoop-related technologies they should use.
  • However, Cloudera doesn’t want to be in the position of trying to ram some particular unwanted package down a customer’s throat.

The Flex/Data Hub packaging fits great with that juggling act, because Cloudera — and hence also Cloudera salespeople — get paid exactly as much when customers pick 2 Flex options as when they use all 5-6. If you prefer Cassandra or MongoDB to HBase, Cloudera is fine with that. Ditto if you prefer CitusDB or Vertica or Teradata Hadapt to Impala. Thus Cloudera can avoid a lot of religious wars, even if it can’t entirely escape Hortonworks’ “More open source than thou” positioning.

Meanwhile, so far as I can tell, Cloudera currently bets on the “Enterprise Data Hub” as its core proposition, as evidenced by that term being baked into the name of Cloudera’s most comprehensive and expensive offering. Notes on the EDH start:

  • Cloudera also portrays “enterprise data hub” as an architectural/reference architecture concept.
  • “Enterprise data hub” doesn’t really mean anything very different from “data lake” + “data refinery”; Cloudera just thinks it sounds more important. Indeed, Cloudera claims that the other terms are dismissive or disparaging, at least in some usages.

Cloudera’s long-term dream is clearly to make Hadoop the central data platform for an enterprise, while RDBMS fill more niche (or of course also legacy) roles. I don’t think that will ever happen, because I don’t think there really will be one central data platform in the future, any more than there has been in the past. As I wrote last year on appliances, clusters and clouds,

Ceteris paribus, fewer clusters are better than more of them. But all things are not equal, and it’s not reasonable to try to reduce your clusters to one — not even if that one is administered with splendid efficiency by low-cost workers, in a low-cost building, drawing low-cost electric power, in a low-cost part of the world.

and earlier in the same post

… these are not persuasive reasons to put everything on a SINGLE cluster or cloud. They could as easily lead you to have your VMware cluster and your Exadata rack and your Hadoop cluster and your NoSQL cluster and your object storage OpenStack cluster — among others — all while participating in several different public clouds as well.

One system is not going to be optimal for all computing purposes.

Categories: Other

Workaround for ADF 12c Choice List Blank Selection Issue

Andrejus Baranovski - Sun, 2014-10-12 09:28
I would like to share a workaround for the Choice List component in ADF 12c. There is a specific issue related to blank selection: as soon as the user selects the blank entry in the choice list, the value change listener for that list starts to be invoked every time any other element is selected in the table. This is quite annoying and could lead to unexpected results, especially if you depend on logic implemented in the value change listener.

The workaround is implemented in the sample application - ADFChoiceList12cApp.zip. First, let's take a look at how it works without the workaround applied. Imagine you have a table and one of the columns implements a choice list. Go and select the blank entry for a couple of rows:


Navigate between rows and do a couple of clicks - you should notice messages in the log about the value change listener being invoked. This is quite strange; it should not call the value change listener each time, only the first time after the selection is changed in the list:


The choice list UI component is set with AutoSubmit=true and assigned a value change listener method:


The choice list is configured with blank item selection in ADF BC:


To apply the workaround, uncheck blank item selection in ADF BC - make sure the 'Include No Selection Item' checkbox is unchecked:


On the UI side, provide any value for the UnselectedLabel property of the choice list component; this will generate the property in the source (alternatively, you could just type it manually directly in the source):


Make sure the UnselectedLabel property is set to blank; this will generate a blank item at runtime (it works much better than the one enabled from ADF BC):


The user can navigate through the table and the value change listener is not triggered anymore (besides the one time when the value was actually changed):


Exciting as usual - new features and new bugs :)

Managing your Database in a Zombie Apocalypse

Steve Karam - Sun, 2014-10-12 07:00
Zombie Server Room

Only two things are really certain: network latency over long distances, and the fact that humanity will soon rapidly degenerate into undead brain-eaters.

When that day comes, when the dead are crowding at your door and the windows are busted out and ripped up rotted arms are clawing at the inside of your home, I know what you’ll be thinking: is my database protected?

Don’t worry, my friends. The Oracle Alchemist has you covered. We just need to zombie-proof your DR plan. Let’s get started.

Getting the Power Back

Hopefully you did the smart thing and figured out how much battery and generator power you’d need to survive multiple years of failing power systems due to zombies. I know I did.

However, if you didn’t get this critical task done you may still have some options. Statistics show that the demand for U.S. gasoline was 8.73 million barrels in 2012. That comes out to 23,917.80821917808219 barrels per day of fuel that’s out there just waiting for you to snatch it up. The problem is going to be getting it. You’ll need to load yourself down with lots of weaponry and strike out in a fuel truck a few times a week, which will definitely take away from your database administration time. It’s a smart idea to enable a lot of automation and monitoring to take care of things while you’re out.

Zombie Gas Station

You’re going to need to fight other groups of surviving IT for fuel. This means you’re going to need friends. The way I see it, you have two choices: SysAdmins and Developers. They’re the two groups you work closest with as a DBA, so they’re the most likely to have your back when the dead walk. Start your planning now. If you want to get the developers on your side, tune some queries for them. Seriously, nothing will convince a developer to slice through the brain base of a walker like adding some key indexes when a query goes south during testing. However, if you think the SysAdmins are more likely to fight off rival gangs of resource hogs on the prowl for food and fuel, you can make them feel good by keeping all your filesystems cleaned up and RAM usage at a minimum.

The Problem with Zombies

Remember, the walking dead are tenacious. You remember before the apocalypse when a bunch of reporting users would all log into the database and run huge ad hoc queries against PROD without thinking about what they were doing? That was nothing. Zombies are the real deal. They will tear through a database faster than a multi-terabyte cartesian product. You can deploy outside the box now to increase your chances of having a clone out there somewhere, just in case. If you want that database to survive, you’re going to need standbys. Lots of them.

I’d recommend a hub and spoke configuration. One central production database, at least 5 standby databases. As everybody knows, the chances of a zombie bringing down a database are roughly 89.375%. With 5 standby environments, you can drastically reduce the odds of being left without a standby system. On the plus side, zombies are completely brainless. What this means is that you don’t have to worry about masking or obfuscating your backup data in any way. Even on the off chance one of them kicks off a query (even zombies can figure out SQL Developer), they won’t be able to comprehend your users’ personal data, and with the complete downfall of the dollar it won’t matter if they see any credit information. So rest easy.

When All Else Fails

At some point, the zombies are going to come for you. Sorry, but it’s a statistical fact and there’s not much we can do about that. At that moment, when all hope is lost, you’re really going to need to protect your database because once you become a zombie too there really won’t be anyone left and you won’t be focused on maintaining it anymore; you’ll be focused on acquiring copious amounts of human flesh.

Zombie Server Room

So make your last stand count. You’re a soon-to-be-undead DBA, act like it! Remember how we tune. Eliminate the wait, punch through the bottlenecks, make efficient use of processing power. Don’t get trapped between a rack and a hard place. If you have to play a game of circle-the-Exadata in order to get away, go for it, but don’t let them corner you. And whatever you do, make sure you keep your badge with you. The last thing you need is to hit a door you can’t get through without the proper credentials. Above all else: remember to kick off a backup before they finally take you. I’d recommend having the script ready and running on the console just in case you have to hit a quick key.

Good luck. You’re going to need it.

The post Managing your Database in a Zombie Apocalypse appeared first on Oracle Alchemist.

Latest Oracle allows SELECT without SELECT...FOR UPDATE

Gary Myers - Sat, 2014-10-11 21:11
Digging through a backlog of Oracle blogs, I came across a gem in a presentation from AMIS (on Slideshare). Got to bullet point 5 on slide 63 and boom!

You all know that when you grant SELECT on a table to a user, they can do a SELECT FOR UPDATE, locking records in the table and preventing other updates or deletes. [Some client tools may do that in the background. ]

Well finally Oracle have cottoned on to that too, and there's a lighter-weight "READ" privilege in 12.1.0.2 which won't allow SELECT FOR UPDATE.
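A quick sketch of the difference (the table and grantee names here are made up for illustration):

grant read on scott.emp to report_user;
-- as REPORT_USER:
select * from scott.emp;                 -- allowed
select * from scott.emp for update;      -- fails: READ does not allow row locking

grant select on scott.emp to report_user;
-- as REPORT_USER:
select * from scott.emp for update;      -- allowed again, rows get locked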

This will make DBAs very happy. Actually it won't. The natural state of a DBA is grumpy, at least when in the vicinity of a developer or salesman.


PS. Why would SELECT FOR UPDATE ever be a good idea for a user with no UPDATE privilege ?
If I had to guess, I'd say it went back to a 'pre-read consistency' model when you might use a SELECT FOR UPDATE to try to select data that wasn't being updated.

Partner Webcast Special Edition – Oracle Mobile Application Framework: Developer Challenge

Win up to $6.000 by developing a mobile application with Oracle Mobile Application Framework! Mobile technology has changed the way that we live and work. As mobile interfaces take the lion’s...

We share our skills to maximize your revenue!
Categories: DBA Blogs

grumpy old dba goes to the mountains ( have to ski somewhere )?

Grumpy old DBA - Sat, 2014-10-11 14:47
So in a slight reversal of submitting so many presentation abstracts and getting rejected so many times, Rocky Mountain Training Days 2015 will be featuring me! Wow, pumped - yikes, this should be a ton of fun!

OK, to be totally honest, Maria Colgan is keynoting, not me (shocking news, still trying to get over it).

Rocky Mountain training days 2015

Looks like a top batch of speakers and topics ... I only have to present at the same time as people like Alex Gorbachev / Bill Inmon / Heli Helskyaho ... so the room may be a little sparse, but I will be attempting to bribe people into attending my session, so we will see how that goes.

Thanks RMOUG!

PS on another topic more news soon on our GLOC 2015 conference coming up in May 2015!

Categories: DBA Blogs

Kscope15 Abstract deadline is Wednesday, October 15!

Tim Tow - Sat, 2014-10-11 13:08
Kscope15 abstract submission will close in just a few days and the ODTUG content teams want you!

I am sure there are many of you who have never presented at a conference and may not feel comfortable speaking in public. Well, I have a secret for you: despite the fact that I won the Best Speaker Award at Kscope14, I share your discomfort with speaking. What works for me, however, is that I only speak about topics that I know very well. Do you have some area of expertise that you know really well? If you do, please share it! It will be fun, and all of your peers who learn something new from you will appreciate it!

With that, take the next step and click here to submit a session!
Categories: BI & Warehousing

OCP 12C – SQL Tuning

DBA Scripts and Articles - Fri, 2014-10-10 12:23

What’s new? Oracle 12c introduces a major update called Adaptive Query Optimization, which is based on: adaptive execution plans and adaptive statistics. These two functionalities are used to improve execution plans by using dynamic statistics gathered during the first part of the SQL execution. This allows the optimizer to create more efficient plans than those using [...]

The post OCP 12C – SQL Tuning appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Oracle OpenWorld and JavaOne OTN Interviews - Part 1

OTN TechBlog - Fri, 2014-10-10 11:22
Oracle OpenWorld and JavaOne 2014 are over, and the OTN Community Managers are busy editing TONS of video they took with Oracle experts and community members at the events. This week's best-of will focus on the Java and Architect Community videos posted so far. Next Friday we will focus on the Database and Systems Communities.

Architect Community
From OTN Architect Community Manager Bob Rhubart:

During Oracle OpenWorld 2014 I spent most of my time in the OTN Lounge recording 2 Minute Tech Tips, as presented by a small army of architects and developers. The premise is simple: each person has to beat the clock to deliver an interesting tech tip before the countdown timer runs down. All of the participants were great sports, and putting these videos together involved a lot of laughter. The three videos below are just a sample of the more than 28 Tech Tip videos now available -- and more are on the way!

Click here for more 2 Minute Tech Tip Videos.

Java Community
From OTN Java Sr. Community Manager Tori Wieldt:
At JavaOne 2014 I spent most of my time in the Java Hub recording video. Below is just a sample of what is to come; more is on the way!
  • Java ME and ProSyst Software
    Dr. Dimitar Valtchev, CTO of ProSyst Software, discusses Java ME and working with small devices. Live from JavaOne, by Tori Wieldt of the Oracle Technology Network.
  • IoT Dev Challenge Student Winners: ePot
    JavaOne interview with the student winners of the OTN IoT Developer Challenge.
  • JavaFX and ObjectWheel
    Mike Tallent, CEO of ObjectWheel, discusses using drag-and-drop to create UIs with ObjectWheel. Live from JavaOne, by Tori Wieldt of the Oracle Technology Network.

Click here for more OTN JavaOne Videos.

Did You See Our Work in Steve Miranda’s Keynote?

Oracle AppsLab - Fri, 2014-10-10 09:28

Last week at OpenWorld, a few of our projects were featured in Steve Miranda’s (@stevenrmiranda) keynote session.

Jeremy (@jrwashley) tweeted the evidence.


Debra (@debralilley) noticed too. I wasn’t able to attend the keynote, so I found out thanks to the Usable Apps (@usableapps) Storify, which chronicled “Our OpenWorld 2014 Journey.”

And today, I finally got to see the video, produced by Friend of the ‘Lab, Martin Taylor, who you might remember from other awesome videos like “A Smart Holster for Law Enforcement.”

Noel (@noelportugal) and Anthony (@anthonyslai) both play developers in the short film. Noteworthy: the expression on Noel’s face as he drives the Sphero ball with the Muse brain-sensing headband.

Thanks to Martin for making this video, thanks to Steve for including it in his keynote, and thanks to you for watching it.

What the Internet of Things means for database administration

Chris Foot - Fri, 2014-10-10 08:41

When it comes to storing data produced by Internet-connected devices, relational databases may not make the cut.

The push for NoSQL environments is growing in tandem with the pace at which wearable devices and industrial sensors are proliferating. Database administration services are customizing systems to fit the needs of manufacturers, hospitals, retailers and other companies investing heavily in the Internet of Things.

The rise of the Industrial Internet
General Electric has expressed a considerable amount of interest in the technology, equipping its wind turbines, jet engines, locomotives and other machines to boost efficiency. The goal is to develop smarter implementations that will cumulatively save economic sectors hundreds of billions of dollars.

To achieve this goal, GE is partnering with Cisco, Intel and Verizon to create intelligent factories and public infrastructures, according to InformationWeek. For example, while Cisco's rugged routers will be placed throughout a facility, Intel's chips and servers will support computations. To enable GE's sensors to use these assets, Verizon will provide networking. With this environment, GE's devices will be able to perform predictive maintenance, power optimization and other tasks.

GE isn't preventing organizations from customizing these platforms, either. InformationWeek noted the company is developing Predix, a data aggregation and processing solution that's based on open-source technologies. This enables GE customers to create their own algorithms, code and data sources.

How it affects database administration
As distributed machines communicate with one another, produce native data and draw conclusions of their own, storing such information in SQL databases isn't the best option, noted IoT World contributor Sherly Mendoza. The issue comes down to how these environments store information.

While SQL systems offer consistency and reliability, Mendoza maintained they require a lot of processing power to manage indexes and buffers as well as lock, latch and log data. In addition, although database experts can mitigate this problem, relational databases aren't easy to scale.

The solution lies in non-relational servers, which don't require administrators to assign information to fixed schemas. The result is an architecture composed of simple data structures that can be manipulated and adjusted as professionals require them. Because the data produced by the IoT is primarily unstructured, this makes NoSQL databases the optimal choice.

One key advantage of NoSQL environments is their ability to pull and organize data distributed across multiple servers. This means they can also scale across two machines or more, making hardware capacity less of a concern.

Preparing for the IoT boom requires considerable knowledge of non-relational databases, as well as the staff capable of enhancing these implementations.

The post What the Internet of Things means for database administration appeared first on Remote DBA Experts.

Deploying a Private Cloud at Home — Part 2

Pythian Group - Fri, 2014-10-10 08:34

Today’s blog post is part two of seven in a series dedicated to Deploying Private Cloud at Home, where I will demonstrate how to do basic configuration setup to get started with OpenStack. In my first blog post, I explained why I decided to use OpenStack.

I am using a two-node setup in my environment, but you can still follow these steps and configure everything on a single node. The configuration below reflects my setup; kindly modify it as per your subnet and settings.

  • My home network has subnet of 192.168.1.0/24
  • My home PC, which I am turning into the controller node, has IP 192.168.1.140
  • My KVM hypervisor, which I am turning into the compute node, has IP 192.168.1.142
  1. It is advisable to have DNS set up on your intranet, but in case you don't have it, you need to modify the /etc/hosts file on both the controller and compute nodes so that the OpenStack services can communicate with each other, as below:
    #Controller node
    192.168.1.140 controller
    #Compute node
    192.168.1.142 compute
  2. OpenStack services require a database to store information. You can use any database you are familiar with. I am using MySQL/MariaDB, as I am familiar with it. On the controller node, we will install the MySQL client and server packages, and the Python library.
     yum install -y mysql mysql-server MySQL-python
  3. Enable InnoDB, the UTF-8 character set, and UTF-8 collation by default. To do that, we need to modify /etc/my.cnf and set the following keys under the [mysqld] section:
    default-storage-engine = innodb 
    innodb_file_per_table 
    collation-server = utf8_general_ci 
    init-connect = 'SET NAMES utf8' 
    character-set-server = utf8
  4. Start and enable the MySQL services
    service mysqld start
    chkconfig mysqld on
  5. Finally, set the root password for the MySQL database. If you need further details about configuring the MySQL root password, there are many resources available online (a short sketch follows this list).
  6. On the compute node we need to install the MySQL Python library
    yum install -y MySQL-python
  7. Set up RDO repository on both controller and compute nodes
    yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
  8. I am using CentOS 6.2, so I need to have the EPEL repo as well. This step is not required if you are using a distro other than RHEL, CentOS, Scientific Linux, etc.
    yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
  9. Install OpenStack utilities on both nodes and get started.
    yum install openstack-utils
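Coming back to step 5, here is a minimal sketch of securing the MySQL root account from the mysql command-line client (the password is a placeholder; mysql_secure_installation achieves the same interactively):

-- connected to mysql as root
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyS3cretPass');
-- drop the anonymous accounts and the test database created by default
DELETE FROM mysql.user WHERE User='';
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;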

Stay tuned for the remainder of my series, Deploying a Private Cloud at Home. In part three, we will continue configuring OpenStack services.

Categories: DBA Blogs

SQL Saturday Bulgaria 2014

Pythian Group - Fri, 2014-10-10 08:22

 

This Saturday October 11, I will be speaking at SQL Saturday Bulgaria 2014 in Sofia. It’s my first time in the country and I’m really excited to be part of another SQL Saturday :)

I will be speaking about Buffer Pool Extension, a new feature in SQL Server 2014. If you want to learn a little more about the new SQL Server version, don't hesitate to attend the event. Looking forward to seeing you there!

Categories: DBA Blogs