Feed aggregator

Internet browsers at the heart of enterprise hacks, says study

Chris Foot - Tue, 2015-02-03 09:47

Which browser are your employees using? Their choices may affect how secure your digital enterprise assets are. 

Microsoft's Internet Explorer is often characterized as being the least secure among Firefox, Chrome and Safari, but is this really the case? What features are indicative of an insecure Web browser? What sort of techniques are hackers using to access databases through Internet browsers? 

The point of infiltration 
According to a study conducted by the Ponemon Institute, and sponsored by Spikes Security, insecure Web browsers caused 55 percent of malware infections over the course of 2014. Both organizations surveyed IT professionals for the report, the majority of whom maintained that their current security tools are incapable of detecting Web-borne malware. 

"The findings of this research reveal that current solutions are not stopping the growth of Web-borne malware," said Ponemon Institute Chairman and Founder Dr. Larry Ponemon, as quoted by Dark Reading. "Almost all IT practitioners in our study agree that their existing security tools are not capable of completely detecting Web-borne malware, and the insecure Web browser is a primary attack vendor. 

The Ponemon Institute and Spikes Security also made the following discoveries: 

  • 69 percent of survey participants maintained that browser-borne malware is more prevalent than it was a year ago. 
  • Nearly half of organizations reported that Web-based malware bypassed their layered firewall defense systems.
  • 38 percent of respondents maintained sandboxing and content analysis engines still allowed Web-borne malware to infect corporate machines. 

Which is the biggest target? 
Dark Reading acknowledged that the number of flaws discovered in Chrome, Firefox, Internet Explorer, Opera and Safari decreased 19 percent in 2014. Google attributed this success to its bug bounty program. Last year, the tech giant paid $1.5 million to researchers who found more than 500 bugs in its Web browser. 

However, Firefox was the most exploited browser at Pwn2Own 2014, a hacking challenge hosted by Hewlett-Packard, according to eWEEK. The open source Web browser possessed four zero-day flaws, all of which were taken advantage of. Since the March 2014 event, Firefox has patched these vulnerabilities. 

Yet it's important to determine which browsers are the most popular among professionals and consumers alike, as this will dictate hackers' priorities. It makes more sense for a cybercriminal to target a heavily used browser than to attack one that is sparingly used. W3schools.com regarded Chrome as the most frequently used browser, so it's likely that hackers are focusing their efforts on it. 

A Primer on Oracle Documents Cloud Service Administration - Part 1

WebCenter Team - Tue, 2015-02-03 08:42

Author: Thyaga Vasudevan, Senior Director, Oracle WebCenter Product Management

At OpenWorld last year, Oracle announced Oracle Documents Cloud Service - an enterprise-grade, secure, cloud-based file sharing and sync solution. The November edition of Oracle Fusion Middleware Newsletter ran a feature on it, giving a general overview of Oracle Documents Cloud Service (DOCS). In addition to strong security features and deeper integration with on-premise content management, one of the best features of DOCS is how quickly you can get it up and running. On this blog we will, time and again, dig deeper into DOCS use cases and features/functionality. And if there are topics you would like to see covered, please do let us know by leaving a comment.

As an Oracle Documents Cloud Service administrator, you want to be confident your organization is getting the most from the service. In this three-part series, I will walk you through five simple tips to get started and get your users on-boarded to Oracle Documents Cloud Service quickly and easily.

My post today focuses on:

Tip 1: Adding Users to Oracle Documents Cloud Service

There are two ways to provision users to the Documents Cloud Service:

Option 1. Adding a Single User at a Time

  1. Sign in to the My Services application.
    a. Please note that you can navigate to My Services by signing in to Oracle Cloud from https://cloud.oracle.com/home.
    b. You can also access the dashboard directly by using the My Services URL, which is dependent on the data center for your service, for example: https://myservices.us2.oraclecloud.com/mycloud
  2. In the My Services application, click the Users tab in the upper right.
  3. Click Add, and then provide a first and last name and an email address for the new user, and assign the “Oracle Documents Cloud Service User” role.

Option 2. Bulk Import Users

You can also add users to the service by importing a set of users from a file. Click the Import button. In the Import Users dialog, select a file from your local system that contains attributes as detailed below.

The user file contains a header line, followed by a line for each user to be created. Each user line contains first name, last name, and email address:

First Name,Last Name,Email

John,Smith,john.smith@acme.com

Anne,Taylor,anne.taylor@acme.com


Next, you have to assign the Oracle Documents Cloud Service User role to the imported users using the “Batch Assign Role” option.

Clicking this option will prompt you to upload a CSV file.

From the Role drop-down, select “Oracle Documents Cloud Service User” and click Assign.

The CSV file contains a header line, followed by a line for each user to be created. Each user line contains only the email address.

Email

john.smith@acme.com

anne.taylor@acme.com

And there you have it - it's that simple.

In my next post, we will look at Assigning User Quota and Resetting a User's Password.

Webinar Followup

Randolf Geist - Tue, 2015-02-03 06:47
Thanks to everyone who attended my recent webinar at AllThingsOracle.com.

The link to the webinar recording can be found here.

The presentation PDF can be downloaded here. Note that this site uses a non-default HTTP port, so if you're behind a firewall this might be blocked.

Thanks again to AllThingsOracle.com and Amy Burrows for hosting the event.

Social Coding Resolves JAX-RS and CDI Producer Problem

Steve Button - Tue, 2015-02-03 06:04
The inimitable Bruno Borges picked up a tweet earlier today commenting on a problem using @Produces with non-CDI libraries on WebLogic Server 12.1.3.

The tweeter put his example up on a GitHub repository to share - quite a nice example of using JAX-RS, CDI integration and of using Arquillian to verify it works correctly. It ticked a couple of boxes for what I've been looking at lately.

Forking his project to have a look at it locally:

https://github.com/buttso/weblogic-producers

Turns out that the issue was quite a simple and common one - a missing reference to the jax-rs:2.0 shared-library that is needed to use JAX-RS 2.0 on WebLogic Server 12.1.3. The application needs a weblogic.xml that references that library, along the lines of the sketch below.
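
Something like this is a minimal sketch of the required weblogic.xml - the exact-match setting is an assumption you may want to adjust for your installation:

<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <!-- Reference the jax-rs 2.0 shared library deployed with WebLogic Server 12.1.3 -->
  <library-ref>
    <library-name>jax-rs</library-name>
    <specification-version>2.0</specification-version>
    <exact-match>false</exact-match>
  </library-ref>
</weblogic-web-app>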

I made the changes in a local branch and tested it again:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] producers .......................................... SUCCESS [  0.002 s]
[INFO] bean ............................................... SUCCESS [  0.686 s]
[INFO] web ................................................ SUCCESS [  7.795 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

With the tests now passing, pushed the branch to my fork and sent Kuba a pull request to have a look at the changes I made:

https://github.com/buttso/weblogic-producers/tree/steve_work

I now just hope it works in his environment too :-)

The GitHub model is pretty magical really.

Big changes ahead for India's IT majors

Abhinav Agarwal - Tue, 2015-02-03 02:22
My article on challenges confronting the Indian IT majors was published in DNA in January 2015.

Here is the complete text of the article - Big changes ahead for India's IT majors:

Hidden among the noise surrounding the big three of the Indian IT industry - TCS, Wipro, and Infosys - was a very interesting sliver of signal that points to possibly big changes on the horizon. Though Cognizant should be counted among these biggies - based on its size and revenues - let's focus on these three for the time being.

Statements made by the respective CEOs of Infosys and Wipro, and the actions of TCS, provide hints on how these companies plan on addressing the coming headwinds that the Indian IT industry faces. Make no mistake. These are strong headwinds that threaten to derail the mostly good fairy tale of the Indian IT industry. Whether it is the challenge of continuing to show growth on top of a large base (each of these companies is close to or has exceeded ten billion dollars in annual revenues); protecting margins when everyone seems to be in a race to the bottom; operating overseas in the face of unremitting resistance to outsourcing; or finding ways to do business in light of the multiple disruptions thrust by cloud computing, big data, and the Internet of Things, they cannot continue in a business-as-usual model any longer.



For nearly two decades the Indian IT industry has grown at a furious pace, but also grown fat in the process, on a staple diet of low-cost business that relied on the undeniable advantage of labour-cost arbitrage. Plainly speaking, people cost a lot overseas, but they cost a lot less in India. The favourable dollar exchange-rate ensured that four, five (or even ten engineers at one point in time) could be hired in India for the cost of one software engineer in the United States. There was no meaningful incentive to either optimize on staffing, or build value-added skills when people could be retained by offering fifteen per cent salary hikes, every year. Those days are fast fading, and while the Indian IT workforce's average age has continued to inch up, the sophistication of the work performed has not kept pace, resulting in companies paying their employees more and more every year for work that is much the same.

TCS, willy-nilly, has brought to the front a stark truth facing much of the Indian IT industry - how to cut costs in the face of a downward pressure on most of the work it performs, which has for the most part remained routine and undifferentiated. Based on a remark made by its HR head on "layoffs" and "restructuring" that would take place over the course of 2015, the story snowballed into a raging controversy. It was alleged that TCS was planning on retrenching tens of thousands of employees - mostly senior employees who cost more than college graduates with only a few years of experience. Cursory and level-headed thinking would have revealed that, prima facie, any such large layoffs could not be true. But such is the way with rumours - they have long legs. What however remains unchanged is the fact that without more value-based business, an "experienced" workforce is a drag on margins. It's a liability, not an asset. Ignore, for a minute, the absolute worst way in which TCS handled the public relations fiasco arising out of its layoff rumours - something even its CEO, N Chandrasekaran, acknowledged. Whether one likes it or not, so-called senior resources at companies that cannot lay claim to skills that are in demand will find themselves under the dark cloud of layoffs. If you prefer, call them "involuntary attrition", "labour cost rationalization", or anything else. The immediate reward of a lowered loaded cost number will override any longer-term damage such a step may involve. If it is a driver for TCS, it will be a driver for Wipro and Infosys.

Infosys, predictably, and as I had written some six months back, is trying to use the innovation route to find its way to both sustained growth and higher margins. Its CEO, Vishal Sikka, certainly has the pedigree to make innovation succeed. His words have unambiguously underlined his intention to pursue, acquire, or fund innovation. Unsurprisingly, there are several challenges to this approach. First, outsourced innovation is open to market risks. If you invest early enough, you will get in at lower valuations, but you will also have to cast a wider net, which requires more time and focus. Invest later, and you pay through your nose by way of sky-high valuations. Second, external innovation breeds resentment internally. It sends the message that the company does not consider its own employees "good enough" to innovate. To counter this perception, Vishal has exhorted Infosys employees "to innovate proactively on every single thing they are working on." This is a smart strategy. It is low cost, low risk, and a big morale booster. However, it also distracts. Employees can easily get distracted by the "cool" factor of doing what they believe is innovative thinking. The "20% time" model may well be a myth in any case. How does a company put a process in place that can evaluate, nurture, and manage innovative ideas coming out of tens of thousands of employees? Clearly, there are issues to be balanced. The key to success, like in most other things, will lie in execution - as Ram Charan has laid out in his excellent book, unsurprisingly titled "Execution".

Lastly, there is Wipro. In an interview, Wipro's CEO, TK Kurien, announced that Wipro would use "subcontracting to drive growth". This seems to have gone largely unnoticed in the industry. Wipro seems to have realized, on the basis of this statement at least, that it cannot continue to keep sliding down the slippery slope of low-cost undifferentiated work. If the BJP government's vision of developing a hundred cities in India into so-called "Smart Cities" materializes, one could well see small software consulting and services firms sprout up all over India, in Tier 2 and even Tier 3 cities. These firms will benefit from the e-infrastructure available as a result of the Smart Cities initiative on the one hand, and find a ready market for their services that requires a low-cost model to begin with on the other. This will leave Wipro free to subcontract low-value, undifferentiated work to smaller companies in smaller cities. A truly virtuous circle. In theory at least. However, even here it would be useful for Wipro to remember the Dell and Asus story. Dell was at one point among the most innovative of computer manufacturers. It kept on giving away more and more of its computer manufacturing business - from motherboard designing, laptop assembly, and so on - to Asus, because it helped Dell keep its margins high while allowing it to focus on what it deemed its core competencies. Soon enough, Asus had learned everything about the computer business, and it launched its own computer brand. The road to commoditization hell is paved with the best intentions of cost-cutting.

While it may appear that these three IT behemoths are pursuing three mutually exclusive strategies, it would be naïve to judge these three strategies as an either-or play. Each will likely, and hopefully, pursue a mix of these strategies, focusing more on what they decide fits their company best, and resist the temptation to follow each other in a monkey-see-monkey-do race. Will one of the big three Indian IT majors pull ahead of its peers and compete with IBM, Accenture, and the other majors globally? Watch this space.

IOUG Collaborate #C15LV

Yann Neuhaus - Tue, 2015-02-03 01:48

The IOUG - Independent Oracle User Group - has a great event each year: COLLABORATE. This year it's April 12-16, 2015 at The Mandalay Bay Resort & Casino in Las Vegas.

I'll be a speaker and a RAC Attack Ninja as well.

IOUG COLLABORATE provides all the real-world technical training you need – not sales pitches. The IOUG Forum presents hundreds of educational sessions on Oracle technology, led by the most informed and accomplished Oracle users and experts in the world, bringing more than 5,500 Oracle technology and applications professionals to one venue for Oracle education, customer exchange and networking.

Installing the Oracle Application Management Pack for Oracle Utilities

Anthony Shorten - Mon, 2015-02-02 22:15

The Application Management Pack for Oracle Utilities is a plugin to the Oracle Enterprise Manager product to allow management, patching and monitoring of Oracle Utilities applications.

To install the pack you use the following technique:

  • If you are a customer who has installed a previous version of the pack, all targets from that pack must be removed and the pack deinstalled prior to using the new version (12.1.0.1.0). This is because the new pack is a completely different set of software and it is recommended to remove old versions. This will only be necessary in this release, as future releases will upgrade automatically.
  • Navigate to Setup --> Extensibility --> Plugins and search for the "Oracle Utilities Application" plugin. Do not use the "Oracle Utilities" plugin as that is the previous release.
  • Press "Download" to download the plugin to your repository.
  • Press "Apply" to apply the pack to your OMS console instance. This will install the server components of the pack.
  • From the Plugin Manager (you will be directed there), you can deploy the pack to your OMS using Deploy On Management Servers. This will start the deployment process.
  • After deployment to the server, you can then deploy the plugin on any licensed Oracle Utilities servers using Deploy On Management Agents. Select the servers from the list.
  • The pack is now installed.
  • First, discover and promote the Oracle WebLogic targets for the domain, clusters (if used), servers and application deployments for the Oracle Utilities products.
  • Then run discovery against the Oracle Utilities servers to discover the pack-specific targets.

At this point you can create groups on the targets or even dashboards.

My Oracle Support Release 15.1 is Live!

Joshua Solomin - Mon, 2015-02-02 20:12

My Oracle Support release 15.1 is now live. Improvements include:

  • All Customer User Administrators (CUAs) can manage and group their users and assets using the Support Identifier Groups (SIGs) feature.
  • Knowledge Search automatically provides unfiltered results when filters return no results. In addition, product and version detail displays in bug search results.
  • The SR platform selector groups common products with the appropriate platform.
  • Some problem types for non-technical SRs have guided resolution workflow.
  • In the Proactive Analysis Center: all clickable links are underlined, users only see applicable reports, and column headers can be sorted.



Learn more by viewing the What's new in My Oracle Support video.

Exadata Vulnerability

Pakistan's First Oracle Blog - Mon, 2015-02-02 19:49
This Exadata vulnerability is related to the GHOST glibc vulnerability (CVE-2015-0235). A heap-based buffer overflow was found in glibc's __nss_hostname_digits_dots() function, which is used by the gethostbyname() and gethostbyname2() glibc function calls.

A remote attacker able to make an application call either of these functions could use this flaw to execute arbitrary code with the permissions of the user running the application.

In order to check if your Exadata system suffers from this vulnerability, use:

[root@server ~]# ./ghostest-rhn-cf.sh
vulnerable
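
If the test script is not at hand, a cruder first check - assuming an RPM-based Exadata compute node - is to compare the installed glibc package against the fixed level listed in the My Oracle Support note below:

[root@server ~]# rpm -q glibc   # compare against the patched level in Doc ID 1965525.1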

The solution and action plan for this vulnerability are available from My Oracle Support in the following document:

glibc vulnerability (CVE-2015-0235) patch availability for Oracle Exadata Database Machine (Doc ID 1965525.1)
Categories: DBA Blogs

Scrutinizing Exadata X5 Datasheet IOPS Claims…and Correcting Mistakes

Kevin Closson - Mon, 2015-02-02 19:37

I want to make these two points right out of the gate:

  1. I do not question Oracle’s IOPS claims in Exadata datasheets
  2. Everyone makes mistakes
Everyone Makes Mistakes

Like me. On January 21, 2015, Oracle announced the X5 generation of Exadata. I spent some time studying the datasheets from this product family and also compared the information to prior generations of Exadata, namely the X3 and X4. Yesterday I graphed some of the datasheet numbers from these Exadata products and tweeted the graphs. I’m sorry to report that two of the graphs were faulty–the result of hasty cut and paste. This post will clear up the mistakes, but I owe an apology to Oracle for incorrectly graphing their datasheet information. Everyone makes mistakes. I fess up when I do. I am posting the fixed slides but will link to the deprecated slides at the end of this post.

We’re Only Human

Wouldn’t IT be a more enjoyable industry if certain IT vendors stepped up and admitted when they’ve made little, tiny mistakes like the one I’m blogging about here? In fact, wouldn’t it be wonderful if some of the exceedingly gruesome mistakes certain IT vendors make would result in a little soul-searching and confession? Yes. It would be really nice! But it’ll never happen–well, not for certain IT companies anyway. Enough of that. I’ll move on to the meat of this post. The rest of this article covers:

  • Three Generations of Exadata IOPS Capability
  • Exadata IOPS Per Host CPU
  • Exadata IOPS Per Flash SSD
  • IOPS Per Exadata Storage Server License Cost
Three Generations of Exadata IOPS Capability

The following chart shows how Oracle has evolved Exadata from the X3 to the X5 EF model with regard to IOPS capability. As per Oracle’s datasheets on the matter these are, of course, SQL-driven IOPS. Oracle would likely show you this chart and nothing else. Why? Because it shows favorable, generational progress in IOPS capability. A quick glance shows that read IOPS improved just shy of 3x and write IOPS capability improved over 4x from the X3 to X5 product releases. These are good numbers. I should point out that the X3 and X4 numbers are the datasheet citations for 100% cached data in Exadata Smart Flash Cache. These models had 4 Exadata Smart Flash Cache PCIe cards in each storage server (aka, cell). The X5 numbers I’m focused on reflect the performance of the all-new Extreme Flash (EF) X5 model. It seems Oracle has started to investigate the value of all-flash technology and, indeed, the X5 EF is the top-dog in the Exadata line-up. For this reason I choose to graph X5 EF data as opposed to the more pedestrian High Capacity model which has 12 4TB SATA drives fronted with PCI Flash cards (4 per storage server).

[chart: exadata-evolution-iops-gold-1]

The tweets I hastily posted yesterday with the faulty data points aimed to normalize these performance numbers to important factors such as host CPU, SSD count and Exadata Storage Server Software licensing costs. The following set of charts are the error-free versions of the tweeted charts.

Exadata IOPS Per Host CPU

Oracle’s IOPS performance citations are based on SQL-driven workloads. This can be seen in every Exadata datasheet. All Exadata datasheets for generations prior to X4 clearly stated that Exadata IOPS are limited by host CPU. Indeed, anyone who studies Oracle Database with SLOB knows how all of that works. SQL-driven IOPS requires host CPU. Sadly, however, Oracle ceased stating the fact that IOPS are host-CPU bound in Exadata as of the advent of the X4 product family. I presume Oracle stopped correctly stating the factual correlation between host CPU and SQL-driven IOPS for only the most honorable of reasons with the best of customers’ intentions in mind. In case anyone should doubt my assertion that Oracle historically associated Exadata IOPS limitations with host CPU, I submit the following screen shot of the pertinent section of the X3 datasheet:

[screenshot: X3-datasheet-truth]

Now that the established relationship between SQL-driven IOPS and host CPU has been demystified, I’ll offer the following chart which normalizes IOPS to host CPU core count:

[chart: exadata-evolution-iops-per-core-gold]

I think the data speaks for itself but I’ll add some commentary. Where Exadata is concerned, Oracle gives no choice of host CPU to customers. If you adopt Exadata you will be forced to take the top-bin Xeon SKU with the most cores offered in the respective Intel CPU family. For example, the X3 product used 8-core Sandy Bridge Xeons. The X4 used 12-core Ivy Bridge Xeons and finally the X5 uses 18-core Haswell Xeons. In each of these CPU families there are other processors of varying core counts at the same TDP. For example, the Exadata X5 processor is the E5-2699v3 which is a 145w 18-core part. In the same line of Xeons there is also a 145w 14c part (E5-2697v3) but that is not an option to Exadata customers.

All of this is important since Oracle customers must license Oracle Database software by the host CPU core. The chart shows us that read IOPS per core from X3 to X4 improved 18% but from X4 to X5 we see only a 3.6% increase. The chart also shows that write IOPS/core peaked at X4 and has actually dropped some 9% in the X5 product. These important trends suggest Oracle’s balance between storage plumbing and I/O bandwidth in the Storage Servers is not keeping up with the rate at which Intel is packing cores into the Xeon EP family of CPUs. The nugget of truth that is missing here is whether the 145w 14-core E5-2697v3 might in fact be able to improve this IOPS/core ratio. While such information would be quite beneficial to Exadata-minded customers, the 22% drop in expensive Oracle Database software licensing in such an 18c versus 14c scenario is not beneficial to Oracle–especially not while Oracle is struggling to subsidize its languishing hardware business with gains from traditional software.

Exadata IOPS Per Flash SSD

Oracle uses their own branded Flash cards in all of the X3 through X5 products. While it may seem like an implementation detail, some technicians consider it important to scrutinize how well Oracle leverages their own components in their Engineered Systems. In fact, some customers expect that adding significant amounts of important performance components, like Flash cards, should pay commensurate dividends. So, before you let your eyes drift to the following graph please be reminded that X3 and X4 products came with 4 Gen3 PCI Flash Cards per Exadata Storage Server whereas X5 is fit with 8 NVMe flash cards. And now, feel free to take a gander at how well Exadata architecture leverages a 100% increase in Flash componentry:

[chart: exadata-evolution-iops-per-SSD-gold]

This chart helps us visualize the facts sort of hidden in the datasheet information. From Exadata X3 to Exadata X4 Oracle improved IOPS per Flash device by just shy of 100% for both read and write IOPS. On the other hand, Exadata X5 exhibits nearly flat (5%) write IOPS and a troubling drop in read IOPS per SSD device of 22%. Now, all I can do is share the facts. I cannot change people’s belief system–this I know. That said, I can’t imagine how anyone can spin a per-SSD drop of 22%–especially considering the NVMe SSD product is so significantly faster than the X4 PCIe Flash card. By significant I mean the NVMe SSD used in the X5 model is rated at 260,000 random 8KB IOPS whereas the X4 PCIe Flash card was only rated at 160,000 8KB read IOPS. So X5 has double the SSDs–each of which is rated at 63% more IOPS capacity–than the X4, yet IOPS per SSD dropped 22% from the X4 to the X5. That means an architectural imbalance–somewhere. However, since Exadata is a completely closed system you are on your own to find out why doubling resources doesn’t double your performance. All of that might sound like taking shots at implementation details. If that seems like the case then the next section of this article might be of interest.

IOPS Per Exadata Storage Server License Cost

As I wrote earlier in this article, both Exadata X3 and Exadata X4 used PCIe Flash cards for accelerating IOPS. Each X3 and X4 Exadata Storage Server came with 12 hard disk drives and 4 PCIe Flash cards. Oracle licenses Exadata Storage Server Software by the hard drive in X3/X4 and by the NVMe SSD in the X5 EF model. To that end the license “basis” is 12 units for X3/X4 and 8 for X5. Already readers are breathing a sigh of relief because less license basis must surely mean less total license cost. Surely Not! Exadata X3 and X4 list price for Exadata Storage Server software was $10,000 per disk drive for an extended price of $120,000 per storage server. The X5 EF model, on the other hand, prices Exadata Storage Server Software at $20,000 per NVMe SSD for an extended price of $160,000 per Exadata Storage Server. With these values in mind feel free to direct your attention to the following chart which graphs the IOPS per Exadata Storage Server Software list price (IOPS/license$$).

[chart: exadata-evolution-iops-per-license-cost-gold]

The trend in the X3 to X4 timeframe was a doubling of write IOPS/license$$ and just short of a 100% improvement in read IOPS/license$$. In stark contrast, however, the X5 EF product delivers only a 57% increase in write IOPS/license$$ and a troubling, tiny, 17% increase in read IOPS/license$$. Remember, X5 has 100% more SSD componentry when compared to the X3 and X4 products.

Summary

No summary needed. At least I don’t think so.

About Those Faulty Tweeted Graphs

As promised, I’ve left links to the faulty graphs I tweeted here:

Faulty / Deleted Tweet Graph of Exadata IOPS/SSD: http://wp.me/a21zc-1ek
Faulty / Deleted Tweet Graph of Exadata IOPS/license$$: http://wp.me/a21zc-1ej

References

Exadata X3-2 datasheet: http://www.oracle.com/technetwork/server-storage/engineered-systems/exadata/exadata-dbmachine-x3-2-ds-1855384.pdf
Exadata X4-2 datasheet: http://www.oracle.com/technetwork/database/exadata/exadata-dbmachine-x4-2-ds-2076448.pdf
Exadata X5-2 datasheet: http://www.oracle.com/technetwork/database/exadata/exadata-x5-2-ds-2406241.pdf
X4 SSD info: http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f80/overview/index.html
X5 SSD info: http://docs.oracle.com/cd/E54943_01/html/E54944/gokdw.html#scrolltoc
Engineered Systems Price List: http://www.oracle.com/us/corporate/pricing/exadata-pricelist-070598.pdf , http://www.ogs.state.ny.us/purchase/prices/7600020944pl_oracle.pdf


Filed under: oracle

If You Want It, Here It Is

Floyd Teter - Mon, 2015-02-02 18:22
If you want it
Here it is, come and get it
Mmmm, make your mind up fast
If you want it
Anytime, I can give it
But you better hurry
Cause it may not last
    - From "Come And Get It", written by Sir Paul McCartney and originally recorded by Badfinger

I'm watching changes in the SaaS world...some people are keeping up with the changes, and some people are not.  The approach to selling SaaS subscriptions is an area that stands out in my mind where the market players have not all quite wrapped their brains around a new reality.

In the old days of selling on-premise applications (also lovingly referred to now as "fat apps"), the initial sale was the key battleground between applications vendors in their quest for customers.  That's because switching on-premise apps was hard.  Ask anyone switching from Oracle to SAP for enterprise apps...a very tough, very expensive, and very long process.

In the SaaS world, switching is quicker, easier, and much less expensive.  No technology footprint to switch out.  Get my data from the current SaaS vendor, map and convert to the new SaaS applications, train my workforce, cut off the old SaaS vendor, start paying the new SaaS vendor.  While it's still not a small undertaking, it's a comparative drop in the bucket.

Oh, what about hybrid platforms?  Still easier to switch out the SaaS portion of your system.  And so far as integrations: well, the commonly used integrations are fast becoming commodities.  That's where Cloud Integration platforms from providers like Oracle, Sierra-Cedar (yeah, that was a plug - pretty slick the way I slipped it in there, huh?), Boomi, Workday, etc. come in...providing highly-reused application integrations as a managed cloud service.

So what does this mean?  It means that as SaaS becomes more prevalent in the enterprise applications world, it won't be about making the deal as much as it will be about keeping the customer, while concurrently enticing other players' customers to switch and hunting for customers just entering the enterprise applications space.  In other words, we'll soon see huge churning of accounts from Brand X to Brand Y.  And we'll also see vendors attempting to protect their own patch of accounts.  And, at the same time, we'll see more offerings geared toward the SMB space...because that's where the net new growth opportunities will exist.

We're entering a great time for buyers...vendor lock-in in the enterprise apps market will become a less predominant factor.  And, frankly, vendors will treat each customer like the "belle of the ball".

Watch for SaaS vendors to begin singing Sir Paul's tune:  "If you want it, here it is..." - on very customer-favorable terms.

Last year's big four cybersecurity vulnerabilities [VIDEO]

Chris Foot - Mon, 2015-02-02 09:04

Transcript 

Hi, welcome to RDX! 2014 was a rough year in regard to cybersecurity. Between April and November of last year, four critical vulnerabilities came to light. Here’s a recap.

The Heartbleed bug is a flaw in the OpenSSL cryptographic software library that allows people to steal data protected by the SSL/TLS encryption method.

Shellshock is a collection of security bugs found in the Unix Bash shell, which could potentially allow a hacker to issue unsanctioned commands on a Linux distribution.

Winshock enables those exploiting the flaw to launch denial-of-service attacks and execute unauthenticated remote code.

Lastly, Kerberos Checksum could allow Active Directory to regard incorrect passwords as legitimate, exposing corporate networks.

As the former three vulnerabilities are applicable to both Windows and Linux server operating systems, consulting with personnel capable of assessing and patching these bugs is critical.

Thanks for watching! Visit us next time for news regarding operating system vulnerabilities.

Why won't my APEX submit buttons submit?

Tony Andrews - Mon, 2015-02-02 07:46
I hit a weird jQuery issue today that took a ridiculous amount of time to solve.  It is easy to demonstrate:

  • Create a simple APEX page with an HTML region
  • Create 2 buttons that submit the page with a request e.g. SUBMIT and CANCEL
  • Run the page

So far, it works - if you press either button you can see that the page is being submitted.   Now edit the buttons and assign them static IDs of ...

[Read the full post: http://tonyandrews.blogspot.com/2015/02/why-wont-my-apex-submit-buttons-submit.html]

Rittman Mead BI Forum Abstract Voting Now Open – For One Week Only!

Rittman Mead Consulting - Mon, 2015-02-02 06:52

The call for papers for the Rittman Mead BI Forum 2015 closed a couple of weeks ago and we’ve had some excellent submissions on topics ranging from OBIEE, Visualizations and data discovery through to in-memory analytics, cloud, big data and data integration. As always, we’re now opening up the abstract submission list for scoring, so that anyone considering coming to either the Brighton or Atlanta events can have a say in what abstracts are selected.

The voting forms, and event details, are below:

Voting is open for just one week, and will close at 5pm this coming Sunday (Feb 8th 2015). Shortly afterwards we’ll announce the speaker line-up, and open up registrations for both events. For info, we’ve got a couple of additional abstracts coming in from Oracle on OBIEE12c, BI Cloud and Big Data Discovery, which I’ll be considering as part of the final line-up for the two events.

Keep an eye on the blog later in February for the final speaker line-up, and details of how to register for Brighton and Atlanta.

Categories: BI & Warehousing

Call vs Exec

Dominic Brooks - Mon, 2015-02-02 05:34

Just a reference to a really simple difference between CALL and EXEC.
I thought I had mentioned this before but couldn’t find it so…

EXEC/EXECUTE is a SQL*Plus command which wraps the proc call in an anonymous BEGIN … END; block.

CALL is a SQL command hence it is limited to SQL data types and there are other restrictions which the documentation sums up pretty well.

Because CALL is SQL, there is one key behavioural difference which caused a bug on a project a few years ago when Java code was calling a stored proc not with BEGIN … END; but with CALL, and ended up swallowing certain exceptions:

SQL> l
  1  create or replace procedure p1
  2  as
  3  begin
  4    raise no_data_found;
  5* end;
SQL> /

Procedure created.

SQL> exec p1;
BEGIN p1; END;

*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at "PGPS_UAT1.P1", line 4
ORA-06512: at line 1


SQL> call p1();

Call completed.

SQL>

SQL expects and handles certain exceptions – to SQL, NO_DATA_FOUND simply means "no rows", so the CALL above completes without raising an error.

So always use BEGIN and END; blocks in application code rather than CALL.
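
For example, from JDBC the two approaches might look like this - a sketch only, with hypothetical connection details, run against the p1 procedure created above:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class CallVsExec {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details - adjust to your environment
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");

        // Wrapped in an anonymous block: ORA-01403 propagates as a SQLException
        try (CallableStatement cs = conn.prepareCall("begin p1; end;")) {
            cs.execute();
        } catch (SQLException e) {
            System.out.println("Raised as expected: " + e.getMessage());
        }

        // Issued as a SQL CALL: completes "successfully", silently swallowing the exception
        try (CallableStatement cs = conn.prepareCall("call p1()")) {
            cs.execute();
        }

        conn.close();
    }
}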


Oracle Application Testing Suite for Oracle Utilities available

Anthony Shorten - Sun, 2015-02-01 23:52

The initial release of the Oracle Application Testing Suite for Oracle Utilities is now available from eDelivery. This product is designed to help automate functional, regression and load testing for Oracle Utilities Application Framework based products.

The product features a set of reusable components for Oracle Utilities Application Framework based products that can be used within Oracle Application Testing Suite (Functional and Load Testing) to quickly build and deploy automated test scripts. The approach to using this product is as follows:

  • The components are prebuilt by the product and QA teams for the Oracle Utilities products. They are the components the product teams use to test the product internally. They contain all the interfacing, verifications and parameters, and are already synchronized with the version of the product they support. This reduces the need for testing component development.
  • If you wish to customize a component then Oracle Application Testing Suite now includes Flow Builder which allows for components to be built or customized (they must be copied first).
  • The Oracle Flow Builder part of the Oracle Application Testing Suite can be used to assemble the components into flows that match your individual business processes. This is as simple as drag and drop to put the component in the right location within the flow. Data is automatically passed between components, though the flow can be altered to cover additional data or additional verifications (for example, you can create a verification component to email you the verification report).
  • Once a flow has been built, test data can be attached to complete the flow script. This can be manually entered in the UI or imported via a file such as a spreadsheet.
  • The OpenScript script can then be generated without the need to learn coding. The code is automatically generated for you.
  • You can execute the script in the OpenScript development component of the Oracle Application Testing Suite or the Test Manager product.
  • The script can also be loaded into the Load Testing component of the Oracle Application Testing Suite if you wish to perform performance testing. The Load Testing component requires that the Functional Testing component is installed as well (as the source of the test scripts).

This approach uses components to a great advantage over other testing approaches:

  •  The components are prebuilt and tested internally by our product teams for our own product QA. You are getting access to those same components.
  • As the components are certified for each release of the products then upgrading is as simple as regenerating your test flows. Oracle Application Testing Suite includes management tools to assist with that function. You simply install the new version of the components, transfer your flows over and regenerate them to use the new version of the component.
  • The Oracle Application Testing Suite for Oracle Utilities will include components for each product and version it supports. It is possible to create cross-product flows easily if you want.
  • The Oracle Application Testing Suite for Oracle Utilities is licensed the same way as Oracle Application Testing Suite. It is licensed on Testing users, regardless of the number of targets tested against or numbers of products installed. The Load Testing component is licensed on number of simulated users.

The initial version of this testing product only supports Oracle Utilities Mobile Workforce Management V2.2+ and Oracle Real Time Scheduler V2.2+. Other products will be added over the next few releases.

Over the next few weeks, additional material will be published about this product including best practices.

Using the WebLogic Embedded EJB Container

Steve Button - Sun, 2015-02-01 21:39
The WebLogic Server 12.1.3 EJB Developers Guide was recently updated to note that the embedded EJB container can be used by adding a reference to weblogic.jar to the CLASSPATH when the EJB client is being executed.

https://docs.oracle.com/middleware/1213/wls/EJBAD/embedejb.htm#EJBAD1403

This is very convenient since it enables the WebLogic Server embedded EJB container to be used by simply adding weblogic.jar to the classpath when running the client:
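
Something like this, as a sketch - the client class name and project paths here are hypothetical, and the weblogic.jar location depends on where WebLogic Server 12.1.3 is installed:

$ java -cp $MW_HOME/wlserver/server/lib/weblogic.jar:target/classes \
      com.example.ejb.EmbeddedEjbClient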

Or for example if you are developing unit tests using JUnit and running them from a maven project, you can configure the maven-surefire-plugin to use WebLogic Server to run the EJB test code in its embedded EJB container:
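
A sketch of that surefire configuration - the ${env.MW_HOME} path is an assumption to adjust for your installation:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Put weblogic.jar on the test classpath so the embedded EJB container is available -->
    <additionalClasspathElements>
      <additionalClasspathElement>${env.MW_HOME}/wlserver/server/lib/weblogic.jar</additionalClasspathElement>
    </additionalClasspathElements>
  </configuration>
</plugin>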

A fully working example of using this is available in this GitHub repository:

https://github.com/buttso/weblogic-embedded-ejb

For more information have a look at the repository and check out the example.

Updated Application Management Pack for Oracle Utilities available

Anthony Shorten - Sun, 2015-02-01 17:04

The Application Management Pack for Oracle Utilities has been updated with new and updated functionality. The whitepaper Oracle Application Management Pack for Oracle Utilities Overview  (Doc Id: 1474435.1) available from My Oracle Support has been updated with the latest information including an overview of how to upgrade from previous versions.

The new and changed functionality is as follows:

  • The pack is a complete rewrite of the original pack using a lower level API to Oracle Enterprise Manager. This means deep integration to features in Oracle Enterprise Manager and integrations to other Oracle Enterprise Manager plugins.
  • There is a completely new target model which recognizes individual components of the architecture. In past releases of the pack, there was a single target type (Oracle Utilities Environment) which did not really reflect the diverse architectures that customers and partners implemented. In this new release, the Web Application, Service Application, Batch Application and Software locations are all detected and registered individually. A composite target called "Oracle Utilities Environment" is now a collection of these other targets.
  • This new target model and the adoption of the Oracle Enterprise Manager security model allows greater flexibility in security. It is now possible to authorize at an individual target level as well as a task level (or combinations) with multiple levels of permissions. This allows customers and partners to model their IT organizational permissions within the target model.
  • The new pack includes most of the features of the last pack, with new interfaces using standard menus and easy-to-use quick buttons. A few features have been removed, to be added back in future releases:
    • The Assessment feature has been removed and will be replaced with Compliance Frameworks in a future release.
    • The Log Viewing/Configuration File Viewing feature has been removed and will be replaced with the OEM Log Query feature in a future release.
    • IBM WebSphere support has been temporarily removed and will be added in a patch release in the future.
  • The new pack features more than 100 product-specific metrics tracking online performance and real-time batch performance. This is on top of the 200+ metrics already available from the WebLogic instance targets.
  • The new pack uses the Oracle WebLogic targets within Oracle Enterprise Manager. This allows direct, seamless migration from Oracle Utilities targets to Oracle WebLogic targets. This means if you have the Oracle WebLogic packs installed then you can use advanced facilities directly from Oracle Utilities targets.
  • Online performance can be tracked from the new Oracle Utilities Web Service target type. It is also possible to set transactions onto a watch list.
  • Batch Threadpools and threads can be managed from Oracle Utilities Batch Server targets across the batch cluster. This also gives you detailed metrics about performance of individual active threads.

Over the next few weeks, look for more information in this blog about individual features. The new pack is denoted as version 12.1.0.1.0 to reflect the new addon status of the pack and is available from Oracle Enterprise Manager Self Update.

Note: Customers of the previous versions of the pack MUST follow the instructions in the Installation Guides for the new pack to upgrade. The old pack MUST be removed before the new pack can be used.

Note: This pack is tightly integrated with Oracle WebLogic targets in the base Oracle Enterprise Manager. BEFORE discovering any Oracle Utilities targets ensure all Oracle WebLogic targets for the Oracle Utilities domain are registered.

Data Lineage and Impact Analysis in Oracle BI Apps 11.1.1.8.1

Rittman Mead Consulting - Sun, 2015-02-01 09:41

In yesterday’s post I took a look at one of the new features in the 11.1.1.8.1 release of the BI Applications; integration between BI Apps 11g and Oracle Endeca Information Discovery. Whilst we’re on the topic then, I thought it’d be worth taking a look at another new feature introduced with BI Apps 11.1.1.8.1 – data lineage and impact analysis.

So what exactly is data lineage, and impact analysis? Data lineage is the path that data takes through your system from source to the final target reports and dashboards, and describes the lifecycle from raw data through to processed, validated and transformed information presented to your users. Impact analysis is what you do to determine what downstream data items will be affected by a change to a source table, column or data mapping, and has been a feature in many Oracle data integration tools in the past, including Oracle Warehouse Builder and ODI11g.

You could also trace data lineage through the earlier 7.9.x releases of Oracle BI Applications by starting at the DAC Console, which recorded the source table and columns for a particular target warehouse table, and the DAC tasks (usually corresponding to Informatica workflows of the same name) used to load those tables. The DAC stopped at the warehouse layer though and didn’t contain any details of the dashboards and reports that used the warehouse data, and so if you wanted to trace data lineage back from a particular report or presentation layer column you had to step through the process manually in a way that I illustrated in this blog post from a few years ago. What these new data lineage and impact analysis features in BI Apps 11.1.1.8.1 do for us is bring the two sets of metadata together, along with configuration data from the BI Applications Configuration Manager application, to create an end-to-end data lineage view of the BI Apps dataset.

Data within the BI Apps 11.1.1.8.1 system can be thought of as going through seven different layers. Starting at the top-level dashboards and reports, these map onto OBIEE presentation tables and columns, which in-turn are selected from business model columns that then map back to the physical tables and columns in the Oracle Business Intelligence Applications data warehouse. These data warehouse tables are loaded in two stages: first using source-specific data mappings from, for example, Oracle E-Business Suite 12.1.3, and then using a set of source-independent mappings that take standardised staging datasets from these sources and map them into the target data warehouse tables. Our data lineage and impact analysis routines have to be aware of these seven stages and show us how data moves and is transformed between each stage.

The way that BI Apps 11.1.1.8.1 data lineage works is to use ODI11g to extract metadata from the BI Apps Configuration Manager underlying tables, and the ODI11g repository, and combine that with RPD and catalog metadata you have to manually extract and copy into files for ODI to also upload. An ODI load plan supplied by Oracle then combines these datasets into a final set of data lineage tables also stored on the target data warehouse schema, and you can create your own data lineage and impact analysis reports or start with the ones Oracle also provide with this new feature.

To load these data lineage tables, you run a predefined load plan from ODI Studio or ODI console after checking all connections to the various sources are set up correctly. The load plan in-turn runs a number of interfaces that load lineage information from the RPD and catalog extracts, BI Configuration Manager tables and ODI repository tables, with this load plan having to run outside of the main BI Apps managed data loads – which makes sense as you have to manually re-extract the RPD and catalog metadata anyway, and you’ll probably want to run the data lineage reload after every development release of the BI Apps system rather than every day, for example.

Once you’ve loaded the data lineage tables, the subject area you can then select from to create lineage and impact reports covers all the stages in the data load, and also extends to OTBI (Oracle Transactional Business Intelligence, more on that in a future post) if you use that in combination with the BI Apps (or OTBI EE, as it’s called for cloud-based Fusion installations).

You also get a set of starter dashboards and analyses for displaying the lineage for dashboard objects, presentation tables and columns, down to tables and columns in the BI Apps data warehouse, and impact for source models, columns, variables and so on.

It’s definitely a good start, a useful resource. Going back to the days of OWB it’d be nice if this were built directly into ODI Studio, and the steps to identify and then export the RPD and catalog metadata are pretty manual, but it’s better than having to step through the metadata layers yourself as you had to do with the previous 7.9.x versions of the BI Apps. More details on data lineage and impact analysis in BI Apps 11.1.1.8.1 can be found in the online docs, including the configuration steps you’ll need to carry out before doing the first data lineage load.

Categories: BI & Warehousing

Database Flashback -- 1

Hemant K Chitale - Sun, 2015-02-01 09:25
A first post on Database Flashback.

Enabling Database Flashback in 11.2 non-RAC

SQL*Plus: Release 11.2.0.2.0 Production on Sun Feb 1 23:13:17 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select version, status, database_status
2 from v$instance;

VERSION STATUS DATABASE_STATUS
----------------- ------------ -----------------
11.2.0.2.0 OPEN ACTIVE

SYS>select flashback_on, database_role
2 from v$database;

FLASHBACK_ON DATABASE_ROLE
------------------ ----------------
NO PRIMARY

SYS>
SYS>show parameter db_recovery_file

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /home/oracle/app/oracle/flash_
recovery_area
db_recovery_file_dest_size big integer 3852M
SYS>
SYS>select * from v$flash_recovery_area_usage;

FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE 0 0 0
REDO LOG 0 0 0
ARCHIVED LOG .95 .94 5
BACKUP PIECE 28.88 .12 5
IMAGE COPY 0 0 0
FLASHBACK LOG 0 0 0
FOREIGN ARCHIVED LOG 0 0 0

7 rows selected.

SYS>

So, the above output shows that the database is OPEN but Flashback is not enabled.
Let me enable Flashback now.
SYS>alter database flashback on;

Database altered.

SYS>select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SYS>select * from v$flash_recovery_area_usage;

FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE 0 0 0
REDO LOG 0 0 0
ARCHIVED LOG .95 .94 5
BACKUP PIECE 28.88 .12 5
IMAGE COPY 0 0 0
FLASHBACK LOG .41 0 2
FOREIGN ARCHIVED LOG 0 0 0

7 rows selected.

SYS>

Immediately after enabling Flashback, Oracle shows usage of the FRA for Flashback Logs. Note: Although 11.2 allows you to enable Flashback in an OPEN database, I would suggest doing so when the database is not active.
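
For reference, a sketch of enabling it with the database quiesced instead - this was the only option before 11.2, where FLASHBACK ON required the database to be in MOUNT state:

SYS>shutdown immediate
SYS>startup mount
SYS>alter database flashback on;
SYS>alter database open;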

Categories: DBA Blogs