
Feed aggregator

SharePoint Governance? Why?

Yann Neuhaus - Mon, 2015-06-08 06:16

Companies are struggling with SharePoint: it gets installed, then abandoned. The business side is not driven to make SharePoint succeed.
From this point on, you need to set up governance for SharePoint.
Governance focuses on the technology, business and human side of SharePoint.


 

What is GOVERNANCE?

Governance is the set of:

  • policies
  • roles
  • responsibilities
  • processes

that help and drive a company's IT team and business divisions to reach their GOALS.
Good governance is therefore establishing sufficiently robust and thorough processes to ensure that not only can those objectives be delivered but that they are delivered in an effective and transparent way.

Example: with permission governance, it is easy to manage who is authorized to get the EDIT permission level, which allows a user to contribute AND delete items (list/library).

In other words, we can equate Governance to something we see in our daily life.


  What happens with NO GOVERNANCE?

No governance means there is nothing to follow and everything goes in all directions!
Without proper governance, you can be sure that business objectives won't be achieved, and the SharePoint implementation will most likely fail.

Example: if there is no governance about "Site Creation", everybody would be able to create sites, and probably in the wrong way. Imagine a SharePoint site without any permission levels, etc.

You might end up in a chaotic situation, much like a traffic jam.

Bad governance will introduce:

  • Social Exclusion
  • Inefficiency
  • Red Tape
  • Corruption

How to start Governance?

Define your Governance implementation step by step:

1. The Governance Committee must be organised

A governance committee includes people from the Business & IT divisions of an organization.

2. Decide the SharePoint Elements to be covered

SharePoint Elements that can be governed:

  • Operational Management
  • Technical Operations
  • Site and Security Administration
  • Content Administration
  • Personal and Social Administration


3. Define and implement Rules & Policies

The implementation includes well-written Rules & Policies, for example:

  • Set up Rights & Permissions for Users & Groups
  • Restrict Site Collection creation
  • Set up content approval & routing
  • Set up Locks & Quotas
  • Set Document Versioning Policies
  • Set Retention / Deletion Policies
  • Restrict page customization & usage of SharePoint Designer
  • Set up workflows for automating approvals & processes (using SharePoint itself or a third-party tool)

Good communication with users and solid adoption of these elements will drive higher productivity and fewer support calls.


4. Drive & Reinforce Governance

The Governance Committee conducts regular meetings to review governance; any necessary changes to the Rules & Policies are made during this phase.

Use these best practices for governance plans:

  • Determine initial principles and goals
  • Classify your business information
  • Develop an education strategy
  • Develop an ongoing plan

Technet source: https://technet.microsoft.com/en-us/library/cc263356.aspx

 

Governance and teamwork are essential to a smart implementation!


Wrong Java version on Unified Directory Server

Frank van Bortel - Mon, 2015-06-08 06:09
After losing the battle with the OS guys for control over Java, I keep stumbling upon environments that have wrong Java versions, due to the fact that Java is installed in /usr/java or /usr/bin. In such cases, this is the result:

which java
/usr/bin/java

As I do not have control over /usr/bin, I install Java in /oracle/middleware/java, so I would like:

which java
/oracle/middleware/

Teradata will support Presto

DBMS2 - Mon, 2015-06-08 03:32

At the highest level:

  • Presto is, roughly speaking, Facebook’s replacement for Hive, at least for queries that are supposed to run at interactive speeds.
  • Teradata is announcing support for Presto with a classic open source pricing model.
  • Presto will also become, roughly speaking, Teradata’s replacement for Hive.
  • Teradata’s Presto efforts are being conducted by the former Hadapt.

Now let’s make that all a little more precise.

Regarding Presto (and I got most of this from Teradata):

  • To a first approximation, Presto is just another way to write SQL queries against HDFS (Hadoop Distributed File System). However …
  • … Presto queries other data stores too, such as various kinds of RDBMS, and federates query results (see the sketch after this list).
  • Facebook at various points in time created both Hive and now Presto.
  • Facebook started the Presto project in 2012 and now has 10 engineers on it.
  • Teradata has named 16 engineers – all from Hadapt – who will be contributing to Presto.
  • Known serious users of Presto include Facebook, Netflix, Groupon and Airbnb. Airbnb likes Presto well enough to have 1/3 of its employees using it, via an Airbnb-developed tool called Airpal.
  • Facebook is known to have a cluster sized at 300 petabytes and 4000 users where Presto is presumed to be a principal part of the workload.
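
As a hedged illustration of that federation (this is only a sketch; the catalog, schema, and table names below are invented), a single Presto query can join HDFS-resident data with a table in an RDBMS:

-- Invented names: hive.web.page_views and mysql.crm.customers stand in for
-- any two data stores a given Presto installation has catalogs configured for.
SELECT c.region, count(*) AS views
FROM hive.web.page_views v
JOIN mysql.crm.customers c ON v.customer_id = c.customer_id
GROUP BY c.region
ORDER BY views DESC;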

Daniel Abadi said that Presto satisfies what he sees as some core architectural requirements for a modern parallel analytic RDBMS project: 

  • Data is pipelined between operators, with no gratuitous writing to disk the way you might have in something MapReduce-based. This is different from the sense of “pipelining” in which one query might keep an intermediate result set hanging around because another query is known to need those results as well.
  • Presto processing is vectorized; functions don’t need to be re-invoked a tuple at a time. This is different from the sense of vectorization in which several tuples are processed at once, exploiting SIMD (Single Instruction Multiple Data). Dan thinks SIMD is useful mainly for column stores, and Presto tries to be storage-architecture-agnostic.
  • Presto query operators and hence query plans are dynamically compiled, down to byte code.
  • Although it is generally written in Java, Presto uses direct memory management rather than relying on what Java provides. Dan believes that, despite being written in Java, Presto performs as if it were written in C.

More precisely, this is a checklist for interactive-speed parallel SQL. There are some query jobs long enough that Dan thinks you need the fault-tolerance obtained from writing intermediate results to disk, à la HadoopDB (which was of course the MapReduce-based predecessor to Hadapt).

That said, Presto is a newish database technology effort, there’s lots of stuff missing from it, and there still will be lots of stuff missing from Presto years from now. Teradata has announced contribution plans to Presto for, give or take, the next year, in three phases:

  • Phase 1 (released immediately, and hence in particular already done):
    • An installer.
    • More documentation, especially around installation.
    • Command-line monitoring and management.
  • Phase 2 (later in 2015)
    • Integrations with YARN, Ambari and soon thereafter Cloudera Manager.
    • Expanded SQL coverage.
  • Phase 3 (some time in 2016)
    • An ODBC driver, which is of course essential for business intelligence tool connectivity.
    • Other connectors (e.g. more targets for query federation).
    • Security.
    • Further SQL coverage.

Absent from any specific plans that were disclosed to me was anything about optimization or other performance hacks, and anything about workload management beyond what can be gotten from YARN. I also suspect that much SQL coverage will still be lacking after Phase 3.

Teradata’s basic business model for Presto is:

  • Teradata is selling subscriptions, for which the principal benefit is support.
  • Teradata reserves the right to make some of its Presto-enhancing code subscription-only, but has no immediate plans to do so.
  • Teradata being Teradata, it would love to sell you Presto-related professional services. But you’re absolutely welcome to consume Presto on the basis of license-plus-routine-support-only.

And of course Presto is usurping Hive’s role wherever that makes sense in Teradata’s data connectivity story, e.g. Teradata QueryGrid.

Finally, since I was on the phone with Justin Borgman and Dan Abadi, discussing a project that involved 16 former Hadapt engineers, I asked about Hadapt’s status. That may be summarized as:

  • There are currently no new Hadapt sales.
  • Only a few large Hadapt customers are still being supported by Teradata.
  • The former Hadapt folks would love Hadapt or Hadapt-like technology to be integrated with Presto, but no such plans have been finalized at this time.
Categories: Other

QlikView Tips & Tricks: The Link Table

Yann Neuhaus - Mon, 2015-06-08 01:00

In this blog, I will show you how to bypass a “Synthetic Key” table in QlikView.

Why bypass a “Synthetic Key” table?

If you have multiple links between two tables, QlikView automatically generates a “Synthetic Key” table (here, the “$Syn 1” table).

QlikView best practice recommends removing this kind of key table, for reasons of performance and “correctness” of the result.

1_QV_Link_Table.PNG

How to bypass this “Synthetic key” table?

The “Link Table” is the solution to bypass the generation of a synthetic key table.

This table will contain two kinds of fields:

  • A “foreign key”, built from the fields that are common to the two tables
  • The fields that have been used to create this new “foreign key”

This “Link Table” will have the following structure:

2_QV_Link_Table.PNG

In our case, the structure of the “Link Table” will be the following:

3_QV_Link_Table.PNG

How to proceed? Add the needed fields to the tables to be linked

Before creating the “Link Table”, we must add the fields to the tables that we want to link together.

Remark: A best practice to create this “Foreign_Key” field is to separate the different fields with “|”.

So, in our case, the fields in the table SALESDETAILS will be added as follows:

4_QV_Link_Table.PNG

The fields in the table BUDGET will be added as follows:

5_QV_Link_Table.PNG
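
Since the exact script is in the screenshots, the following is only a hedged sketch of what the two loads could look like; every field and file name here (Year, Month, Region, SalesAmount, BudgetAmount, the .qvd sources) is an assumption for illustration:

// Build the composite Foreign_Key in each source table, keeping the common
// fields for now (they are needed to build the Link Table, and dropped after).
SALESDETAILS:
LOAD
    Year,
    Month,
    Region,
    SalesAmount,
    Year & '|' & Month & '|' & Region AS Foreign_Key
FROM SalesDetails.qvd (qvd);

BUDGET:
LOAD
    Year,
    Month,
    Region,
    BudgetAmount,
    Year & '|' & Month & '|' & Region AS Foreign_Key
FROM Budget.qvd (qvd);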

Create the “Link table”

The fields needed to create the “Link Table” are now added, so we can create the table as follows:

Click on “Tab / Add Tab” and name it “LINK_TABLE” (1).

6_QV_Link_Table.PNG

Type the following script:

(1) The name of the table

(2) The names of the fields should be the same in each table

(3) Use the CONCATENATE instruction

7_QV_Link_Table.PNG
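
The real script is in the screenshot above; here is a hedged sketch of what it could look like, matching the three annotations and reusing the assumed field names from before:

LINK_TABLE:                                             // (1) the name of the table
LOAD DISTINCT
    Year & '|' & Month & '|' & Region AS Foreign_Key,   // (2) same field names as in each source table
    Year,
    Month,
    Region
RESIDENT SALESDETAILS;

CONCATENATE (LINK_TABLE)                                // (3) the CONCATENATE instruction
LOAD DISTINCT
    Year & '|' & Month & '|' & Region AS Foreign_Key,
    Year,
    Month,
    Region
RESIDENT BUDGET;

// The common fields now live only in LINK_TABLE; dropping them from the
// source tables prevents the synthetic key from coming back.
DROP FIELDS Year, Month, Region FROM SALESDETAILS, BUDGET;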

Reload the data (1) and check the result (2)

8_QV_Link_Table.PNG

The result should be like this:

9_QV_Link_Table.PNG

Creepy Dolls - A Technology and Privacy Nightmare!

Abhinav Agarwal - Sun, 2015-06-07 22:22
This post was first published on LinkedIn on 20th May, 2015.

"Hi, I'm Chucky. Wanna play?"[1]  Fans of the horror film genre will surely recall these lines - innocent-sounding on their own, yet bone-chilling in the context of the scene in the movie - that Chucky, the possessed demonic doll, utters in the cult classic, "Child's Play". Called a "cheerfully energetic horror film" by Roger Ebert [2], the movie was released to more than a thousand screens on its debut in November 1988 [3]. It went on to spawn at least five sequels and developed a cult following of sorts over the next two decades [4].

Chucky the doll (image credit: http://www.shocktillyoudrop.com/)

In "Child's Play", Chucky the killer doll stays quiet around the adults - at least initially - but carries on secret conversations with Andy, and is persuasive enough to convince him to skip school and travel to downtown Chicago. Chucky understands how children think, and can evidently manipulate - or convince, depending on how you frame it - Andy into doing little favours for him. A doll that could speak, hear, see, understand, and have a conversation with a human was, in the eighties, the stuff of science fiction - or, in the case of "Child's Play", of a horror movie.


Edison Talking Doll (image credit: www.davescooltoys.com)

A realistic doll that could talk and converse was for long the "holy grail" of dollmakers [5]. It will come as a huge surprise to many - at least it did to me - that within a few years of the invention of the phonograph by Thomas Edison in 1877, a doll with a pre-recorded voice had been developed and marketed in 1890! It didn't have a very happy debut, however. After "several years of experimentation and development", the Edison Talking Doll, when it launched in 1890, "was a dismal failure that was only marketed for a few short weeks."[6] Talking dolls seem to have made their entry into mainstream retail only with the advent of "Chatty Cathy" - released by Mattel in the 1960s - which worked on a simple pull-string mechanism. The quest to make these dolls more interactive and more "intelligent" continued; "Amazing Amanda" was another milestone in this development; it incorporated "voice-recognition and memory chips, sensory technology and facial animatronics" [7]. It was touted as "an evolutionary leap from earlier talking dolls like Chatty Cathy of the 1960's" by some analysts [8]. In some ways that assessment was not off-the-mark. After all, "Amazing Amanda" utilized RFID technology - among the hottest technology buzzwords a decade back. "Radio-frequency tags in Amanda's accessories - including toy food, potty and clothing - wirelessly inform the doll of what it is interacting with." This is what enabled "Amazing Amanda" to differentiate between "food" (pizza, or "cookies, pancakes and spaghetti") and "juice"[9]. However, even with all these developments and capabilities, the universe of what these toys could do was severely limited. At most they could recognize the voice of the child as its "mommy".
Amazing Amanda doll (image credit: amazing-amanda.fuzzup.net)

They were constrained by both the high price of storage (flash storage is much sturdier than spinning hard drives, but an order of magnitude costlier; this limits the amount of storage possible) and limited computational capability (putting a high-end microprocessor inside every doll would make them prohibitively expensive). The flip side was that what the toys spoke at home to the children stayed at home. These toys had a limited set of pre-programmed sentences and emotions they could convey, and if you wanted something different, you went out and bought a new toy, or in some cases, a different cartridge.


That's where things stood. Till now.

Screenshot of ToyFair website

Between February 14-17, 2015, the Jacob K. Javits Convention Center in New York saw "the Western Hemisphere’s largest and most important toy show"[10] - the 2015 Toy Fair. This was a trade show, which meant that "Toy Fair is not open to the public. NO ONE under the age of 18, including infants, will be admitted."[11] It featured a "record-breaking 422,000+ net square feet of exhibit space"[12] and hundreds of thousands of toys. Yet no children were allowed. Be that as it may, there was no dearth of, let's say, "innovative" toys. Apart from an "ultra creepy mechanical doll, complete with dead eyes", a fake fish pet taken to a "whole new level of weird", or a "Doo Doo Head" doll that had the shape of you-guessed-it [13], of particular interest was a "Hello Barbie" doll, launched by the Fortune 500 behemoth, Mattel. This doll had several USPs to its credit. It featured voice-recognition software, voice recording capabilities, the ability to upload recorded conversations to a server (presumably Mattel's or ToyTalk's) in the cloud, over "Wi-Fi" - as a representative at the exhibition took pains to emphasize, repeatedly - and give "chatty responses."[14] This voice data would be processed and analyzed by the company's servers. The doll would learn the child's interests, and be able to carry on a conversation on those topics - made possible by the fact that the entire computational and learning capabilities of a server farm in the cloud could be accessed by every such toy. That the Barbie franchise is a vital one to Mattel could not be overstated. The Barbie brand netted Mattel $1.2 billion in FY 2013 [15], but this represented a six per cent year-on-year decline. Mattel attributed this decline in Barbie sales in part to "product innovation not being strong enough to drive growth." The message was clear. Something very "innovative" was needed to jump-start sales. To make that technological leap forward, Mattel decided to team up with ToyTalk.

ToyTalk is a San Francisco-based start-up, and its platform powered the voice-recognition software used by "Hello Barbie". ToyTalk is headed by CEO Oren Jacob, "Pixar's former CTO, who worked at the groundbreaking animation company for 20 years" [16], and claims "$31M in funding from Greylock Partners, Charles River Ventures, Khosla Ventures, True Ventures and First Round Capital as well as a number of angel investors." [17]

Cover of Misery, by Stephen King (published by Viking Press)

The voice recognition software would allow Mattel and ToyTalk to learn the preferences of the child, and over time refine the responses that Barbie would communicate back. As the Mattel representative put it, "She's going to get to know all my likes and all my dislikes..."[18] - a statement that at one level reminds one of Annie Wilkes when she says, "I'm your number one fan."[19] We certainly don't want to be in Paul Sheldon's shoes.

Hello Barbie's learning would start happening from the time the doll was switched on and connected to a Wi-Fi network. ToyTalk CEO Oren Jacob said, "we'll see week one what kids want to talk about or not" [20]. These recordings, once uploaded to the company's servers, would be used by "ToyTalk's speech recognition platform, currently powering the company's own interactive iPad apps including The Winston Show, SpeakaLegend, and SpeakaZoo" and which then "allows writers to create branching dialogue based on what children will potentially actually say, and collects kids' replies in the cloud for the writers to study and use in an evolving environment of topics and responses."[20]. Some unknown set of people, sitting in some unknown location, would potentially get to hear entire conversations of a child before his parents would.

If Mattel or ToyTalk did not anticipate the reaction this doll would generate, one can only put it down to the blissful disconnect from the real world that Silicon Valley entrepreneurs often develop, surrounded as they are by similar-thinking digerati. In any case, the responses were swift, and in most cases brutal. The German magazine "Stern" headlined an article on the doll - "Mattel entwickelt die Stasi-Barbie" [21]. Even without the benefit of translation, the word "Stasi" stood out like a red flag. In any case, if you wondered, the headline translated to "Mattel developed the Stasi Barbie" [22]. Stern "curtly re-baptised" it "Barbie IM". "The initials stand for “Inoffizieller Mitarbeiter”, informants who worked for East Germany’s infamous secret police, the Stasi, during the Cold War." [23] [24]. A Newsweek article carried a story, "Privacy Advocates Call Talking Barbie 'Surveillance Barbie'"[25]. France 24 wrote - "Germans balk at new ‘Soviet snitch’ Barbie" [26]. The ever-acerbic The Register dug into ToyTalk's privacy policy on the company's web site, and found these gems [27]:
Screenshot of ToyTalk's Privacy page

- "When users interact with ToyTalk, we may capture photographs or audio or video recordings (the "Recordings") of such interactions, depending upon the particular application being used.
- We may use, transcribe and store such Recordings to provide and maintain the Service, to develop, test or improve speech recognition technology and artificial intelligence algorithms, and for other research and development or internal purposes."

Further reading revealed that what your child spoke to the doll in the confines of his home in, say, suburban Troy, Michigan, could end up travelling halfway across the world, to be stored on a server in a foreign country - "We may store and process personal information in the United States and other countries." [28]

What information would ToyTalk share with "Third Parties" was equally disturbing, both for the amount of information that could potentially be shared as well as for the vagueness in defining who these third-parties could possibly be - "Personal information"; "in an aggregated or anonymized form that does not directly identify you or others;"; "in connection with, or during negotiations of, any merger, sale of company assets, financing or acquisition, or in any other situation where personal information may be disclosed or transferred as one of the business assets of ToyTalk"; "We may also share feature extracted data and transcripts that are created from such Recordings, but from which any personal information has been removed, with Service Providers or other third parties for their use in developing, testing and improving speech recognition technology and artificial intelligence algorithms and for research and development or other purposes."[28] A child's speech, words, conversation, voice - as recorded by the doll - was the "business asset" of the company.

And lest the reader have any concerns about the safety and security of the data on the company's servers, the following disclaimer put paid to any reassurances on that front as well: "no security measures are perfect or impenetrable and no method of data transmission that can be guaranteed against any interception or other type of misuse."[28] If the sound of someone washing their hands of a problem could be put down on paper, that sentence is what it would conceivably look like.

Apart from the firestorm of criticism described above, the advocacy group "Campaign for a Commercial Free Childhood" started a campaign petitioning Mattel CEO Christopher Sinclair to stop "Hello Barbie" immediately [29].

The brouhaha over "Hello Barbie" is however only symptomatic of several larger issues that have emerged and intersect each other in varying degrees, raising important questions about technology, including the cloud, big data, the Internet of Things, data mining, analytics; privacy in an increasingly digital world; advertising and the ethics of marketing to children; law and how it is able to or unable to cope with an increasingly digitized society; and the impact on children and teens - sociological as well as psychological. Technology and Moore's Law [30] have combined with the convenience of broadband to make possible what would have been in the realm of science fiction even two decades ago. The Internet, while opening up untold avenues of betterment for society at large, has however also revealed itself as not without a dark side - a dilemma universally common to almost every transformative change in society. From the possibly alienating effects of excessive addiction to the Internet to physiological changes that the very nature of the hyperlinked web engenders in humans - these are issues that are only recently beginning to attract the attention of academics and researchers. The basic and most fundamental notions of what people commonly understood as "privacy" are not only being challenged in today's digital world, but in most cases without even a modicum of understanding on the part of the affected party - you. In the nebulous space that hopefully still exists between those who believe in technology as the only solution capable of delivering a digital nirvana to all and every imaginable problem in society on the one hand and the Luddites who see every bit of technology as a rabid byte (that's a bad pun) against humanity lies a saner middle ground that seeks to understand and adapt technology for the betterment of humanity, society, and the world at large.

So what happened to Chucky? Well, as we know, it spawned a successful and profitable franchise of sequels and assorted merchandise. Which direction "Hello Barbie" takes is of less interest to me than the broader questions I raised in the previous paragraph.

References:
[1] http://www.imdb.com/title/tt0094862/quotes?item=qt0289926 
[2] "Child's Play" review, http://www.rogerebert.com/reviews/childs-play-1988
[3] http://www.the-numbers.com/movie/Childs-Play#tab=box-office
[4] https://en.wikipedia.org/wiki/Child%27s_Play_%28franchise%29
[5] "A Brief History of Talking Dolls--From Bebe Phonographe to Amazing Amanda", http://collectdolls.about.com/od/dollsbymaterial/a/talkingdolls.htm
[6] "Edison Talking Doll", http://www.edisontinfoil.com/doll.htm
[7] http://www.canada.com/story.html?id=f4370a3c-903d-4728-a9a4-3d3f941055a6
[8] http://www.nytimes.com/2005/08/25/technology/circuits/25doll.html?pagewanted=all&_r=0
[9] http://www.canada.com/story.html?id=f4370a3c-903d-4728-a9a4-3d3f941055a6
[10] http://www.toyfairny.com/toyfair/Toy_Fair/Show_Info/A_Look_Back.aspx
[11] http://www.toyfairny.com/ToyFair/ShowInfo/About_the_Show/Toy_Fair/Show_Info/About_the_Show.aspx
[12] http://www.toyfairny.com/ToyFair/ShowInfo/About_the_Show/Toy_Fair/Show_Info/About_the_Show.aspx
[13] http://mashable.com/2015/02/15/weird-toys-2015-toy-fair/
[14] https://www.youtube.com/watch?feature=player_embedded&v=RJMvmVCwoNM
[15] http://corporate.mattel.com/PDFs/2013_AR_Report_Mattel%20Inc.pdf
[16] http://www.fastcompany.com/3042430/most-creative-people/using-toytalk-technology-new-hello-barbie-will-have-real-conversations-
[17] https://www.toytalk.com/about/
[18] https://www.youtube.com/watch?feature=player_embedded&v=RJMvmVCwoNM
[19] http://www.imdb.com/title/tt0100157/quotes?item=qt0269492
[20] http://www.fastcompany.com/3042430/most-creative-people/using-toytalk-technology-new-hello-barbie-will-have-real-conversations-
[21] http://www.stern.de/digital/ueberwachung/barbie-wird-zum-spion-im-kinderzimmer-2173997.html
[22] https://translate.google.co.in/?ie=UTF-8&hl=en&client=tw-ob#auto/en/Mattel%20entwickelt%20die%20Stasi-Barbie
[23] http://www.france24.com/en/20150224-hello-barbie-germany-stasi-data-collection/
[24] http://www.stern.de/digital/ueberwachung/barbie-wird-zum-spion-im-kinderzimmer-2173997.html
[25] http://www.newsweek.com/privacy-advocates-want-take-wifi-connected-hello-barbie-offline-313432
[26] http://www.france24.com/en/20150224-hello-barbie-germany-stasi-data-collection/
[27] http://www.theregister.co.uk/2015/02/19/hello_barbie/
[28] https://www.toytalk.com/legal/privacy/
[29] http://org.salsalabs.com/o/621/p/dia/action3/common/public/?action_KEY=17347
[30] http://en.wikipedia.org/wiki/Moore's_law


Disclaimer: Views expressed are personal.


© 2015, Abhinav Agarwal. All rights reserved.

Partner Webcast – Oracle Database 12c: Application Express 5.0 for Cloud development

If you have the Oracle Database, you already have Application Express. When you get Oracle Database Cloud, you get the Application Express full development platform for cloud-based applications. Since...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Flipkart and Focus 4 - Beware the Whispering Death

Abhinav Agarwal - Sun, 2015-06-07 12:43
The fourth part of my series on Flipkart and its apparent loss of Focus and its battle with Amazon appeared in DNA on April 20th, 2015.

Part 4 – Beware the Whispering Death
Monopolies may have the luxury of getting distracted. If you were a Microsoft in the 1990s, you could force computer manufacturers to pay you an MS-DOS royalty for every computer they sold, irrespective of whether the computer had a Microsoft operating system installed on it or not[1]. You dared not go against Microsoft, because if you did, it could snuff you out – “cut off the oxygen supply[2]”, to put it more evocatively. But if you are a monopoly, you do have to keep one eye on the regulator[3], which distracts you. If you are not a monopoly, you have to keep one eye on the competition (despite what Amazon may keep saying to the contrary, that they “just ignore the competition”[4]).



Few companies exist in a competitive vacuum. In Flipkart’s case, the competition is Amazon – make no mistake about it. Yes, there is SnapDeal, eBay India, and even HomeShop18; but the numbers speak for themselves. Flipkart has pulled ahead of the pack. As long as Amazon had not entered the Indian market, Flipkart’s rise was more or less certain, thanks to its sharp focus on expanding its offerings, honing its supply-chain, and successfully raising enough capital to not have to worry about its bottom-line while it furiously expanded. Amazon India made a quiet entry on the fifth of June 2013[5], with two categories – books, and movies & TV shows – but followed up with a very splashy blitz two months later in August (it offered 66% discounts on many books[6] to mark 66 years of Indian independence – I should know, I binge-bought about twenty books!). A little more than a year later, in September 2014, Amazon turned the screws even more when its iconic founder-CEO, Jeff Bezos, visited India. In a very showy display that earned it a ton of free advertising, Bezos wore a sherwani and got himself photographed swinging from an Indian truck[7], met Narendra Modi, the Indian Prime Minister[8], and reiterated Amazon’s commitment and confidence in the Indian market[9] - all this without ever taking Flipkart’s name. It didn’t help Flipkart that on July 30th 2014, Amazon India had announced an additional $2 billion investment in India[10]. It didn’t hurt Amazon either that it timed the press release exactly one day after Flipkart closed $1 billion in funding[11] - this was entirely in Amazon’s way of jiu jitsu-ing its competitors (so much for “ignoring the competition”). Flipkart on its part ran into yet more needless problems with its much-touted “Big Billion” sale that was mercilessly ambushed by competitors[12], and which resulted in its founders having to tender an apology[13] for several glitches its customers faced during the sale. Then there were questions on just how much money it actually made from the event, which I analyzed[14].

Flipkart seemed to be getting distracted.

When facing a charged-up Michael Holding, you cannot afford to let your guard down, even if you are batting on 91. Ask the legendary Sunil Gavaskar[15]. Amazon is the Michael Holding of competitors. Ask Marc Lore, the founder of Jet, “which is planning to launch a sort of online Costco later this spring with 10 million discounted products”[16]. Marc who? He is the co-founder of Quidsi. Quidsi who? Quidsi is (was) the company behind the website Diapers.com, and which was acquired by Amazon. Therein lies a tale.

Diapers.com was the website of Quidsi, a New Jersey start-up founded in 2005 by Marc Lore and Vinit Bharara to solve a very real problem: children running through diapers at a crazy pace, and “dragging screaming children to the store is a well-known parental hassle.” What made selling diapers online unviable for retailers was the cost involved in “shipping big, bulky, low-margin products like jumbo packs of Huggies Snug and Dry to people’s front doors.” Diapers.com solved the problem by using “software to match every order with the smallest possible shipping box, minimizing excess weight and thus reducing the per-order shipping cost.” Within a few years, it grew from zero to over $300 million in annual sales. It was only when VC firms, including Accel Partners, pumped in $50 million that Amazon and Jeff Bezos started to pay attention. Sometime in 2009, Amazon started to drop prices on diapers and other baby products by up to 30 percent. Quidsi (the company behind Diapers.com) lowered prices – as an experiment – only to watch Amazon’s website change prices accordingly. Quidsi fared well under Amazon’s assault, “at least at first.” However, growth slowed. “Investors were reluctant to furnish Quidsi with additional capital, and the company was not yet mature enough for an IPO.” Quidsi and WalMart vice chairman (and head of WalMart.com) Eduardo Castro-Wright spoke, but Quidsi’s asking price of $900 million was more than what WalMart was willing to pay. Even as Lore and Bharara travelled to Seattle to meet with Amazon for a possible deal, Amazon launched Amazon Mom – literally while the two were in the air and therefore unreachable by a frantic Quidsi staff! “Quidsi executives took what they knew about shipping rates, factored in Procter and Gamble’s wholesale prices, and calculated that Amazon was on track to lose $100 million over three months in the diapers category alone.” Amazon offered $580 million. WalMart upped its offer to $600 million – this offer was revealed to Amazon, because of the conditions in the preliminary term sheet that required Quidsi “to turn over information about any subsequent offers.” When Amazon executives learned of this offer, “they ratcheted up the pressure even further, threatening the Quidsi founders that “sensei,” being such a furious competitor, would drive diaper prices to zero if they went with Walmart.” Quidsi folded, sold to Amazon, and the deal was announced on November 8, 2010[17]. Marc Lore continued with Amazon for two years after that – most likely the result of a typical retention and no-compete clause in such acquisitions.

The tale of Quidsi is a cautionary tale for any company going head-to-head with Amazon. For more details on the fascinating history of Amazon, I would recommend Brad Stone’s book, “The Everything Store: Jeff Bezos and the Age of Amazon”[18] – from which I have adapted the example of Diapers.com above. You can read another report here[19]. I suspect you may well find some copies of the book lying around in Flipkart’s Bengaluru offices!

In its evolution and growth as an online retailer, Flipkart has adopted and emulated several of Amazon’s successful features. Arguably the most successful innovation from Amazon has been to reduce, or in some cases entirely eliminate, the friction of ordering goods from its website. The pace and extent of innovation is quite breathtaking. A brief overview will help illustrate the point.
Amazon used to charge for every order placed in addition to a handling charge per item (typically 99 cents). In 2002, it launched “Free Super Saver Shipping on qualifying orders over $49” as a test. After seeing the results, it lowered this threshold to $25[20]. For over ten years that price held, till 2013, when it raised this minimum to $35[21]. Not content with this, to lure in that segment of customers who wanted to order even a single item, and have it delivered in two days or less, Amazon launched a new express shipping option – Amazon Prime – where “for a flat membership fee of $79 per year, members get unlimited, express two-day shipping for free, with no minimum purchase requirement.”[22] This proved to be a blockbuster hit for Amazon, and the company piled on goodies to this program – Amazon Instant Video, an “instant streaming of selected movies and TV shows” at no additional cost[23]. That same year it launched “Library Lending for Kindle Books”, which allowed customers to “to borrow Kindle books from over 11,000 local libraries to read on Kindle and free Kindle reading apps”[24], with no due date, and added that to the Prime program, at no extra cost. In 2011 it launched “Subscribe & Save” – that let customers order certain items on a regular basis at a discounted price – basically you had to select the frequency, and the item would be delivered every month/quarter without your having to re-order it. Amazon launched “Kindle Matchbook”, where, “For thousands of qualifying books, your past, present, and future print-edition purchases now allow you to buy the Kindle edition for $2.99 or less.[25]” Similarly, its “AutoRip” program allowed customers to receive free MP3 format versions of CDs they had purchased from Amazon (since 1998)[26], and which was extended to Vinyl Records[27].

If all this was not enough, in 2015 Amazon launched a physical button called Dash Button – on April 1st, no less – that would let customers order an item of their selection with one press of the button. It could be their favourite detergent, dog food, paper towels, diapers – an expanding selection. You could stick that button anywhere – your refrigerator, car dashboard, anywhere. It was indeed so outlandish that many thought it was an April Fool’s gimmick[28].
Amazon has been relentless in eliminating friction between the customer and the buying process on Amazon on the one hand, and on squeezing out its competitors with a relentless, ruthless pressure on the other. It manages to do all this while topping customer satisfaction surveys[29], year after year[30].

Flipkart has certainly not been caught flat-footed. It’s been busy introducing several similar programs. It began with free shipping, then raised the minimum to ₹100, then ₹200, and eventually ₹500. Somewhere in between, it modified that to exclude books fulfilled by WS Retail (which was co-founded by Flipkart founders and which accounts for more than three-fourths of all products sold on Flipkart[31]) from that minimum. In May 2014, it launched Flipkart First, an Amazon Prime-like membership program that entitled customers to free “in-a-day” shipping for an annual fee of ₹500[32]. It also tied up with Mumbai’s famed “dabbawalas” to solve the last-mile connectivity problem for deliveries[33].

Flipkart’s foray into digital music, however, was less than successful. It shuttered its online music store, Flyte, in June 2013, a little over a year after launching it[34]. Some speculated it was unable to compete with free offerings like Saavn, Gaana, etc., and was unable to meet the annual minimum guarantees it had signed up for with music labels[35]. Whether it really needed to pull the plug so soon is debatable – for all practical purposes it may have signalled weakness to the world. Competitors watch these developments very, very closely. Its e-book business has been around for a little over two years, but it is not clear how much traction it has in the market. With the launch of the Amazon Kindle in India, Flipkart will see it squeezed even more. The history of the ebook market is not a happy tale – if you are not Amazon or the customer.

The market for instant-gratification refuses to stand still. Amazon upped the ante by launching Amazon Prime Now in December 2014. Prime program customers were guaranteed one-hour delivery on tens of thousands of items for $7.99 (two-hour delivery was free)[36]. This program was launched in Manhattan, and rapidly expanded to half a dozen cities in the US by April 2015[37]. Closer to home, in India, it launched KiranaNow in March 2015, in Bangalore, promising delivery of groceries and related items in four hours[38].

More than anything else, the online retail world is a race to eliminate friction from the buying process, to accelerate and enable buying decisions – as frequently as possible – and to provide instant gratification through instant delivery (in the case of e-books or streaming music or video) or one-hour deliveries. Flipkart may well be the incumbent and the player to beat in the Indian market, but Amazon brings with it close to two decades of experience – experience of battling it out in conditions that are very similar to the Indian market in several respects. More ominously for Flipkart, Amazon has won many more battles than it has lost. Distraction can prove to be a fatal attraction and affliction.

[1] This is described in James Wallace’s book, “Overdrive: Bill Gates and the Race to Control Cyberspace”, http://www.amazon.in/gp/product/B00J348MXG/ref=as_li_tl?ie=UTF8&camp=3626&creative=24822&creativeASIN=B00J348MXG&linkCode=as2&tag=abhisblog-21&linkId=XIHAIBIQ3H6L6NMH
[2] "BBC NEWS | Special Report | 1998 | 04/98 | Microsoft | USA versus Microsoft: The first two days", http://news.bbc.co.uk/2/hi/special_report/1998/04/98/microsoft/198390.stm
[3] " Justice to Launch Probe of Microsoft ", http://www.washingtonpost.com/wp-srv/business/longterm/microsoft/stories/1993/launch082193.htm
[4] "We just ignore our competitors, never felt pressure from Alibaba's rise: Jeff Bezos, CEO Amazon ", http://articles.economictimes.indiatimes.com/2014-09-29/news/54437158_1_amazon-india-expectations-competitors
[5] "Amazon Launches In India", http://www.amazon.in/gp/feature.html/ref=amb_link_183716847_70?ie=UTF8&docId=1000728823&pf_rd_m=A1VBAL9TL5WCBF&pf_rd_s=center-4&pf_rd_r=05DGNQKB48Z6RV3KZV1G&pf_rd_t=1401&pf_rd_p=605972407&pf_rd_i=1000834593
[6] https://www.dropbox.com/s/jb9wa2vqu1x0p4x/AmazonIn_2013.png?dl=0
[7] "jeff bezos truck bangalore - Google Search", https://www.google.co.in/search?q=jeff+bezos+truck+bangalore&tbm=isch&tbo=u&source=univ&sa=X&ei=_IszVc3QGc-LuAT5i4CADw&ved=0CB0QsAQ&biw=1600&bih=741
[8] "Amazon chief Jeffrey Bezos calls on Prime Minister Modi - The Times of India", http://timesofindia.indiatimes.com/business/india-business/Amazon-chief-Jeffrey-Bezos-calls-on-Prime-Minister-Modi/articleshow/44229776.cms
[9] "No obstacles to growth in India: Amazon CEO Jeff Bezos", http://www.hindustantimes.com/business-news/no-obstacles-to-growth-in-india-says-amazon-ceo-jeff-bezos/article1-1269464.aspx
[10] "Amazon Announces Additional US $2 Billion Investment in India", http://www.amazon.in/gp/feature.html?ie=UTF8&docId=1000818573
[11] "India's Flipkart Raises $1 Billion in Fresh Funding - WSJ", http://www.wsj.com/articles/indias-flipkart-raises-1-billion-in-fresh-funding-1406641579?mod=LS1
[12] "Ambushed: When Flipkart’s Big Billion Sale turned into a nightmare | Best Media Info, News and Analysis on Indian Advertising, Marketing and Media Industry.", http://www.bestmediainfo.com/2014/10/ambushed-when-flipkarts-big-billion-sale-turned-into-a-nightmare/
[13] "Flipkart’s ‘Big Billion Day Sale’ Prompts Big Apology - India Real Time - WSJ", http://blogs.wsj.com/indiarealtime/2014/10/08/flipkarts-big-billion-day-sale-prompts-big-apology/
[14] "A Billion Dollar Sale, And A Few Questions", http://www.dnaindia.com/analysis/standpoint-a-billion-dollar-sale-and-a-few-questions-2047853
[15] "3rd Test: India v West Indies at Ahmedabad, Nov 12-16, 1983", http://www.espncricinfo.com/ci/engine/match/63352.html
[16] "Why Amazon Refuses to Wear Purple Lanyards in Vegas", http://www.bloomberg.com/news/articles/2015-04-01/why-amazon-refuses-to-wear-purple-lanyards-in-vegas
[17] "Amazon.com to Acquire Diapers.com and Soap.com", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1493202
[18] “The Everything Store: Jeff Bezos and the Age of Amazon”, by Brad Stone, http://www.amazon.in/Everything-Store-Brad-Stone/dp/0593070461/tag=abhisblog-21&ref=sr_1_1?ie=UTF8&qid=1429439636&sr=8-1&keywords=the+everything+store#reader_0593070461
[19] "Amazon vs. Jet.com: Marc Lore Aims to Beat Bezos", http://www.bloomberg.com/news/features/2015-01-07/amazon-bought-this-mans-company-now-hes-coming-for-them-correct
[20] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=503037
[21] "Amazon Raises Free Shipping Threshold From $25 to $35", http://www.pcmag.com/article2/0,2817,2426202,00.asp
[22] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=669786
[23] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1531234
[24] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1552678
[25] "Amazon.com: Kindle MatchBook", https://www.amazon.com/gp/digital/ep-landing-page?ie=UTF8&*Version*=1&*entries*=0
[26] "Introducing “Amazon AutoRip” – Customers Now Receive Free MP3 Versions of CDs Purchased From Amazon – Past, Present and Future", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1773251
[27] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1802939
[28] "Amazon launches a product so gimmicky we thought it was an April Fools' joke", http://venturebeat.com/2015/03/31/amazon-launches-a-product-so-gimmicky-we-thought-it-was-an-april-fools-joke/
[29] "Customer Satisfaction Lowest at Wal-Mart, Highest at Nordstrom and Amazon", http://247wallst.com/retail/2015/02/18/customer-satisfaction-lowest-at-wal-mart-highest-at-nordstrom-and-amazon/
[30] "Customers Rank Amazon #1 in Customer Satisfaction", http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1001924291
[31] "Flipkart top seller WS Retail to separate logistics arm Ekart into wholly-owned unit", http://articles.economictimes.indiatimes.com/2015-01-21/news/58306262_1_ekart-ws-retail-logistics-arm
[32] "India's Flipkart Launches Subscription Service for Customers", http://thenextweb.com/in/2014/05/08/indias-flipkart-launching-amazon-prime-like-subscription-service-called-flipkart-first/
[33] "Now Mumbai's famed dabbawalas will deliver your Flipkart buys", http://www.dnaindia.com/money/report-now-mumbai-s-famed-dabbawalas-will-deliver-your-flipkart-buys-2076276
[34] "Flipkart closes Flyte MP3 store a year after launch", http://www.livemint.com/Consumer/TJOoP9he0fq0EG7S8lRXYK/Flipkart-closes-Flyte-MP3-store-a-year-after-launch.html
[35] "Why Flipkart Shut Down Flyte Music - MediaNama", http://www.medianama.com/2013/05/223-why-flipkart-shut-flyte-music/
[36] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=2000521
[37] "Amazon again expands 'Prime Now' one-hour delivery service, this time to Austin", http://www.geekwire.com/2015/amazon-again-expands-prime-now-one-hour-delivery-service-this-time-to-austin/

[38] "Now Amazon will deliver from your local kirana store", http://www.dnaindia.com/money/report-now-amazon-will-deliver-from-your-local-kirana-store-2076280


© 2015, Abhinav Agarwal (अभिनव अग्रवाल). All rights reserved.

The State of SaaS

Floyd Teter - Sun, 2015-06-07 12:09
I've been reading quite a bit lately about the maturation of SaaS...how the market is transitioning away from the "early adopter" phase into more of a mainstream marketplace.  With all due respect to those making such claims, I must offer a dissenting opinion.  While I am a big fan of SaaS, I still see at least three factors that must be addressed before SaaS can be considered a mature offering.  These three areas represent the soft underbelly of SaaS: integration, data state, and fear of losing control.

Integration
Perhaps your experience is different, but I have yet to see a service integration for enterprise software that works reliably out-of-the-box.  Pick your vendor:  Oracle, Workday, Amazon, Microsoft, Salesforce, Infor...it just doesn't happen.  There are too many variations amongst various customer applications.  And, in all honesty, enterprise software vendors just don't seem to be all that good at writing packaged integrations.  That's part of the reason we see integration as a service players like MuleSoft and Boomi making a play in the market.  It is also why so many technology companies offer integration implementation services.  We're still a far cry from easy, packaged integration.

Data State
After spending years in the enterprise software market, I'm firmly convinced that everyone has loads of "dirty data".  Records that poorly model transactions, inconsistent relationships between records, custom data objects that have worked around business rules intended for data governance.  Every closet has a skeleton.  The most successful SaaS implementations I've seen either summarize all those records into opening entries for the new system or junk customer data history altogether.  Both these approaches work in the financial applications, but not so well in HCM or Marketing.  Until we can figure out automated ways to take figurative steaming piles of waste and transform them into beautiful, fragrant rose beds of clean data, SaaS will continue to be a challenging transition for most enterprise software customers.

Fear of Losing Control
Certain customer departments are resistant to SaaS, mostly out of a fear of losing control.  Some is borne of a genuine concern over data security.  Some is over fear of losing job security.  

For those concerned over data security, consider that data security is critical for SaaS vendors.  Without the ability to secure data, they're out of business.  It's a core function for them.  So they hire armies of the best and brightest security people.  And they invest heavily in security systems.  And most customers can't match that, either in terms of talent or systems.  Result:  the SaaS vendors provide security solutions that are simply out of the reach of enterprise software customers.  There is a greater risk in keeping your data in-house than in utilizing a SaaS vendor to protect your data.

For those fearing the loss of job security, they're correct...unless they're willing to change.  The skills of maintaining large back-office enterprise software systems just don't apply in a SaaS world (unless you're working for a SaaS vendor).  I'd lump database administrators and database developers into this category.  However, there are new opportunities for those skills...developing and maintaining software that enables strategic in-house initiatives.  There are also opportunities to extend SaaS applications to support unique in-house needs.  Both scenarios require a change - working more closely with business as a partner rather than as a technology custodian.

Overcoming the fear of losing control will require significant investment in advocacy and evangelism...most customers need information, training, and assurance in overcoming these fears.  But we can't really say that SaaS is "there" until we see a significant turn in perceptions here.


So there you have it.  Is SaaS up-and-coming? Absolutely.  Is the SaaS market transitioning to a mainstream, mature marketplace?  No...lots of maturing needed in the areas of integration, data state, and fear of losing control before we can get there.

As always, your thoughts are welcome in the comments...

An alternative to DBA_EXTENTS optimized for LMT

Yann Neuhaus - Sun, 2015-06-07 11:45

This is a script I have had for several years, since tablespaces became locally managed. When we want to know which segment a block (identified by file id and block id) belongs to, querying the DBA_EXTENTS view can take very long when you have a lot of datafiles and a lot of segments. This view, using the underlying X$ tables and constrained by hints, is faster when queried for one FILE_ID/BLOCK_ID. I wrote it in 2006, when dealing with a lot of corruptions on several 10TB databases with 5000 datafiles.

Since then, I've used it only a few times, so there is no guarantee that the plan is still optimal in current versions, but the approach of first filtering the segments that are in the same tablespace as the file_id makes it optimal for a search by file_id and block_id.

The script

Here is the creation of the DATAFILE_MAP view:

create or replace view datafile_map as
WITH
 l AS ( /* LMT extents indexed on ktfbuesegtsn,ktfbuesegfno,ktfbuesegbno */
  SELECT ktfbuesegtsn segtsn,ktfbuesegfno segrfn,ktfbuesegbno segbid, ktfbuefno extrfn, 
         ktfbuebno fstbid,ktfbuebno + ktfbueblks - 1 lstbid,ktfbueblks extblks,ktfbueextno extno 
  FROM sys.x$ktfbue
 ),
 d AS ( /* DMT extents ts#, segfile#, segblock# */
  SELECT ts# segtsn,segfile# segrfn,segblock# segbid, file# extrfn, 
         block# fstbid,block# + length - 1 lstbid,length extblks, ext# extno 
  FROM sys.uet$
 ),
 s AS ( /* segment information for the tablespace that contains afn file */
  SELECT /*+ materialized */
  f1.fenum afn,f1.ferfn rfn,s.ts# segtsn,s.FILE# segrfn,s.BLOCK# segbid ,s.TYPE# segtype,f2.fenum segafn,t.name tsname,blocksize
  FROM sys.seg$ s, sys.ts$ t, sys.x$kccfe f1,sys.x$kccfe f2  
  WHERE s.ts#=t.ts# AND t.ts#=f1.fetsn AND s.FILE#=f2.ferfn AND s.ts#=f2.fetsn 
 ),
 m AS ( /* extent mapping for the tablespace that contains afn file */
SELECT /*+ use_nl(e) ordered */ 
 s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
 FROM s,l e
 WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
 UNION ALL
 SELECT /*+ use_nl(e) ordered */  
 s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
 FROM s,d e
  WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
 UNION ALL
 SELECT /*+ use_nl(e) use_nl(t) ordered */ 
 f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.ktfbfebno fstbid,e.ktfbfebno+e.ktfbfeblks-1 lstbid,e.ktfbfeblks extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
 FROM sys.x$kccfe f,sys.x$ktfbfe e,sys.ts$ t
 WHERE t.ts#=f.fetsn and e.ktfbfetsn=f.fetsn and e.ktfbfefno=f.ferfn
 UNION ALL
 SELECT /*+ use_nl(e) use_nl(t) ordered */ 
 f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.block# fstbid,e.block#+e.length-1 lstbid,e.length extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
 FROM sys.x$kccfe f,sys.fet$ e,sys.ts$ t
 WHERE t.ts#=f.fetsn and e.ts#=f.fetsn and e.file#=f.ferfn
 ),
 o AS (
  SELECT s.tablespace_id segtsn,s.relative_fno segrfn,s.header_block   segbid,s.segment_type,s.owner,s.segment_name,s.partition_name 
  FROM SYS_DBA_SEGS s 
 )
SELECT 
 afn file_id,fstbid block_id,extblks blocks,nvl(segment_type,decode(segtype,null,'free space','type='||segtype)) segment_type,
 owner,segment_name,partition_name,extno extent_id,extblks*blocksize bytes,
 tsname tablespace_name,rfn relative_fno,m.segtsn,m.segrfn,m.segbid
 FROM m,o WHERE extrfn=rfn and m.segtsn=o.segtsn(+) AND m.segrfn=o.segrfn(+) AND m.segbid=o.segbid(+)
UNION ALL
SELECT 
 file_id+(select to_number(value) from v$parameter WHERE name='db_files') file_id,
 1 block_id,blocks,'tempfile' segment_type,
 '' owner,file_name segment_name,'' partition_name,0 extent_id,bytes,
  tablespace_name,relative_fno,0 segtsn,0 segrfn,0 segbid
 FROM dba_temp_files
;
Sample output
COLUMN   partition_name ON FORMAT   A16
COLUMN   segment_name ON FORMAT   A20
COLUMN   owner ON FORMAT   A16
COLUMN   segment_type ON FORMAT   A16

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1326 and 3782 between block_id and block_id + blocks - 1
SQL> /

 FILE_ID BLOCK_ID  BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME     PARTITION_NAME
-------- -------- ------- ---------------- ---------------- ---------------- ----------------
    1326     3781      32 free space

You identified a free space block.

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1326 and 3982 between block_id and block_id + blocks - 1
SQL> /


 FILE_ID BLOCK_ID  BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME         PARTITION_NAME
-------- -------- ------- ---------------- ---------------- -------------------- ----------------
    1326     3981       8 TABLE PARTITION  TESTUSER         AGGR_FACT_DATA       AFL_P_211

You identified a data block.

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=202 and 100 between block_id and block_id + blocks - 1
SQL> /

   FILE_ID   BLOCK_ID     BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME         PARTITION_NAME
---------- ---------- ---------- ---------------- ---------------- -------------------- ---------------
       202          1       1280 tempfile                          C:\O102\TEMP02.DBF

You identified a tempfile.

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1 and block_id between 0 and 100 order by file_id,block_id;

   FILE_ID   BLOCK_ID     BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME         PARTITION_NAME
---------- ---------- ---------- ---------------- ---------------- -------------------- ---------------
         1          9          8 ROLLBACK         SYS              SYSTEM
         1         17          8 ROLLBACK         SYS              SYSTEM
         1         25          8 CLUSTER          SYS              C_OBJ#
         1         33          8 CLUSTER          SYS              C_OBJ#
         1         41          8 CLUSTER          SYS              C_OBJ#
         1         49          8 INDEX            SYS              I_OBJ#
         1         57          8 CLUSTER          SYS              C_TS#
         1         65          8 INDEX            SYS              I_TS#
         1         73          8 CLUSTER          SYS              C_FILE#_BLOCK#
         1         81          8 INDEX            SYS              I_FILE#_BLOCK#
         1         89          8 CLUSTER          SYS              C_USER#
         1         97          8 INDEX            SYS              I_USER#

You mapped the first segments in the SYSTEM tablespace.

Try it on a database with lots of segments and lots of datafiles, and compare with DBA_EXTENTS. Then you will know which one to choose in case of emergency.
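For comparison, here is the equivalent lookup against DBA_EXTENTS for the same file and block as in the first example above. Note that, unlike datafile_map, it returns no row for free space or tempfile blocks (free extents live in DBA_FREE_SPACE, and tempfiles are not in DBA_EXTENTS at all), and it can be much slower on a database with many extents:

select owner, segment_name, partition_name, segment_type
from dba_extents
where file_id=1326 and 3782 between block_id and block_id + blocks - 1;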

RMAN -- 1 : Backup Job Details

Hemant K Chitale - Sun, 2015-06-07 03:57
Here's a post on how you could be misled by a simple report on the V$RMAN_BACKUP_JOB_DETAILS view.

Suppose I run RMAN Backups through a shell script, like this:

[oracle@localhost Hemant]$ ls -l *sh
-rwxrw-r-- 1 oracle oracle 336 Jun 7 17:30 Backup_DB_Plus_ArchLogs.sh
[oracle@localhost Hemant]$ cat Backup_DB_Plus_ArchLogs.sh
ORACLE_SID=orcl;export ORACLE_SID

rman << EOF
connect target /

spool log to Backup_DB_plus_ArchLogs.LOG

backup as compressed backupset database ;

sql 'alter system switch logfile';
sql 'alter system archive log current' ;

backup as compressed backupset archivelog all;

backup as compressed backupset current controlfile ;

EOF

[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$ ./Backup_DB_Plus_ArchLogs.sh

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Jun 7 17:31:06 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

RMAN>
connected to target database: ORCL (DBID=1229390655)

RMAN>
RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> [oracle@localhost Hemant]$
[oracle@localhost Hemant]$

I then proceed to check the results of the run in V$RMAN_BACKUP_JOB_DETAILS.

SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') StartTime, to_char(end_time,'DD-MON HH24:MI') EndTime,
2 input_type, status
3 from v$rman_backup_job_details
4* where start_time > trunc(sysdate)+17.5/24
SQL> /

STARTTIME ENDTIME INPUT_TYPE STATUS
--------------------- --------------------- ------------- -----------------------
07-JUN 17:31 07-JUN 17:31 DB FULL FAILED

SQL>

It says that I ran one FULL DATABASE Backup that failed. Is that really true? Let me check the RMAN spooled log.

[oracle@localhost Hemant]$ cat Backup_DB_plus_ArchLogs.LOG

Spooling started in log file: Backup_DB_plus_ArchLogs.LOG

Recovery Manager11.2.0.2.0

RMAN>
RMAN>
Starting backup at 07-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=60 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=59 device type=DISK
RMAN-06169: could not read file header for datafile 6 error reason 4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 06/07/2015 17:31:08
RMAN-06056: could not access datafile 6

RMAN>
RMAN>
sql statement: alter system switch logfile

RMAN>
sql statement: alter system archive log current

RMAN>
RMAN>
Starting backup at 07-JUN-15
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=615 RECID=1 STAMP=881773851
channel ORA_DISK_1: starting piece 1 at 07-JUN-15
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=616 RECID=2 STAMP=881773851
input archived log thread=1 sequence=617 RECID=3 STAMP=881773853
input archived log thread=1 sequence=618 RECID=4 STAMP=881774357
input archived log thread=1 sequence=619 RECID=5 STAMP=881774357
channel ORA_DISK_2: starting piece 1 at 07-JUN-15
channel ORA_DISK_2: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v12b_.bkp tag=TAG20150607T173112 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=620 RECID=6 STAMP=881775068
input archived log thread=1 sequence=621 RECID=7 STAMP=881775068
input archived log thread=1 sequence=622 RECID=8 STAMP=881775071
channel ORA_DISK_2: starting piece 1 at 07-JUN-15
channel ORA_DISK_1: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v10y_.bkp tag=TAG20150607T173112 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v292_.bkp tag=TAG20150607T173112 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
Finished backup at 07-JUN-15

Starting Control File and SPFILE Autobackup at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775075_bq83v3nr_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-JUN-15

RMAN>
RMAN>
Starting backup at 07-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 07-JUN-15
channel ORA_DISK_1: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_ncnnf_TAG20150607T173117_bq83v6vg_.bkp tag=TAG20150607T173117 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 07-JUN-15

Starting Control File and SPFILE Autobackup at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775080_bq83v88z_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-JUN-15

RMAN>
RMAN>

Recovery Manager complete.
[oracle@localhost Hemant]$

Hmm. There were *three* distinct BACKUP commands in the script file. The first was BACKUP ... DATABASE ..., the second was BACKUP ... ARCHIVELOG ..., and the third was BACKUP ... CURRENT CONTROLFILE. All three were executed.
Only the first BACKUP execution failed. The subsequent two BACKUP commands, for the ArchiveLogs and the ControlFile, succeeded.
And *yet* the view V$RMAN_BACKUP_JOB_DETAILS shows only that I ran a FULL DATABASE backup that failed. It tells me nothing about the ArchiveLog and ControlFile backups that did succeed!


What if I switch my strategy from using a shell script to an RMAN command file?

[oracle@localhost Hemant]$ ls -ltr *rmn
-rw-rw-r-- 1 oracle oracle 287 Jun 7 17:41 Backup_DB_plus_ArchLogs.rmn
[oracle@localhost Hemant]$ cat Backup_DB_plus_ArchLogs.rmn
connect target /

spool log to Backup_DB_plus_ArchLogs.TXT

backup as compressed backupset database ;

sql 'alter system switch logfile';
sql 'alter system archive log current' ;

backup as compressed backupset archivelog all;

backup as compressed backupset current controlfile;

exit

[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$ rman @Backup_DB_plus_ArchLogs.rmn

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Jun 7 17:42:17 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target *
2>
3> spool log to Backup_DB_plus_ArchLogs.TXT
4>
5> backup as compressed backupset database ;
6>
7> sql 'alter system switch logfile';
8> sql 'alter system archive log current' ;
9>
10> backup as compressed backupset archivelog all;
11>
12> backup as compressed backupset current controlfile;
13>
14> exit[oracle@localhost Hemant]$




SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') StartTime, to_char(end_time,'DD-MON HH24:MI') EndTime,
2 input_type, status
3 from v$rman_backup_job_details
4 where start_time > trunc(sysdate)+17.5/24
5* order by start_time
SQL> /

STARTTIME ENDTIME INPUT_TYPE STATUS
--------------------- --------------------- ------------- -----------------------
07-JUN 17:31 07-JUN 17:31 DB FULL FAILED
07-JUN 17:42 07-JUN 17:42 DB FULL FAILED

SQL>

[oracle@localhost Hemant]$
[oracle@localhost Hemant]$ cat Backup_DB_plus_ArchLogs.TXT

connected to target database: ORCL (DBID=1229390655)

Spooling started in log file: Backup_DB_plus_ArchLogs.TXT

Recovery Manager11.2.0.2.0

Starting backup at 07-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=59 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=50 device type=DISK
RMAN-06169: could not read file header for datafile 6 error reason 4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 06/07/2015 17:42:19
RMAN-06056: could not access datafile 6

Recovery Manager complete.
[oracle@localhost Hemant]$

Now, this time, once the first BACKUP command failed, RMAN seems to have bailed out. It didn't even try executing the subsequent BACKUP commands!

How can V$RMAN_BACKUP_JOB_DETAILS differentiate between the two failed backups?

SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') StartTime, to_char(end_time,'DD-MON HH24:MI') EndTime,
2 input_bytes/1048576 Input_MB, output_bytes/1048576 Output_MB,
3 input_type, status
4 from v$rman_backup_job_details
5 where start_time > trunc(sysdate)+17.5/24
6* order by start_time
SQL> /

STARTTIME ENDTIME INPUT_MB OUTPUT_MB INPUT_TYPE STATUS
--------------------- --------------------- ---------- ---------- ------------- -----------------------
07-JUN 17:31 07-JUN 17:31 71.5219727 34.878418 DB FULL FAILED
07-JUN 17:42 07-JUN 17:42 0 0 DB FULL FAILED

SQL>

The Input_MB figure does indicate that some files were backed up in the first run. Yet it doesn't tell us how much of that was ArchiveLogs and how much was the ControlFile.
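One way to get that breakdown is to go one level below the job view. A minimal sketch against V$BACKUP_SET for the same time window (in this view, BACKUP_TYPE is 'D' for datafile, 'L' for archived log and 'I' for incremental backup sets, and CONTROLFILE_INCLUDED flags the controlfile):

select recid,
       decode(backup_type,'D','Datafile','L','ArchiveLog','I','Incremental',backup_type) bs_type,
       controlfile_included,
       to_char(completion_time,'DD-MON HH24:MI') completed
from v$backup_set
where completion_time > trunc(sysdate)+17.5/24
order by completion_time;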


Question 1: How would you script your backups? (Hint: differentiate between the BACKUP DATABASE and the BACKUP ARCHIVELOG runs.)

Question 2: Can you improve your Backup Reports?

Yes, the RMAN LIST BACKUP command is useful. But you can't select the columns, format the output, or add text as you would with a query on V$ views.

[oracle@localhost oracle]$ NLS_DATE_FORMAT=DD_MON_HH24_MI_SS;export NLS_DATE_FORMAT
[oracle@localhost oracle]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Jun 7 17:51:41 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> list backup completed after "trunc(sysdate)+17.5/24";

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
17 375.50K DISK 00:00:01 07_JUN_17_31_13
BP Key: 17 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173112
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v12b_.bkp

List of Archived Logs in backup set 17
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------------- ---------- ---------
1 616 14068910 07_JUN_17_10_49 14068920 07_JUN_17_10_51
1 617 14068920 07_JUN_17_10_51 14068931 07_JUN_17_10_53
1 618 14068931 07_JUN_17_10_53 14069550 07_JUN_17_19_17
1 619 14069550 07_JUN_17_19_17 14069564 07_JUN_17_19_17

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
18 1.03M DISK 00:00:00 07_JUN_17_31_14
BP Key: 18 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173112
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v292_.bkp

List of Archived Logs in backup set 18
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------------- ---------- ---------
1 620 14069564 07_JUN_17_19_17 14070254 07_JUN_17_31_08
1 621 14070254 07_JUN_17_31_08 14070265 07_JUN_17_31_08
1 622 14070265 07_JUN_17_31_08 14070276 07_JUN_17_31_11

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
19 13.72M DISK 00:00:02 07_JUN_17_31_14
BP Key: 19 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173112
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v10y_.bkp

List of Archived Logs in backup set 19
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------------- ---------- ---------
1 615 14043833 12_JUN_23_28_21 14068910 07_JUN_17_10_49

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
20 Full 9.36M DISK 00:00:00 07_JUN_17_31_15
BP Key: 20 Status: AVAILABLE Compressed: NO Tag: TAG20150607T173115
Piece Name: /NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775075_bq83v3nr_.bkp
SPFILE Included: Modification time: 07_JUN_17_28_15
SPFILE db_unique_name: ORCL
Control File Included: Ckp SCN: 14070285 Ckp time: 07_JUN_17_31_15

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
21 Full 1.05M DISK 00:00:02 07_JUN_17_31_19
BP Key: 21 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173117
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_ncnnf_TAG20150607T173117_bq83v6vg_.bkp
Control File Included: Ckp SCN: 14070306 Ckp time: 07_JUN_17_31_17

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
22 Full 9.36M DISK 00:00:00 07_JUN_17_31_20
BP Key: 22 Status: AVAILABLE Compressed: NO Tag: TAG20150607T173120
Piece Name: /NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775080_bq83v88z_.bkp
SPFILE Included: Modification time: 07_JUN_17_31_18
SPFILE db_unique_name: ORCL
Control File Included: Ckp SCN: 14070312 Ckp time: 07_JUN_17_31_20

RMAN>

So the RMAN LIST BACKUP command can provide details that V$RMAN_BACKUP_JOB_DETAILS cannot. Yet it doesn't tell us that a backup failed.
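For a report that is query-able and still per-operation, V$RMAN_BACKUP_SUBJOB_DETAILS sits between the two: it breaks each RMAN session down by operation, so the successful ArchiveLog and ControlFile backups should appear as rows separate from the failed database backup. A hedged sketch (verify the column list against your version's reference):

select j.session_key,
       to_char(j.start_time,'DD-MON HH24:MI') started,
       s.operation, s.input_type, s.status
from v$rman_backup_job_details j
join v$rman_backup_subjob_details s on s.session_key = j.session_key
where j.start_time > trunc(sysdate)+17.5/24
order by j.start_time;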
.
.
.

Categories: DBA Blogs

Install Oracle RightNow Cloud Adapter in JDeveloper

Today, there are thousands of enterprise customers across the globe using Oracle RightNow CX cloud service for providing superior customer experience across multiple channels including web, contact...

We share our skills to maximize your revenue!
Categories: DBA Blogs

INDEX FULL SCAN (MIN/MAX) Not Used – How To Resolve

Oracle in Action - Sat, 2015-06-06 01:06

RSS content

If you want to find out the minimum or the maximum value of a column and the column is indexed, Oracle can determine it very quickly by navigating to the first (left-most) or last (right-most) leaf block in the index structure. This access path, known as Index Full Scan (Min/Max), is extremely cost-effective: instead of scanning the entire index or table, only the first or last entries in the index need to be read.

If the SELECT clause also includes another column with a function applied to it, however, the optimizer employs a Full Table Scan instead. In this post, I will demonstrate this scenario and its solution.

In my test setup, I have a table HR.EMP with an index EMP_SAL on the SALARY column.

— Let’s first query MIN(SALARY) and SYSDATE from HR.EMP. It can be seen that the optimizer employs INDEX FULL SCAN (MIN/MAX), as desired.

SQL>select min(salary),  sysdate from hr.emp ;
select * from table (dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 7c3q3s8g2ucxx, child number 0
-------------------------------------
select min(salary), sysdate from hr.emp

Plan hash value: 3077585419
----------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 |
| 1 | SORT AGGREGATE | | 1 | 4 | |
| 2 | INDEX FULL SCAN (MIN/MAX)| EMP_SAL | 107 | 428 | |
----------------------------------------------------------------------

— Now if I try to find the MIN(SALARY) with a function applied to SYSDATE, the optimizer chooses a costly TABLE ACCESS FULL instead of INDEX FULL SCAN (MIN/MAX).

SQL>select min(salary), to_char(sysdate, 'dd/mm/yy') from hr.emp ;
select * from table (dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 3dthda93cgm6v, child number 0
-------------------------------------
select min(salary), to_char(sysdate, 'dd/mm/yy') from hr.emp

Plan hash value: 2083865914
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
| 2 | TABLE ACCESS FULL| EMP | 107 | 428 | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------

As a workaround, we can restructure the query as shown below: an inline view returns MIN(SALARY) and SYSDATE, so the optimizer chooses INDEX FULL SCAN (MIN/MAX), and the function is applied to SYSDATE in the outer SELECT clause.

SQL>select min_salary, to_char(sysdt, 'dd/mm/yy') from
(select min(salary) min_salary, sysdate sysdt from hr.emp) ;
select * from table (dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 5rzz6x8wzkh2k, child number 0
-------------------------------------
select min_salary, to_char(sysdt, 'dd/mm/yy') from (select
min(salary) min_salary, sysdate sysdt from hr.emp)

Plan hash value: 2631972856
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | VIEW | | 1 | 19 | 3 (0)| 00:00:01 |
| 2 | SORT AGGREGATE | | 1 | 4 | | |
| 3 | INDEX FULL SCAN (MIN/MAX)| EMP_SAL | 107 | 428 | | |
-------------------------------------------------------------------------
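Another restructuring that should have the same effect is to move the aggregate into a scalar subquery and apply the function in the outer query; a sketch on the same table (the inner select min(salary) from hr.emp can again be satisfied by INDEX FULL SCAN (MIN/MAX)):

SQL>select (select min(salary) from hr.emp) min_salary,
           to_char(sysdate, 'dd/mm/yy') today
    from dual;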

Hope it helps!

References:

AIOUG -North India Chapter- Performance Tuning By Vijay Sehgal – 30th May 2015
Index Full Scan (MIN/MAX) and Partitioned Table

——————————————————————————————————————-

Related links:

Home
Tuning Index



Copyright © ORACLE IN ACTION [INDEX FULL SCAN (MIN/MAX) Not Used - How To Resolve], All Right Reserved. 2015.

The post INDEX FULL SCAN (MIN/MAX) Not Used – How To Resolve appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Oracle Fusion Middleware (FMW) Training is now closed

Online Apps DBA - Fri, 2015-06-05 13:48

We announced Oracle Fusion Middleware Training on 2nd June and I am excited to say that within 2 days we fully booked (in fact, overbooked) the training. (To maintain quality, we limit the number of trainees in each batch.)

Congratulations if you managed to register; for those who couldn’t, we’ll announce the next batch here in a few months’ time. Click the image at the end of this post to register your interest in the next batch.

Topics we are going to cover on Day 1:

  • Overview & Architecture of Oracle Fusion Middleware
  • Key Concepts : Java VS System Components
  • Various Home in FMW (MW, WLS, ORACLE, DOMAIN, INSTANCE, ORACLE_COMMON)
  • Repository Creation Utility (RCU)
  • Administration & Management
  • High Availability (HA) Overview
  • Disaster Recovery Overview
  • WebLogic Server Domain
  • Administration Server
  • Managed Server
  • WebLogic Console
  • Node Manager
  • Data Sources in WebLogic
  • Installing WebLogic Server (Hands-On)
  • Creating WebLogic Domain (Hands-On)
  • Start/Stop WebLogic Server (Hands-On)
  • File System and Log File (Hands-On)
  • Creating Managed Server from Console (Hands-On)

 

Note: I’ll be conducting a free one-hour session on 12th July 2015 that anyone can join. Register your interest by clicking the image below (leave a comment with the topic you want to see covered in the free session).

 

Any queries leave them as comments or Contact-Us

Related Posts for Fusion Middleware:
  1. Oracle Fusion Middleware Part II
  2. Oracle Fusion Middleware Overview
  3. Oracle Fusion Middleware : BEA WebLogic or Oracle Application Server
  4. Oracle Fusion Middleware 11g is coming … 1 July 2009
  5. Oracle Fusion Middleware 11g launched today
  6. Oracle Fusion Middleware 11g concepts for Apps DBA’s
  7. Fusion Middleware 11g – How to register Oracle Instance with Webogic Server (opmnctl) ?
  8. Reader’s Question : How to change hostname, domainname, IP of Fusion Middleware 11g (SOA, WebCenter, WebLogic) ?
  9. Oracle Fusion Middleware 11g R1 patchset 2 (11.1.1.3.0) – SOA, WebCenter, RCU, WebLogic (10.3.3)
  10. Oracle Fusion Middleware Challenge : Can you answer Why different domain home for Admin & Managed Server ?
  11. Beware !!! Oracle Fusion Middleware 11g R1 patchset 2 (11.1.1.3.0) is patch set only
  12. Oracle Fusion Middleware 11g R1 PS 3 (11.1.1.4) available now
  13. Oracle Fusion Middleware 11g R1 PS4 (11.1.1.5) is available now
  14. Cloning Oracle Fusion Middleware 11g (SOA, WebCenter, UCM) – copyBinary, copyConfig, extractMovePlan, pasteBinary, pasteConfig, cloningclient.jar
  15. Fusion Middleware 11g startup issue : OPMN RCV No such file or directory
  16. Oracle Fusion Middleware Start-up Issue : jps-config.xml No such file or directory : startScriptEnabled
  17. #OFMW 11.1.1.6 (SOA, WebCenter, IdM, OBIEE, OHS, ADF, CEP…) is now available
  18. ODL-52049 DMS-Startup oracle.core.ojdl.logging.LogUtil log cannot create instance of class ‘oracle.dfw.incident.IncidentDetectionLogFilter’
  19. Oracle Fusion Middleware (FMW) 12.1.2 is available now : WebLogic, Coherence, OWSM, OHS, ADF etc
  20. Oracle Fusion Middleware Installation : java.lang.UnsatisfiedLinkError libmawt.so libXtst.so.6: cannot open shared object file
  21. Oracle Fusion Middleware Training – Win FREE Lesson : Suggest topic to our client
  22. YouTube Sunday : Troubleshoot Fusion Middleware Pre-Requisite Failure : Kernel Setting
  23. Oracle Fusion Middleware (FMW) 11.1.1.9 now available : Documentation & Download
  24. Oracle Fusion Middleware (FMW) Training by Atul Kumar starting on 6th June
  25. Oracle Fusion Middleware (FMW) Training is now closed

The post Oracle Fusion Middleware (FMW) Training is now closed appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

New In Oracle BI Cloud Service – Oracle Visual Analyzer, and Data Mashups in VA

Rittman Mead Consulting - Fri, 2015-06-05 13:27

Oracle released an update to Oracle BI Cloud Service a few weeks ago that included Oracle Visual Analyzer, along with some other improvements including support for full Oracle Database-as-a-Service as the database backend, the ability to upload RPDs and run them in the cloud, and support for a new utility called DataSync. In this post though I want to take a quick look at Visual Analyzer, and in particular at the data-mashup feature it provides.

Visual Analyzer is of course one of the tentpole features in OBIEE12c that we’ve all been looking forward to, as is 12c’s ability to allow users to upload spreadsheets of data and join them to existing subject areas in Answers. I’m covering Visual Analyzer in an upcoming edition of Oracle Magazine so I won’t go into too much detail on the product at a high-level here, but in summary Visual Analyzer provides a single-pane-of-glass, Tableau-type environment for analysing and visualising datasets stored in Oracle Cloud Database and modelled in BICS’s cut-down web-based data-modeller. In the Oracle Magazine article I take the Donors Choose dataset that we featured at the recent Rittman Mead BI Forum 2015, and create a range of visualizations as I explore the dataset and pick the type of project I’d most like to donate to.

[screenshot]

Visual Analyzer differs from Answers in that all of the available data items are listed down one side of the page, there’s no flicking backwards and forwards between the Criteria tab and the Results tab, filters are set by just right-clicking on the column you wish to filter by, and the visualisation builds up in front of your eyes as you add more columns, move things around and arrange the data to get the most appropriate view of it.

From an IT manager’s perspective, where Visual Analyzer improves on desktop analysis tools such as Tableau and Spotfire is that the data you work with is the same governed dataset that Answers and Dashboards users work with, and the same security rules and auditing apply to you as to other Presentation Services and Catalog users; but those “self-service” users who just want to play around with and explore the data – rather than create reports and dashboards for mass consumption – can now work with the type of tool they’ve until now had to look elsewhere for.

[screenshot]

One of the other headline features for OBIEE12c announced at last year’s Oracle Openworld is “Model Extensibility and Data Mashup”. Announced as part of Paul Rodwick’s “Business Analytics and Strategy Roadmap” session and described in the slide below, this feature extends the capabilities of the BI Server to now handle data the user uploads from the Answers (and now Visual Analyzer) report creation page, joining that data as either “fact extensions” or “measure extensions” to an existing Presentation Services subject area. 

[slide]

I won’t go into the technical details of how this works at this point but in terms of how it looks to the end-user, let’s consider a situation where I’ve got a spreadsheet of additional state-level data that I’d like to use in this Visual Analyzer (VA) project, in this case to colour the states in the map based on the income level of the people living there. The spreadsheet of data that I’ve got looks like this:

[screenshot]

Note the cunningly-named columns in the first row – they don’t have to match the column names in your VA data model, but if they do, as you’ll see in a moment, it speeds the matching process up. To add this spreadsheet of data to my VA project I therefore switch the menu panel on the left to the Data Sources option, right-click and then choose Add Data Source…

[screenshot]

Then using the Add Data Source dialog, upload the XLSX file from your workstation. In my instance, because I named the columns in the top row of the spreadsheet to match the column names already in the BICS data model, it’s matched the SCHOOL_STATE column in the spreadsheet to the corresponding column in the SCHOOLS table and worked out that I’m adding measures, joined on that SCHOOL_STATE column.

[screenshot]

If my spreadsheet contained other text fields matched to the existing model via a dimension attribute, the upload wizard would assume I’m adding dimension attributes; or, if it detects them wrongly, I can match the columns myself and specify whether the new file contains measures or attributes. BICS then confirms the join between the two datasets and I can then start selecting from the new measures to add to my project.

[screenshot]

My final step then is to add the HOUSEHOLD_INCOME measure to my visualisation, so that each state is now shaded by the household income level, allowing me to see which states might benefit most from my school project donation.

[screenshot]

One thing to bear in mind when using mashups, though, is that what you’re effectively doing is adding a new fact table that joins to the existing one at one or more dimension levels. In my case, my HOUSEHOLD_INCOME and POPULATION measures only join to the DONATIONS dataset on the SCHOOL dimension, and then only at the STATE level, so if I try to reference another column from another dimension – to add, for example, a filter on the FUNDING STATUS column within the PROJECTS dimension – the project will raise an error, as that dimension isn’t conformed across both facts.
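To make the restriction concrete, here is a rough SQL sketch of the logical join the mashup sets up; the table and column names (DONATIONS, UPLOADED_STATE_DATA and so on) are hypothetical and purely for illustration:

-- the uploaded measures conform with the existing fact solely at SCHOOL_STATE level
select d.school_state,
       sum(d.donation_amount) donations,
       max(x.household_income) household_income
from donations d
join uploaded_state_data x on x.school_state = d.school_state
-- a filter on e.g. PROJECTS.FUNDING_STATUS has no join path to
-- uploaded_state_data, which is why the VA project errors
group by d.school_state;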

[screenshot]

My understanding is that Oracle will fix this in a future release by setting all the non-conformed dimensions to “Total” as you can do with the on-site version of OBIEE yourself, but for now this restricts mashups to datasets that use fully-conformed dimensions, and with filters that only use those conformed dimensions from the join-level up.

So that’s VA on BICS in a nutshell, with this article drilling down further into the very interesting new data mashup feature. Look out for more on this new release of BICS soon as I cover the new DataSync feature, RPD uploads and connecting BICS to the full Oracle Database-as-a-Service.

 

Categories: BI & Warehousing

Using startLocationMonitor to Enable GeoLocation in MAF Applications

One of the features and benefits of Oracle Mobile Application Framework is the ability to access native device services, such as the smartphone GPS. Of course, when you are developing a mobile application,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Log Buffer #426: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-06-05 08:57

This Log Buffer edition goes beyond the ordinary and loops in a few very good blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Variable selection also known as feature or attribute selection is an important technique for data mining and predictive analytics.
  • The Oracle Utilities SDK V4.3.0.0.2 has been released and is available from My Oracle Support for download.
  • This article provides a high level list of the new features that exist in HFM 11.1.2.4 and details the changes/differences between HFM 11.1.2.4 and previous releases.
  • In recent years, we’ve seen increasing interest from small-to-mid-sized carriers in transforming their policy administration systems (PAS).
  • Got a question on how easy it is to use ORDS to perform insert | update | delete on a table?

SQL Server:

  • The Importance of Database Indexing
  • Stairway to SQL Server Security Level 9: Transparent Data Encryption
  • Query Folding in Power Query to Improve Performance
  • Writing Better T-SQL: Top-Down Design May Not be the Best Choice
  • Cybercrime – the Dark Edges of the Internet

MySQL:

  • One of the challenges in storage engine design is random I/O during a write operation.
  • Fast Galera Cluster Deployments in the Cloud Using Juju
  • High availability using MySQL in the cloud
  • Become a DBA blog series – Monitoring and Trending
  • MySQL as an Oracle DBA

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs