
Feed aggregator

Guerrilla Testing at OHUG

Oracle AppsLab - Mon, 2015-08-10 02:52

The Apple Watch came out, and we had a lot of questions: What do people want to do on it? What do they expect to be able to do on it? What are they worried about? And more importantly, what are they excited about?

But we had a problem—we wanted to ask a lot of people about the Apple Watch, but nobody had it, so how could we do any research?

Our solution was to do some guerrilla testing at the OHUG conference in June, which took place in Las Vegas. We had a few Apple Watches at that time, so we figured we could let people play around with the watch, and then ask them some targeted questions. This was the first time running a study like this, so we weren’t sure how hard it would be to get people to participate by just asking them while they were at the conference.

It turned out the answer was “not very.” We should have known—people both excited and skeptical were curious about what the watch was really like.


Friend of the ‘Lab and Oracle ACE Director Gustavo Gonzalez and Ben enjoy some Apple humor.

Eventually we had to tell the people at our recruiting desk to stop asking people if they want to participate! Some sessions went on for over 45 minutes, with conference attendees chatting about different possibilities and concerns, brainstorming use cases that would work for them or their customers.


The activity was a great success, generating some valuable insights not only about how people would like to use a smartwatch (Apple or not), but how they want notifications to work in general. Which, of course, is an important part of how people get their work done using Oracle applications.


Our method was pretty simple: We had them answer some quick survey questions, then we put the watch on them and let them explore and ask questions. While they were exploring, we sent them some mock notifications to see what they thought, and then finished up asking them more in depth about what they want to be able to accomplish with the watch.

At the end, they checked off items from a list of notifications that they’d like to receive on the watch. We recorded everything so we didn’t have to have someone taking notes during the interviews. It took some time to transcribe everything, but it was extremely valuable to have actual quotes bringing to life the users’ needs and concerns with notifications and how they want things to work on a smartwatch.

Most usability activities we run at conferences involve 5–10 people, whether it’s a usability test or a focus group, and usually they all have similar roles. It was valuable here to get a cross-section of people from different roles and levels of experience, talking about their needs for not only a new technology, but also some core functionality of their systems.

In retrospect, we were a little lucky. It would probably be a lot more difficult to talk to the same number of people for an appreciable amount of time just about notifications, and though we did learn a good deal about wants and needs for developing for the watch, it was also a lot broader than that.

So one takeaway is to find a way to take advantage of something people will be excited to try out—not just to learn about that specific new technology, but also about other areas that technology can impact.

Easy logging and debugging, version 2.0

Gerd Volberg - Mon, 2015-08-10 00:48
Each application needs a simple way to log errors and find them. The following technique can also be used to debug Forms, Reports and PL/SQL. Version 2.0 has an important change: the username is now stored in the debugging data, and the view name has changed slightly.

First, create the table, sequence and view that store the logging information:
CREATE TABLE Logging (
   ID          NUMBER(9)      NOT NULL,
   SESSION_ID  NUMBER(9),
   INSERT_DATE DATE           NOT NULL,
   INSERT_USER VARCHAR2(30)   NOT NULL,
   TEXT        VARCHAR2(2000) NOT NULL);

CREATE SEQUENCE Logging_SEQ;

CREATE OR REPLACE VIEW Logging_desc_V
   (ID, SESSION_ID, INSERT_DATE, INSERT_USER, TEXT)
AS SELECT ID, SESSION_ID, INSERT_DATE, INSERT_USER, TEXT
     FROM Logging
    ORDER BY SESSION_ID DESC, ID DESC;

You also need a package with some functions to control the logging process:
CREATE OR REPLACE PACKAGE PK_DEBUG IS
   FUNCTION Debug_allowed RETURN BOOLEAN;
   FUNCTION Next_ID RETURN NUMBER;

   PROCEDURE Disable;
   PROCEDURE Enable;
   PROCEDURE Destroy;
   PROCEDURE Init (P_Debug_allowed IN BOOLEAN DEFAULT TRUE);
   PROCEDURE Write (P_Text       IN VARCHAR2,
                    P_Session_ID IN NUMBER DEFAULT NULL);

   G_Debug_allowed BOOLEAN := TRUE;
   G_Session_ID    NUMBER;
END;
/
CREATE OR REPLACE PACKAGE BODY PK_DEBUG IS
   FUNCTION Debug_allowed RETURN BOOLEAN IS
   BEGIN
      RETURN (G_Debug_allowed);
   END;

   FUNCTION Next_ID RETURN NUMBER IS
      V_ID NUMBER;
   BEGIN
      SELECT Logging_SEQ.nextval
        INTO V_ID
        FROM DUAL;
      RETURN (V_ID);
   END;

   PROCEDURE Disable IS
   BEGIN
      G_Debug_allowed := FALSE;
   END;

   PROCEDURE Enable IS
   BEGIN
      G_Debug_allowed := TRUE;
   END;

   PROCEDURE Destroy IS
   BEGIN
      Write ('----------------------stopp '
             || to_char (G_Session_ID) || '--');
      G_Session_ID := NULL;
   END;

   PROCEDURE Init (
      P_Debug_allowed IN BOOLEAN DEFAULT TRUE) IS
   BEGIN
      G_Debug_allowed := P_Debug_allowed;
      G_Session_ID    := Next_ID;
      Write ('--start ' || to_char (G_Session_ID)
             || '----------------------');
   END;

   PROCEDURE Write (
      P_Text       IN VARCHAR2,
      P_Session_ID IN NUMBER DEFAULT NULL) IS
      PRAGMA AUTONOMOUS_TRANSACTION;
   BEGIN
      IF Debug_allowed THEN
         IF G_Session_ID IS NULL THEN
            Init;
         END IF;
         INSERT INTO Logging (ID,
                              Session_ID,
                              Insert_Date,
                              Insert_User,
                              Text)
         VALUES (Next_ID,
                 NVL (P_Session_ID, G_Session_ID),
                 Sysdate,
                 User,
                 P_Text);
         COMMIT;
      END IF;
   END;
END;
/

You start a debugging session with INIT and stop it with DESTROY. Error messages are logged using WRITE. For example:
pk_Debug.Init;
pk_Debug.Write ('Hello World - ' || V_Test);
pk_Debug.Destroy;

Parts of your debugging can be deactivated with DISABLE; from that point on nothing is written into the logging table until you call ENABLE.
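As a small, hypothetical sketch (the loop and its messages are made up, not part of the original post), DISABLE and ENABLE could bracket a noisy section like this:

BEGIN
   pk_Debug.Init;
   pk_Debug.Write ('before the loop');

   pk_Debug.Disable;                                  -- suppress the noisy part
   FOR i IN 1 .. 1000 LOOP
      pk_Debug.Write ('iteration ' || to_char (i));   -- not written while disabled
   END LOOP;
   pk_Debug.Enable;                                   -- logging resumes here

   pk_Debug.Write ('after the loop');
   pk_Debug.Destroy;
END;
/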

The view Logging_DESC_V shows the debugging information, ordered with the newest session ID first.
ID Session Insert-Date     Text
============================================
24 21 10.09.-12:38:48 -------stopp 21--
23 21 10.09.-12:38:48 Hello World - 42
22 21 10.09.-12:38:48 --start 21-------
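If several runs are interleaved in the table, you can filter the view on a single session; the query below simply reuses the session ID 21 from the sample output above. The INSERT_USER column is the addition in version 2.0.

SELECT id, insert_date, insert_user, text
  FROM Logging_desc_V
 WHERE session_id = 21;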


Try it
Gerd

Oracle 12.1.0.2c Standard cluster : Empty Alert Log

Oracle in Action - Sun, 2015-08-09 23:52


I have set up an Oracle 12.1.0.2 standard two-node cluster called cluster01 with ASM storage, as shown:

[grid@host01 ~]$ asmcmd showclustermode
ASM cluster : Flex mode disabled

[root@host01 ~]# olsnodes -c
cluster01

[root@host01 host01]# crsctl get cluster mode config
Cluster is configured as type "standard"

[grid@host01 ~]$ crsctl query crs activeversion;
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@host01 host01]# crsctl get cluster mode status
Cluster is running in "standard" mode

[root@host01 host01]# olsnodes -n
host01 1
host02 2

[root@host01 host01]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- -----  -----------------  ---------  ---------
1. ONLINE aa1ca556ae114f57bf44070be6a78656 (ORCL:ASMDISK01) [DATA]
2. ONLINE ff91dd96594d4f3dbfcb9cff081e3438 (ORCL:ASMDISK02) [DATA]
3. ONLINE 815ddcab94d34f50bf318ba93e19951d (ORCL:ASMDISK03) [DATA]
Located 3 voting disk(s).

[root@host01 host01]# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +DATA

[root@host01 host01]# crsctl check crs

CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

When I tried to check the contents of the cluster alert log, I was surprised to find an empty alert log.

[root@host01 host01]# ls -l /u01/app/12.1.0/grid/log/host01/alerthost01.log

-rw-rw-r-- 1 grid oinstall 0 Jun 15 14:10 /u01/app/12.1.0/grid/log/host01/alerthost01.log

It seems that this is a bug.



Copyright © ORACLE IN ACTION [Oracle 12.1.0.2c Standard cluster : Empty Alert Log], All Right Reserved. 2015.

The post Oracle 12.1.0.2c Standard cluster : Empty Alert Log appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

RAM is the new disk – and how to measure its performance – Part 1 – Introduction

Tanel Poder - Sun, 2015-08-09 17:26

RAM is the new disk, at least in the In-Memory computing world.

No, I am not talking about Flash here, but Random Access Memory – RAM as in SDRAM. I’m by far not the first one to say it. Jim Gray wrote this in 2006: “Tape is dead, disk is tape, flash is disk, RAM locality is king” (presentation)

Also, I’m not going to talk about how RAM is faster than disk (everybody knows that), but in fact how RAM is the slow component of an in-memory processing engine.

I will use Oracle’s In-Memory column store and the hardware performance counters in modern CPUs for drilling down into the low-level hardware performance metrics about CPU efficiency and memory access.

But let’s first get started by looking a few years into the past, at the old-school disk IO and index-based SQL performance bottlenecks :)

Have you ever optimized a SQL statement by adding all the columns it needs into a single index and then letting the database do a fast full scan on the “skinny” index as opposed to a full table scan on the “fat” table? The entire purpose of this optimization was to reduce disk IO and SAN interconnect traffic for your critical query (where the amount of data read would have made index range scans inefficient).

This special-purpose approach would have benefitted your full scan in two ways:

  1. In data warehouses, a fact table may contain hundreds of columns, so an index with “only” 10 columns would be much smaller. Full “table” scanning the entire skinny index would still generate much less IO traffic than the table scan, so it became a viable alternative to wide index range scans and some full table scans (and bitmap indexes with star transformations indeed benefitted from the “skinniness” of these indexes too).
  2. As the 10-column index segment is potentially 50x smaller than the 500-column fact table, it might even fit entirely into buffer cache, should you decide so.

This is all thanks to physically changing the on-disk data structure, to store a copy of only the data I need in one place (column pre-projection?) and store these elements close to each other (locality).

Note that I am not advocating the use of this as a tuning technique here, but just explaining what was sometimes used to make a handful of critical queries fast at the expense of the disk space, DML, redo and buffer cache usage overhead of having another index – and why it worked.
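For anyone who hasn’t seen the trick in practice, here is a minimal sketch of the idea (the fact table and column names are invented purely for illustration): once the index carries every column the query references, the optimizer can satisfy the query with a fast full scan of the small index instead of a full scan of the wide table.

-- hypothetical 500-column fact table; the query below touches only four columns
create index sales_fact_skinny_ix
        on sales_fact(sale_date, product_id, quantity, amount);

select
        product_id, sum(quantity), sum(amount)
from
        sales_fact
where
        sale_date >= date '2015-01-01'
group by
        product_id
;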

Now, why would I worry about this at all in a properly warmed up in-memory database, where the disk IO is not at the critical path of data retrieval at all? Well, now that we have removed the disk IO bottleneck, we inevitably hit the next slowest component as a bottleneck and this is … RAM.

Sequentially scanning RAM is slow. Randomly accessing RAM lines is even slower! Of course this slowness is all relative to the modern CPUs that are capable of processing billions of instructions per core every second.

Back to Oracle’s In-Memory column store example: Despite all the marketing talk about loop vectorization with CPU SIMD processing extensions, the most fundamental change required for “extreme performance” is simply about reducing the data traffic between RAM and CPUs.

This is why I said “SIMD would be useless if you waited on main memory all the time” at the Oracle Database In-Memory in Action presentation at Oracle OpenWorld (Oct 2014):

Oracle In-Memory in Action presentation

The “secret sauce” of Oracle’s in-memory scanning engine is the columnar storage of data, the ability to (de)compress it cheaply and accessing only the filtered columns’ memory first, before even touching any of the other projected columns required by the query. This greatly reduces the slow RAM traffic, just like building that skinny index reduced disk I/O traffic back in the on-disk database days. The SIMD instruction set extensions are just icing on the columnar cake.

So far this is just my opinion, but in the next part I will show you some numbers too!

 


OTN Tour of Latin America 2015 : CLOUG, Chile – Day -1

Tim Hall - Sun, 2015-08-09 17:07

We were due to leave the hotel in São Paulo at 04:45 today, so I was planning to get up at around 04:00. Instead, I woke up at 03:00 and couldn’t sleep. I think I was a little nervous about missing the flight.

The journey to the airport didn’t present any major dramas at that time. There was a little mix-up with the size of the taxi, so we had to travel in two cars, but that was fine. Debra and I travelled with David Peake, who was leaving us at the airport to fly to Brasilia for an APEX conference. Francisco and his son were in a separate car.

We had just enough time to get a coffee before getting on the 4 hour flight to Santiago, Chile. I was tired and in a terrible mood. A number of things happened on the flight that really pissed me off, so I shall be writing a letter of complaint to TAM Airlines!

During the flight I watched Avengers : Age of Ultron, which I thought was pretty good. Although I liked it, I’m not sure where this franchise can go. It does feel a little samey! Likewise with Fast & Furious 7, which I also watched and liked. Both franchises feel like they’ve run their course to me…

We landed in Santiago, Chile to find TAM had lost one of Francisco’s bags. Then the hire car company AVIS/Budget screwed up, so we were in the airport for a while. At this point I switched from tired and angry to just tired and “Whatever!”

When we got to the hotel, Debra and I dumped our stuff and went on the 2 hour bus tour of Santiago. It started cold and damp, but ended very cold and very wet. Being tired and cold made it a bit of a trial, but it was good to do something, rather than just go to bed and sleep the day away. You can see some terrible photos, including various bits of bus, here. :)

So it’s been another incredibly long day, after very little sleep. Time for bed before the CLOUG event tomorrow. We fly straight out after the conference, so this will end up being a very brief stay in Chile.

Cheers

Tim…

OTN Tour of Latin America 2015 : CLOUG, Chile – Day -1 was first posted on August 10, 2015 at 12:07 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

OTN Tour of Latin America 2015 : GUOB, Brazil – Day 1

Tim Hall - Sat, 2015-08-08 19:27

I woke up pretty early again, spent a bit of time working, then headed down to meet the wife at breakfast. I was gabbing away, when she mentioned it was time to register. I had a mad dash to get back to the room, shower and head down to the GUOB event. Thankfully, I am stopping in the conference hotel.

This was a three track event, so I tried to get to sessions that I hadn’t seen at other events.

  • The event opened up with Pablo Ciccarello discussing OTN and the ACE program.
  • My first session for the day was “Pluggable Databases : What they will break and why you should use them anyway!” talk. I was introduced by Alex Zaballa, Brazil’s latest Oracle ACE Director. Well done mate! I was incredibly nervous for this talk. I think it was because of the rushing around before the session. Once I got going I calmed down a lot. I think the session went well. People were a bit shy about questions during the session, but people came up to me after the session to say hello and ask questions without being in front of the audience. The questions you get are the best bit of doing these conferences. In answering questions, you learn a lot yourself.
  • After speaking to a few people I headed (a little late) into Kerry Osborne‘s session on “Controlling Execution Plans (without touching the code)” Part 1. The room was too full for me to get into this session in Argentina.
  • After lunch I headed on to see Dana Singleterry speaking about “Development Platform in the Cloud – Why, What and When”.
  • Next up it was me with my “It’s raining data! Oracle Databases in the Cloud” session. I really like doing this talk. It’s not a heavy technical session, but I like to think it brings a bit of sanity to the Database on the Cloud discussion.
  • After me came Alex Gorbachev speaking about “Benchmarking Oracle I/O Performance with ORION”.
  • After a short break, it was on to Kerry Osborne‘s session on “Controlling Execution Plans (without touching the code)” Part 2.
  • Next up was Alex Gorbachev speaking about “Big Data and Hadoop for Oracle Database Professionals”.

The last session was in Portuguese, so I ducked out and spoke to a few of the attendees instead.

After saying our goodbyes to some of the folks, we headed out to a Brazilian barbecue. Obviously, it was a meat-fest, but it was a good place to eat as a vegetarian too. Unfortunately, I drank one and a half drinks that were designed to injure humans. :)

It was a long, but fun day. Big thanks to the organisers and attendees at the GUOB event. I hope to see you again! Thanks also to the ACE Program for getting me here!

I’ve got a ridiculously early start for Chile tomorrow! :) Goodbye São Paulo!

Cheers

Tim…

OTN Tour of Latin America 2015 : GUOB, Brazil – Day 1 was first posted on August 9, 2015 at 2:27 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Blackboard: Ask and Ye Shall Receive (Better Answers)

Michael Feldstein - Sat, 2015-08-08 10:24

By Michael Feldstein

About a week ago, I complained about Blackboard’s lack of clarity in messaging about their platform in general and the implications for managed hosting customers in particular. I wrote, in part,

What is “Premium SaaS”? Is it managed hosting? Is it private cloud? What does it mean for current managed hosting customers? What we have found is that there doesn’t seem to be complete shared understanding even among the Blackboard management team about what the answers to these questions are.

The problem with this oversight is deeper than just leaving managed hosting customers in the dark. Blackboard is asking customers (and prospects) to have patience as the company performs a major retooling on their platforms. In order to get that patience, they need for customers to understand (and believe) that this really is a major retooling, what is being retooled (at a high level), and what they will get that’s different from other platforms at the end of the process. This is a hard communication challenge, but it is also Blackboard’s live-or-die challenge. They really need to convince customers and prospects that the platform has a bright future, and to do that, they have to communicate nuances and technical issues that are not easy to communicate to executives. This is not something that can be fixed with a couple of DevCon sessions.

That’s why I was happy to see Blackboard respond this week with more clarity.

For starters, a couple of days after the post, I got a couple of Tweets from Vivek Ramgopal, Blackboard’s Director of Product Marketing:

@mfeldstein67 Hi Michael. Based on customer feedback, we changed the naming of the SaaS tiers after Dec. Here you go. pic.twitter.com/A4D3yu6stz

— Vivek Ramgopal (@TweetsByVivek) August 3, 2015

Here’s a bit more of the exchange:

@mfeldstein67 Working on a few things to help clarify – but you're right, actions speak louder than words.

— Vivek Ramgopal (@TweetsByVivek) August 3, 2015

True to his word, less than a week after my original post, Blackboard put up a post entitled “The Ultra experience for Blackboard Learn: What does it Mean for self-hosted and managed deployments?” Let’s start with the title. They are referring to the “Ultra experience,” which suggests that Ultra is about user experience. So there’s a bit of clarity right off the bat. Unfortunately, they  muddy it up a bit again pretty quickly when they write, “The Ultra experience, which is consistent across Learn, Collaborate, and our new Bb Student app, is a foundational element of the New Learning Experience we introduced at BbWorld.” What is the difference between Ultra and the New Learning Experience? Why do we need two different marketing terms? Nevertheless, two steps forward and one step back is net one step forward.

It gets better from there. Here’s the meat of it:

During BbWorld, many Learn customers asked us when the Ultra experience will be coming to self-hosted and Managed Hosting deployments. Blackboard is exploring the ability to bring Learn SaaS and thus the Ultra experience to Blackboard-managed data centers and perhaps even customers’ own data centers. However, this does not mean that the Ultra experience is coming to self-hosted or Managed Hosting implementations as you know them today.

Part of the challenge in communicating this has been that for most of Blackboard’s history, “deployed in a Blackboard-managed data center” has meant our Managed Hosting infrastructure and “deployed in a customer’s own data center” has meant the traditional enterprise Blackboard Learn technology stack. With the introduction of Learn SaaS and the Ultra experience, though, we are talking about a new architecture and a new infrastructure (even though it might be in the same physical location).

As noted above, the Ultra experience was built on – and thus must sit on top of – a cloud computing architecture. The Learn 9.1 Managed Hosting deployment option available today does not use that type of cloud architecture. The same is true of self-hosted Learn 9.1 implementations. Therefore, it is not possible to bring the Ultra experience to self-hosted or Managed Hosting implementations as you know them today.

And here’s the summary at the bottom of the article:

  • The Ultra experience sits on top of a cloud architecture that is different from what self-hosted and Managed Hosting customers currently use today.
  • Thus, the Ultra experience will not be coming to Managed Hosting or self-hosted implementations as you know them today.
  • A cloud architecture that can support Learn SaaS and the Ultra experience will be coming to Blackboard-managed data centers in some regions and potentially even to customers’ own data centers in the future.
  • However, this will be a different infrastructure than today’s Managed Hosting and self-hosting options.

It is also important to note that we are fully committed to Learn 9.1 as well as self-hosted and Managed Hosting deployments.  We have quality improvements, workflow enhancements, and entirely new features (like improved grade exchange and competency-based education (CBE) tools) on the Learn 9.1 roadmap.

This is much, much better and forms the foundation of the communication they need to be having with customers and prospects. In addition to this, they need to continue to demonstrate what “Ultra” means in practical terms for teachers and students and talk about how the new capabilities are intimately connected to the new architecture. They need to play variations of this theme over and over and over again. And then they also need to have a constant drumbeat of both Ultra and non-Ultra updates, announced very clearly and loudly but without fanfare or hype. In other words, they need to make sure customers see that (a) they are making steady progress on both the present and the future for their customers, and (b) they understand this progress is the proof they need to demonstrate in order to win and hold customer trust rather than some super-fantabulous revolution. That’s pretty much the whole enchilada for them for at least the next 12 months.

A while back, Phil wrote about Instructure,

Despite Canvas LMS winning far more new higher ed and K-12 customers than any other vendor, I still hear competitors claim that schools select Canvas due to rigged RFPs or being the shiny new tool despite having no depth or substance. When listening to the market, however, (institutions – including faculty, students, IT staff, academic technology staff, and admin), I hear the opposite. Canvas is winning LMS selections despite, not because of, RFP processes, and there are material and substantive reasons for this success.

I think one of the reasons that competitors are sometimes perplexed by Instructure’s gains is that they make the mistake of believing that they are primarily competing on the strength of the product. Canvas is good, but it’s not so much better that it justifies the tectonic shift in market share that they are getting, and their competitors know it. What their competitors (and fans of other platforms) don’t seem to have noticed is that Instructure’s communications with customers have been freaky good from the very beginning. The detractors got misled by the early flash and the cheek—the flame thrower videos, Josh, the party thrown right across the hall from their competitor’s conference, Josh, the snarky T-shirts, Josh, and so on. What they missed in all of that was the consistent clarity of messaging. Instructure’s SVP of Marketing Misty Frost is just ridiculously good finding the one narrative thread that customers need to hear and making sure that they hear that, loud and clear, with no clutter. (I mean, c’mon. “Misty Frost?” If you were going to invent comic book superhero head of marketing—or, for that matter, supervillain head of marketing—you would name her Misty Frost.)

I would go one step further. This isn’t “just” about making sure customers understand the most important things about your product and your company (as if that weren’t vital enough). The number one way that I judge whether a company is likely to make a good product in the future is to look for signs that they are listening to their customers. Where does that show up first? In their communications. It is possible to be a good product company with lousy messaging, but it is impossible to be a great product company with lousy messaging, because great product companies weave what they are learning about their customers into the fabric of the culture. If Marketing can’t tell Product Management’s story, that means there is weak internal alignment.

That’s one reason why I take messaging so seriously and also one reason why I’m really pleased to see them respond to criticism of their communications so quickly and clearly.

The post Blackboard: Ask and Ye Shall Receive (Better Answers) appeared first on e-Literate.

Ancestor Worship

Greg Pavlik - Sat, 2015-08-08 07:25
Some profound lessons in how to be human that we can learn from our Confucian friends


Fascinating Lives

Greg Pavlik - Fri, 2015-08-07 17:38
There is something, I think, admirable in a quiet life: care for family, constructive participation in community, hard work. But there are times and places (perhaps all times, but not all places?) where simply attending to the simple things of life becomes a kind of impossibility: whether for psychological or moral reasons. I was reflecting on two persons recently who have struck me by not only their intellectual genius but also by the sheer force by which they pushed against the norm, one for reasons of psychology and one for reasons of morality.

Yukio Mishima: narcissist, political fanatic, suicide. And one of Japan's greatest novelists. I recently completed the Sea of Fertility tetralogy, which traces the life of Shigekuni Honda from youth to retirement as a wealthy attorney, centered around what Honda believes are the successive reincarnations of his friend Kiyoaki Matsugae: as a young rightist, a Thai princess and an orphan. The most powerful of the four novels, in my opinion, is the second: Runaway Horses. The book seems to rebuke the militant nationalism of Japanese reactionaries, though ironically enough Mishima himself ends his own life under the banner of a similar ideology. Mishima's fascinating portrait of an inherent dark side of youth - a taming of a deep inhumanism - so to speak, comes through in almost all the novels, but most strongly in the last. This echoes a theme he developed in The Sailor Who Fell From Grace with the Sea, though I can think of few works that more strongly explore this theme than the Lord of the Flies. In any case, Mishima is masterful in exploring aberrant developmental psychology - even as he, himself, seems to have been stricken with his own disordered personality.

Maria Skobtsova: atheist, symbolist poet, Bolshevik revolutionary - and a renegade nun arrested by the Gestapo for helping Jews in Paris; she allegedly died by taking the place of a Jewish woman being sent to death. Jim Forrest provides a useful overview of her life - unlike most lives of Christian saints, this is no hagiography: it is a straightforward story of a life. At the same time, we see a life transformed by a dawning realization that self-denial is a path to transformation -

"The way to God lies through love of people. At the Last Judgment I shall not be asked whether I was successful in my ascetic exercises, nor how many bows and prostrations I made. Instead I shall be asked did I feed the hungry, clothe the naked, visit the sick and the prisoners. That is all I shall be asked. About every poor, hungry and imprisoned person the Savior says ‘I': ‘I was hungry and thirsty, I was sick and in prison.’ To think that he puts an equal sign between himself and anyone in need. . . . I always knew it, but now it has somehow penetrated to my sinews. It fills me with awe."

And despite a life dedicated to service, she remained an acute intellectual, a characteristic of so many Russian emigres in Paris. This too reflected her view that redemption and suffering were intertwined - my favorite piece, On the Imitation of the Mother of God, draws this out beautifully.


OTN Tour of Latin America 2015 : GUOB, Brazil – Day -1

Tim Hall - Fri, 2015-08-07 14:22

As I mentioned in the previous post, I got to bed at the hotel in São Paulo at about 03:00. I woke up at about 07:00 and did some work for a couple of hours before hitting breakfast with the wife. At breakfast we were joined by Francisco and his son. :)

After breakfast, Debra and I went on a sightseeing trip. I was in São Paulo about 2 years ago, but I saw nothing of the city as it was such a short visit. I did have a few photos from the event in 2013, as well as a couple that people sent to me (here).

Trying to do São Paulo in 4 hours pretty much means sitting in the car a lot. :) Even during the day the traffic is heavy. Added to that, the temperature was in the low 30s. With the lack of sleep and the temperature combined, we were struggling. On top of that, our driver started to feel ill. I did get some photos, which you can see here. Once again, wide angle is on, so don’t assume everything in São Paulo is bowed. :)

Despite our struggles, it was really nice to finally see something of the city!

When we got back from sightseeing, we popped across the road to get some food and bumped into Francisco, his son and Alex. :)

Then it was back to the hotel, with a plan to get some sleep. Instead, I started to catch up on blog posts, as well as spending £54 (or $84 USD) on washing. :( It’s a sad day when you have to pay someone to do your washing! At least this way I should have enough clothes to complete the tour. :)

I’ll pretty soon be off to bed, ready for the GUOB conference tomorrow. Happy days!

Cheers

Tim…

PS. I’m going to use the English spelling (Brazil), rather than the Portuguese version, as it’s the only way I have a chance of staying consistent. No offence meant by spelling the name of your country “wrong”. :)

OTN Tour of Latin America 2015 : GUOB, Brazil – Day -1 was first posted on August 7, 2015 at 9:22 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Game Mechanics of a Scavenger Hunt

Oracle AppsLab - Fri, 2015-08-07 13:17

This year we organized a scavenger hunt for Kscope15 in collaboration with the ODTUG board and YCC.

As we found out, scavenger hunts are a great way to get people to see your content, create buzz and have fun along the way.  We also used the scavenger hunt as a platform to try some of the latest technologies. The purpose was to have conference attendees complete tasks using Internet of Things (IoT), Twitter hashtags and pictures and compete for a prize.

Here is a short technical overview of the technologies we used.

Registration

We opted to use a Node.js back-end and a React front-end to do a clever Twitter name autocomplete. As you typed your Twitter handle, the first and last name fields were completed for you. Once you filled in all your information, the form submitted to a REST endpoint based on Oracle APEX. This piece was built by Mark Vilrokx (@mvilrokx) and we were all very happy with the results.


Smart Badge

We researched two possible technologies: Bluetooth Low Energy (BLE) Beacons and Near Field Communication (NFC) stickers and settled for NFC. The reason behind choosing NFC was the natural tendency we have to touch something (NFC scanner) and get something in return (notifications + points). When we tested with BLE beacons the “check-in” experience was more transparent but not as obvious when trying to complete a task.

We added an NFC sticker to all scavenger hunt participants’ badges so they could get points by scanning their badge on our Smart Scanners. To provision each NFC badge, we built an Android app that took the tag ID and associated it with the user profile.


Smart Scanner

The Smart Scanner was a great way to showcase IoT. We used the beloved Raspberry Pis to host an NFC reader. We used the awesome blink(1) USB LED light to indicate whether the scan was successful or not. We also added a Mini USB Wi-Fi dongle and a high capacity battery to assure complete freedom from wires.

Raymond Xie (@YuhuaXie) did a great job using Java 8 to read the NFC stickers and send the information to our REST server. The key part for these scanners was creating a failover system in case of internet disconnection. In such a case we would still read and register the NFC tag, then post it to our server as soon as connectivity was restored.


Twitter and SMS Bots

Another key component was to create a Twitter and SMS bot. Once again, Mark used Node.js to consume the Twitter stream. We looked for tweets mentioning #kscope15 and #taskhashtag. Then we posted to our REST server, which made sure that points were given to the right person for the right task. Again we were pleasantly surprised by the flexibility and power of Node.js. Similarly, we deployed a Twilio SMS server that listened for SMS subscriptions and sent SMS notifications.

Leaderboards

We didn’t just settle for a web client to keep track of points. We created a mobile web client (React), an iOS app and an Android app. This was part of our research to see how people used each platform. As a bonus, we created Apple Watch and Android Wear companion apps. One of the challenges we had was to create a similar experience across platforms.


Administration

We needed a way to manage all task and player administration. Since we used APEX and PL/SQL to create our REST interface, it was a no-brainer to use APEX for our admin front-end. The added bonus was that APEX has user authentication and session management, so all we had to do was create admin users with different roles.


Conclusion

Creating a scavenger hunt for a tech conference is no easy task. You have to take many factors into consideration, from choosing the right tasks for the conference attendees to having the optimal Wi-Fi connection. Having an easy registration and provisioning process is also paramount for easy uptake.

We really had fun using the latest technologies, and we feel we successfully showcased what good UX can do for you across different devices and platforms. Stay tuned to see if we end up doing another similar activity. You won’t want to miss it!

Mongostat – A Nifty Tool for Mongo DBAs

Pythian Group - Fri, 2015-08-07 12:08

One of a MongoDB DBA’s main tasks is to monitor the usage of the MongoDB system and its load distribution. This could be needed for proactive monitoring, troubleshooting during performance degradation, root cause analysis, or capacity planning.

Mongostat is a nifty tool which comes out of the box with MongoDB and provides a wealth of information in a nicely formatted, familiar way. If you have used vmstat, iostat etc. on Linux, Mongostat should seem very familiar.

Mongostat dishes out statistics like counts of database operations by type (e.g. insert, query, update, delete, getmore). The vsize column  in Mongostat output shows the amount of virtual memory in megabytes used by the process. There are other very useful columns regarding network traffic, connections, queuing etc.

Following are some of the examples of running Mongostat.

[mongo@mongotest data]$ mongostat
insert query update delete getmore command flushes mapped  vsize    res faults qr|qw ar|aw netIn netOut conn     time
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:29
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:30
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:31
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:32
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:33
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:34
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:35
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:36
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:37
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:38

The following displays just 5 rows of output.

[mongo@mongotest data]$ mongostat -n 5
insert query update delete getmore command flushes mapped  vsize    res faults qr|qw ar|aw netIn netOut conn     time
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:45
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:46
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:47
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:48
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:49

To see the full list of options:

[mongo@mongotest data]$ mongostat --help
Usage:
mongostat <options> <polling interval in seconds>

Monitor basic MongoDB server statistics.

See http://docs.mongodb.org/manual/reference/program/mongostat/ for more information.

general options:
--help                     print usage
--version                  print the tool version and exit

verbosity options:
-v, --verbose                  more detailed log output (include multiple times for more verbosity, e.g. -vvvvv)
--quiet                    hide all log output

connection options:
-h, --host=                    mongodb host to connect to (setname/host1,host2 for replica sets)
--port=                    server port (can also use --host hostname:port)

authentication options:
-u, --username=                username for authentication
-p, --password=                password for authentication
--authenticationDatabase=  database that holds the user's credentials
--authenticationMechanism= authentication mechanism to use

stat options:
--noheaders                don't output column names
-n, --rowcount=                number of stats lines to print (0 for indefinite)
--discover                 discover nodes and display stats for all
--http                     use HTTP instead of raw db connection
--all                      all optional fields
--json                     output as JSON rather than a formatted table

 

Discover more about our expertise in Big Data.

The post Mongostat – A Nifty Tool for Mongo DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #435: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-08-07 11:30

The sun of database technologies is shining through the cloud. Oracle, SQL Server, MySQL and various other databases are bringing forth some nifty offerings, and this Log Buffer Edition covers some of them.

Oracle:

  • How to create your own Oracle database merge patch.
  • Finally the work of a database designer will be recognized! Oracle has announced the Oracle Database Developer Choice Awards.
  • Oracle Documents Cloud Service R4: Why You Should Seriously Consider It for Your Enterprise.
  • Mixing Servers in a Server Pool.
  • Index compression–working out the compression number
  • My initial experience upgrading database from Oracle 11g to Oracle 12c (Part -1).

SQL Server:

  • The Evolution of SQL Server BI
  • Introduction to SQL Server 2016 Temporal Tables
  • Microsoft and Database Lifecycle Management (DLM): The DacPac
  • Display SSIS package version on the Control Flow design surface
  • SSAS DSV COM error from SSDT SSAS design Data Source View

MySQL:

  • If you run multiple MySQL instances on a Linux machine, chances are good that at one time or another, you’ve ended up connected to an instance other than what you had intended.
  • MySQL Group Replication: Plugin Version Access Control.
  • MySQL 5.7 comes with many changes. Some of them are better explained than others.
  • What Makes the MySQL Audit Plugin API Special?
  • Architecting for Failure – Disaster Recovery of MySQL/MariaDB Galera Cluster

 

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL.

The post Log Buffer #435: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Partitioning in Hive Tables

Pythian Group - Fri, 2015-08-07 11:03

Partitioning a large table is a general practice for a few reasons:

  • Improving query efficiency by avoiding the transfer and processing of unnecessary data.
  • Improving data lineage by isolating batches of ingestion, so that if an ingestion batch fails for some reason and introduces some corrupted data, it’s safe to re-ingest the data.

With that being said, this practice often results in a table with a lot of partitions, which makes querying a full table or a large part of it a very slow operation. It also makes the Hive client executing the query “memory hungry”. This is mainly caused by how Hive processes a query: before generating a query plan, the Hive client needs to read the metadata of all partitions. That means a lot of RPC round trips between the Hive client and the Hadoop namenode, as well as RDBMS transactions between the Hive client and the metastore. It’s a slow process and it also consumes a lot of memory. A simple experiment using Hive 0.12 shows that it takes around 50KB of heap space to store all the data structures for each partition. Below are two examples from a heap dump of a Hive client executing a query which touches 13k+ partitions.

 

[Heap dump screenshots]

We can set HADOOP_HEAPSIZE in hive-env.sh to a larger number to keep ourselves out of trouble. The HADOOP_HEAPSIZE will be passed as the -Xmx argument to the JVM. But if we want to run multiple Hive queries at the same time on the same machine, we will run out of memory very quickly. Another thing to watch out for when increasing the heap size: if the parallel GC is used for the JVM, which is the default option for the Java server VM, and if the maximum GC pause time isn’t set properly, a Hive client dealing with a lot of partitions will quickly increase its heap size to the maximum and never shrink it back down.

Another potential problem of querying a large number of partitions is that Hive uses CombineHiveInputFormat by default, which instructs Hadoop to combine all input files that are smaller than the “split size” into splits. The algorithm used to do the combining is “greedy”: it bins larger files into splits first, then smaller ones. So the “last” couple of splits combined usually contain a huge number of small files (depending on how unevenly the size of the input files is distributed). As a result, those “unlucky” map tasks which get these splits will be very slow compared to other map tasks and consume a lot of memory to collect and process the metadata of their input files. Usually you can tell how bad the situation is by comparing the SPLIT_RAW_BYTES counters of map tasks.

A possible solution to this problem is creating two versions of that table: one partitioned, and one non-partitioned. The partitioned one is still populated the way it is today. The non-partitioned one can be populated in parallel with the partitioned one by using “INSERT INTO”. One disadvantage of the non-partitioned version is that it’s harder to revise if corrupted data is found in it, because in that case the whole table has to be rewritten (though, starting with Hive 0.14, UPDATE and DELETE statements are allowed for tables stored in ORC format). Another possible problem of the non-partitioned version is that the table may contain a large number of small files on HDFS, because every “INSERT INTO” creates at least one file. As the number of files in the table increases, querying the table slows down. So a periodic compaction is recommended to decrease the number of files in the table. It can be done by simply executing “INSERT OVERWRITE SELECT * FROM” periodically. You need to make sure no other inserts are being executed at the same time, or data loss will occur.
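As a rough sketch of that two-copy pattern (the table names events_part and events_flat and the partition column ingest_date are invented for illustration, not taken from the post):

-- Populate the non-partitioned copy in parallel with the normal batch load
-- of the partitioned table.
INSERT INTO TABLE events_flat
SELECT id, payload
FROM   events_part
WHERE  ingest_date = '2015-08-07';

-- Periodic compaction: rewrite the table onto itself so Hive merges the
-- many small files into fewer, larger ones. No other inserts into
-- events_flat may run while this statement executes.
INSERT OVERWRITE TABLE events_flat
SELECT * FROM events_flat;

Hive stages the SELECT results before replacing the table's files, which is what makes the self-referencing INSERT OVERWRITE usable as a compaction step, as long as nothing else writes to the table at the same time.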

Learn more about Pythian’s expertise in Big Data.

The post Partitioning in Hive Tables appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

CBO catchup

Jonathan Lewis - Fri, 2015-08-07 06:10

It’s interesting to watch the CBO evolving and see how an enhancement in one piece of code doesn’t necessarily echo through to all the other places it seems to fit. Here’s an example of an enhancement that spoiled (or, rather, made slightly more complicated) a little demonstration I had been running for about the last 15 years – but (in a fashion akin to another partitioning limitation) doesn’t always work in exactly the way you might expect.

I wrote a note some time ago about the way that the optimizer could pre-compute the result of what I called a “fixed subquery” (such as “select 9100 from dual”) and take advantage of the value it derived to do a better job of estimating the cardinality for a query. That’s a neat feature (although it may cause some 3rd party applications a lot of pain as plans change on the upgrade to 11.2.0.4 or 12c) but it doesn’t work everywhere you might hope.

I’m going to create two (small) tables with the same data, but one of them is going to be a simple heap table and the other is going to be partitioned by range; then I’m going to run the same queries against the pair of them and show you the differences in execution plans. First the tables:


create table t1
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id,
        lpad(rownum,10,'0')     v1,
        rpad('x',100)           padding
from
        generator       v1
where
        rownum <= 1e4
;

create table pt1(
        id, v1, padding
)
partition by range (id) (
        partition p02000 values less than ( 2001),
        partition p04000 values less than ( 4001),
        partition p06000 values less than ( 6001),
        partition p08000 values less than ( 8001),
        partition p10000 values less than (10001)
)
as
select * from t1
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'PT1',
                granularity      =>'ALL',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

alter table  t1 add constraint  t1_pk primary key(id);
alter table pt1 add constraint pt1_pk primary key(id) using index local;

create or replace function f(i_in  number)
return number
is
begin
        return i_in;
end;
/

Note that I’ve used ‘ALL’ as my granularity option – for such small tables this should mean that the statistics at the partition and global level are as accurate as they can be. And since the data is defined to be uniform I don’t expect the partitioning to introduce any peculiarities in the optimizer’s calculations of selectivity and cardinality. I’ve created the indexes after gathering stats on the tables – this is 12c (and 11.2.0.4) so the index stats will be collected with a 100% sample as the indexes are created. Finally I’ve created a function that simply returns its numeric input.

Now let’s run a couple of queries against the simple table and check the cardinality (Rows) predicted by the optimizer – the two plans follow the code that generated them:

set serveroutput off

select  max(v1)
from    t1
where   id between (select 500 from dual)
           and     (select 599 from dual)
;

select * from table(dbms_xplan.display_cursor);

select  max(v1)
from    t1
where   id between (select f(500) from dual)
           and     (select f(599) from dual)
;

select * from table(dbms_xplan.display_cursor);

======================
Actual Execution Plans
======================

select max(v1) from t1 where id between (select 500 from dual)
  and     (select 599 from dual)

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     4 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |    15 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |   101 |  1515 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_PK |   101 |       |     2   (0)| 00:00:01 |
|   4 |     FAST DUAL                        |       |     1 |       |     2   (0)| 00:00:01 |
|   5 |     FAST DUAL                        |       |     1 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">= AND "ID"<=)

select max(v1) from t1 where id between (select f(500) from dual)
     and     (select f(599) from dual)

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |    15 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    25 |   375 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T1_PK |    45 |       |     2   (0)| 00:00:01 |
|   4 |     FAST DUAL                        |       |     1 |       |     2   (0)| 00:00:01 |
|   5 |     FAST DUAL                        |       |     1 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">= AND "ID"<=)

In the first plan the optimizer has recognised the values 500 and 599, so its range-based calculation has produced a (matching, and nearly correct) prediction of 101 rows. In the second plan the function call has hidden the values so the optimizer has had to use the arithmetic for “ranges with unknown values” – which means it uses guesses for the selectivity of 0.45% for the index and 0.25% for the table. Maybe in a future release that f(500) will be evaluated in the same way that we can trigger in-list calculation with the precompute_subquery hint.

Now we repeat the query, but using the partitioned table – showing only the trimmed output from dbms_xplan.display_cursor():

select max(v1) from pt1 where id between (select 500 from dual)
   and     (select 599 from dual)

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |        |       |       |     4 (100)|          |       |       |
|   1 |  SORT AGGREGATE                             |        |     1 |    15 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR                  |        |   101 |  1515 |     4   (0)| 00:00:01 |   KEY |   KEY |
|   3 |    TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PT1    |   101 |  1515 |     4   (0)| 00:00:01 |   KEY |   KEY |
|*  4 |     INDEX RANGE SCAN                        | PT1_PK |   101 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
|   5 |      FAST DUAL                              |        |     1 |       |     2   (0)| 00:00:01 |       |       |
|   6 |      FAST DUAL                              |        |     1 |       |     2   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("ID">= AND "ID"<=)

select max(v1) from pt1 where id between (select f(500) from dual)
      and     (select f(599) from dual)

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |        |       |       |     3 (100)|          |       |       |
|   1 |  SORT AGGREGATE                             |        |     1 |    15 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR                  |        |    25 |   375 |     3   (0)| 00:00:01 |   KEY |   KEY |
|   3 |    TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PT1    |    25 |   375 |     3   (0)| 00:00:01 |   KEY |   KEY |
|*  4 |     INDEX RANGE SCAN                        | PT1_PK |    45 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
|   5 |      FAST DUAL                              |        |     1 |       |     2   (0)| 00:00:01 |       |       |
|   6 |      FAST DUAL                              |        |     1 |       |     2   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("ID">= AND "ID"<=)

It’s great to see that the predicted cardinalities match the simple heap version exactly – but can you see anything odd about either of these plans ?

Pause for thought …
There’s nothing odd about the second plan, but there’s a little puzzle in the first.

In theory, it seems, the optimizer is aware that the first query covers the range 500 – 599; so why do the Pstart/Pstop columns for operations 2-4 show KEY-KEY? That combination usually means the optimizer knows it will have some partition key values at run time and will be able to do run-time partition elimination, but doesn’t know what those key values are at parse time.

In this very simple case it’s (probably) not going to make any difference to the performance – but it may be worth some careful experimentation in more complex cases where you might have been hoping to see strict identification of partitions and partition-wise joins taking place. Yet another topic to put on the to-do list of “pre-emptive investigations”, with a reminder to re-run the tests from time to time.
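As a quick point of comparison when experimenting (purely a sketch, and assuming, as in the tests above, that id is the partition key), you could run the same query with literal values; the optimizer can then identify the partitions at parse time, so the Pstart/Pstop columns should show explicit partition numbers rather than KEY-KEY:

select  max(v1)
from    pt1
where   id between 500 and 599
;

select * from table(dbms_xplan.display_cursor);

-- Expectation: the PARTITION RANGE ITERATOR (or SINGLE) line and the operations
-- below it report real partition numbers in Pstart/Pstop instead of KEY-KEY.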

Solutions for Cloud Reporting: Oracle Enterprise Performance Reporting Cloud Service (EPRCS)

OracleApps Epicenter - Thu, 2015-08-06 23:33
Oracle Enterprise Performance Reporting Cloud Service was released last month. It is the latest addition to the EPM cloud suite of applications and is a purpose-built solution for narrative-driven management and financial performance reporting. This is not Hyperion Financial Reports on the cloud! It is a completely separate product that is […]
Categories: APPS Blogs

Mongostat ; A Nifty Tool for Mongo DBA

Pakistan's First Oracle Blog - Thu, 2015-08-06 20:56
One of a MongoDB DBA's main tasks is to monitor the usage of the MongoDB system and its load distribution. This may be needed for proactive monitoring, troubleshooting during performance degradation, root cause analysis, or capacity planning.

Mongostat is a nifty tool that comes out of the box with MongoDB and provides a wealth of information in a nice, familiar format. If you have used vmstat, iostat, etc. on Linux, mongostat should seem very familiar.


Mongostat dishes out statistics such as counts of database operations by type (e.g. insert, query, update, delete, getmore). The vsize column in the mongostat output shows the amount of virtual memory, in megabytes, used by the process. There are other very useful columns covering network traffic, connections, queuing, and more.


Following are some examples of running mongostat.

[mongo@mongotest data]$ mongostat

insert query update delete getmore command flushes mapped  vsize    res faults qr|qw ar|aw netIn netOut conn     time
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:29
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:30
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:31
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:32
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:33

*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:34
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:35
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:36
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:37
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:38

The following displays just 5 rows of output.

[mongo@mongotest data]$ mongostat -n 5
insert query update delete getmore command flushes mapped  vsize    res faults qr|qw ar|aw netIn netOut conn     time

*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:45
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:46
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:47
*0    *0     *0     *0       0     1|0       0 160.0M 646.0M 131.0M      0   0|0   0|0   79b    10k    1 12:47:48
*0    *0     *0     *0       0     2|0       0 160.0M 646.0M 131.0M      0   0|0   0|0  133b    10k    1 12:47:49

To see the full list of options:

[mongo@mongotest data]$ mongostat --help

Usage:
mongostat

Monitor basic MongoDB server statistics.

See http://docs.mongodb.org/manual/reference/program/mongostat/ for more information.

general options:
      --help                     print usage
      --version                  print the tool version and exit

verbosity options:
  -v, --verbose                  more detailed log output (include multiple times for more verbosity, e.g. -vvvvv)
      --quiet                    hide all log output

connection options:
  -h, --host=                    mongodb host to connect to (setname/host1,host2 for replica sets)
      --port=                    server port (can also use --host hostname:port)

authentication options:
  -u, --username=                username for authentication
  -p, --password=                password for authentication
      --authenticationDatabase=  database that holds the user's credentials
      --authenticationMechanism= authentication mechanism to use

stat options:
      --noheaders                don't output column names
  -n, --rowcount=                number of stats lines to print (0 for indefinite)
      --discover                 discover nodes and display stats for all
      --http                     use HTTP instead of raw db connection
      --all                      all optional fields
      --json                     output as JSON rather than a formatted table
Categories: DBA Blogs

Royal High School Students Visit Oracle

Oracle AppsLab - Thu, 2015-08-06 20:29

Last week a group of high school students from Royal High School visited Oracle Headquarters in Redwood Shores, California.

Royal High School, a public school in Simi Valley, California, is launching an International Business Pathway program. This program is part of California’s Career Pathways Trust (CCPT), which was established in 2013 by the California State Legislature to better prepare students for the 21st Century workplace.

The goal of the visit was to introduce students to real life examples of what they will be studying in the year ahead, which include Business Organization and Environment, Marketing, Human Resources, Operations, and Finance.

I was honored to be invited to be on a career panel with three other Oracle colleagues and share our different careers and career paths.

L-R Chris Kite, VP Finance A&C/NSG; Jessica Moore, Sr Director Corporate Communications; Thao Nguyen, Director Research & Design; Kym Flaigg, College Recruiting Manager

While Oracle is known as a technology company, it is made up of many different functional areas beyond engineering. The panel shared our diverse backgrounds and education, our different roles within the organization, the different cultures within Oracle, and more.

Since these are students in an international business program, we also discussed Oracle as a global business. The panelists shared our individual involvement and impact on Oracle’s international business – from working with Oracle colleagues located throughout the world to engaging with global customers, partners, and journalists.

By the end, the students heard stories of our professional and personal journeys to where we are now. The common themes were to be authentic and true to yourself, change is inevitable, and it is a lifetime of learning. All of the panelists started on one path but ultimately found new interests and directions.

The students learned there are many different opportunities in companies and many different paths to achieve career and life goals. Bring your passion to work and you’ll succeed.

On a personal note, I grew up in the same area as these students: the San Fernando Valley in Southern California. I moved from the San Fernando Valley to the Silicon Valley years ago, but thanks to Oracle Giving, I am able to give back to my roots and am proud to participate in Oracle’s community outreach.Possibly Related Posts:

Universities As Innovators That Have Difficulty Adopting Their Own Changes

Michael Feldstein - Thu, 2015-08-06 19:16

By Phil Hill

George Siemens made an excellent point in his recent blog post after his White House meeting.

I’m getting exceptionally irritated with the narrative of higher education is broken and universities haven’t changed. This is one of the most inaccurate pieces of @#%$ floating around in the “disrupt and transform” learning crowd. Universities are exceptional at innovating and changing.

While I agree with his primary point about false narratives built on simplistic no-change assumptions, I think there is a risk of going too far in the other direction. Universities have certainly changed, and there are many innovations within universities, but universities are not very good at diffusing the innovations that they do make. I made this same argument here and here.[1] Campus changes are apparent, but too often I see innovative course designs showing real results while courses in the same department remain unchanged.

In my opinion, universities are exceptional at innovating, but they are not exceptional at changing.

In our e-Literate TV series on personalized learning, every case we reviewed was university driven, not vendor or foundation driven. The universities drove the changes, and much of what we saw was very encouraging. But that does not mean that universities don’t face barriers in getting more faculty and course offerings to adopt changes that work. Take the University of California at Davis, where they are transforming large introductory STEM lecture courses into active learning laboratories that get students to really learn concepts and not just memorize facts. I’ve highlighted what they’re doing and how they’re doing it, but episode 3 of the case study also highlights the key barriers they face in adopting their own changes. I do not think UC Davis is unique here, just very open about it. The following is an interview with the iAMSTEM group that is supporting faculty and teaching assistants with the changes.

Phil Hill: But the biggest barrier might be with faculty members. Too often, the discussion is about resistance to new ideas without addressing the existing structural barriers.

It sounds like there are some very exciting changes—boisterous students, people wanting to learn—is some of what I’m hearing. What’s the biggest barrier that you guys face in terms of getting more of this cultural change to go throughout UC Davis? What do you see as the biggest barrier moving forward?

Erin Becker: Can I take this one?

Chris Pagliarulo: I think we all have some in mind.

Phil Hill: I’ll ask each one of you, so Erin?

Erin Becker: Incentivizing good teaching at the university—as it currently stands, most incentives that are built into the tenure package are based on research quality not on teaching quality.

So, asking instructors to put a lot of time and effort and energy into making these big instructional changes—it’s hard to incentivize that. If they’re going up for tenure, they want to spend more time in the lab.

Chris Pagliarulo: It’s risky.

Phil Hill: So, it’s the faculty compensation or reward system is not in alignment with spending time on improving teaching. Is that an accurate statement?

Chris Pagliarulo: Yep, that’s a key structural barrier.

Phil Hill: So, Chris, what would you say? Even if it’s the same thing, what do you see as the biggest barrier to this cultural shift?

Chris Pagliarulo: The next step would be, let’s imagine it was incentivized. It takes a lot of work to transform your instruction, and it’s also a bit of an emotional rollercoaster. When you change out of a habitual behavior, they call it the “J curve”. Immediately, your performance goes down, your attitude and affect goes down, and it takes somebody there to help you through both that process—and we need expertise, so there’s a major resource deficit that we have now.

If everyone was intellectually and emotionally ready to transform their instruction, it’s going to take a lot of work and a lot of resources to get there. So, that’s another thing that we would need to ramp up.

In other parts of the same episode, the UC Davis team talks about student expectations (active learning is hard and requires accountability from students, which is not easy at first) and student course evaluations (designed more for ‘do you like the teacher and style’ than ‘is this an effective course’). In separate interviews, two faculty members (Marc Facciotti and Michelle Igo), who not only teach the redesigned courses but were also key parts of the design process (you know, innovating), talked about how much time this takes. They have to get up to speed on pedagogical design, teach the course, sit in their peers’ courses to watch and learn, adjust their own courses, and improve each semester. They described not only the time commitments but also the risk to their own careers from spending this much time on course redesign.[2]

There is nothing new here, just the opportunity to hear it from first-hand participants.

The point is, universities are not exceptional at adopting their own changes, because there are structural barriers such as faculty rewards, student expectations, and student course evaluations. Change happens, but it is difficult and slow. The faculty who lead change often do so at their own risk and in spite of their career needs, not in support of them. None of this obviates George’s frustration with the no-change, "disrupt and transform" learning crowd (and I agree that is a big problem). But let’s not adopt the opposite viewpoint that all is well with the change process.

Note that I do not think that George is actually arguing for the all-is-well point, as evidenced in the Chronicle article on his blog post.

“Admittedly colleges have been slower to respond than corporations have” to changes in technology, Mr. Siemens added. But that’s how it should be, he argued. “When a university takes a big pedagogical risk and fails, that’s impacting someone’s life.” He admitted that colleges could be moving faster, but he felt that it is disingenuous to ignore the changes that are happening.

  1. The first article had more of a technology focus, but the same applies to the pedagogical side of change.
  2. Unfortunately these parts of the interviews ended up on the cutting room floor and are not in the videos.

The post Universities As Innovators That Have Difficulty Adopting Their Own Changes appeared first on e-Literate.

Oracle Priority Support Infogram for 06-AUG-2015

Oracle Infogram - Thu, 2015-08-06 15:04

RDBMS
Is Your Database The Next Ticking Time Bomb?, from Database Trends and Applications.
SQL
The Problem with SQL Calling PL/SQL Calling SQL, from All Things SQL.
Oracle Technology
From ArchBeat:
Twitter Tuesday: Top 10 Tweets - July 30-August 3, 2015
Top Ten 2 Minute Tech Tip Videos for July 2015
Oracle Utilities
Installing Application Management for Oracle Utilities in Offline Mode, from The Shorten Spot (@theshortenspot).
Data Integration
Chalk Talk Video: Oracle Big Data Preparation Cloud Service, from the Data Integration blog.
Solaris
Creating Scheduled Services in Oracle Solaris, from the Solaris SMF Weblog.
WebLogic
Responsive UI Support in ADF 12.1.3, from WebLogic Partner Community EMEA.
And from the same source:
Additional new material WebLogic Community
Java
Singletons, Singletons..., from The Java Source.
Several good postings at Geertjan’s Blog:
Released: NetBeans IDE 8.1 Beta
Quickly See Implemented/Overridden Methods in Navigator in NetBeans IDE
Enum Code Completion in NetBeans IDE
NetBeans "Find Usages" Includes Dependencies
SOA
SOA 12c End-to-end (e2e) Tutorial, from the SOA & BPM Partner Community Blog.
And from the same source:
SOA Suite 12c Essentials Exam available
EBS
From the Oracle E-Business Suite Support blog:
Webcast: Release from Advanced Supply Chain Planning (ASCP) to Oracle Process Manufacturing (OPM)
Webcast: Unprocessed Subledger Analyzer for Cost Management
Procurement Analyzers
BOM Calendar - Is Your MRP Bucket Half Full?
Webcast: Setup & Troubleshooting Dunning Plans in Oracle Advanced Collections
Troubleshooting the Closing of Work Orders in EAM and WIP
From the Oracle E-Business Suite Technology blog:
Using Oracle Database In-Memory with Oracle E-Business Suite
Oracle Data Guard 12.1.0.2 Certified with EBS 12.2
Database 12.1.0.2 Certified with EBS 11i on Additional Platforms
Transportable Database 12c Certified for EBS 12.2 Database Migration
Database 12.1.0.2 Certified with EBS 12.2 on Additional Platforms