
Feed aggregator

2015 hyundai santa fe sport Release Date

EBIZ SIG BLOG - Thu, 2014-10-30 01:29
Vehicles like the 2015 Hyundai Santa Fe Sport are responsible for Hyundai's reputation as a brand that offers quality and affordability. Where the previous Santa Fe had an odd but distinctive look, the recently redesigned current version is about as mainstream as you can get. View the Santa Fe from more than a few angles and you may notice a resemblance to more upscale midsize crossover SUVs such as the Lexus RX and VW Touareg. That's not to say it looks notably generic; it doesn't. What the Santa Fe does provide is a familiar look and a quality interior that verges on expensive, particularly in Limited trim.

The Santa Fe can also be equipped with this segment's expected options, including a third-row seat and a navigation system. Buyers may also choose from models with one of two V6 engines, front- or all-wheel drive and a manual or automatic transmission. Unfortunately, you can't really mix and match: base models have a smaller V6 with less power, and several of the Limited's comfort and convenience options are exclusive to that trim.

Overall, though, the 2015 Hyundai Santa Fe Sport is a strong possibility for young families needing all-purpose transportation. We wouldn't recommend it if sporty driving dynamics are a priority. The smaller Mazda CX-7 and Mitsubishi Outlander, as well as the midsize Nissan Murano, would all be preferable in that case, and Toyota's Highlander is roomier and quicker. But if you want a well-rounded crossover for the money, the Hyundai Santa Fe is reasonably convincing.

The 2015 Hyundai Santa Fe midsize crossover sport-utility is available in three trim levels: base GLS, SE and Limited. The GLS starts off with 16-inch alloy wheels, full power accessories, keyless entry, cruise control and a six-speaker sound system with a CD/MP3 player, satellite radio, an auxiliary audio jack and a USB port. The midlevel SE trim adds a stronger V6 engine, 18-inch alloy wheels, an auto-dimming rearview mirror, automatic headlights, a trip computer and steering-wheel-mounted audio controls. The top Santa Fe Limited adds a sunroof, leather upholstery, heated front seats, a power driver seat, dual-zone automatic climate control and a premium Infinity sound system with a six-CD changer.

Some of the Limited's extra features are offered as options on the GLS and SE. Other options include a third-row seat with auxiliary rear climate controls and Bluetooth for wirelessly connecting your phone to the car. A towing preparation package is standard on SE and Limited models, and an optional navigation system and a rear-seat entertainment system are offered on the Santa Fe Limited only.

In GLS trim, the Santa Fe comes with a 2.7-liter V6 that produces 185 horsepower and 183 pound-feet of torque. The SE and Limited feature a larger 3.3-liter V6 good for 242 hp and 226 lb-ft of torque. A five-speed manual transmission is standard with the base engine, and a four-speed automatic is optional. The larger V6 comes standard with a five-speed automatic transmission and accelerates the Santa Fe from zero to 60 mph in 8.7 seconds.

All 2015 Santa Fe models are offered with either front-wheel-drive or all-wheel-drive powertrains. The electronically controlled AWD system automatically routes power to the wheels with the best traction. For improved performance in slippery or off-road conditions, a driver-selectable AWD lock provides a fixed 50/50 torque split between the front and rear wheels.

Properly equipped, the Santa Fe can tow up to 3,500 pounds. EPA-estimated fuel economy is nearly identical for both engines: an AWD model with the 3.3-liter V6 has ratings of 17 mpg city/24 mpg highway and 19 mpg combined, a bit above average for this segment.
The 2015 Hyundai Santa Fe Limited offers an impressive array of standard safety features, including antilock disc brakes, traction control, stability control, front-seat side airbags, full-length head curtain airbags and active front-seat head restraints.

In government crash tests, the 2015 Santa Fe received a perfect five stars for protection in frontal and side impacts. In Insurance Institute for Highway Safety testing, it earned the highest possible rating of "Good" in both frontal-offset and side-impact tests.
This second-generation Santa Fe has an attractive dashboard and quality materials throughout. In Limited trim, the convincing faux wood and aluminum accents give the crossover a certain luxury feel. Blue instrument lighting and an optional 10-speaker Infinity sound system only add to the Santa Fe's plush interior ambience.

The driving position can be awkward for some, though, because the front seats are mounted overly high and the short bottom cushions provide minimal thigh support for taller adults.

With the optional third-row seat, the Santa Fe can accommodate up to seven passengers. Like most models in this segment, however, the third row is really best suited for kids. The second row is notably above average in terms of comfort. The split rear seats can be folded flat in both rows, and the Santa Fe splits the difference between smaller and larger crossover SUVs with 78 cubic feet of maximum cargo room.

Smaller crossover SUVs like the Mazda CX-7 and Mitsubishi Outlander are sportier and more rewarding to drive hard, though the Santa Fe's handling is certainly composed and can actually be enjoyable at times. The trade-off is that the ride can be quite busy on the highway on models with the larger wheels. During normal driving, the brake pedal feels about right, but it can get soft during hard braking.

The 2.7-liter V6 provides decent acceleration, but the extra kick and refined nature of the 3.3-liter V6, which is nearly as fuel-efficient, is noticeable and gives the Santa Fe a more substantial feel. In testing, though, we have found the larger V6's five-speed automatic can occasionally be slow to downshift for quick passing or merging maneuvers.

Categories: APPS Blogs

Introduction to Zone Maps Part II (Changes)

Richard Foote - Thu, 2014-10-30 00:45
In Part I, I discussed how Zone Maps are new index-like structures, similar to Exadata Storage Indexes, that enable the "pruning" of disk blocks during accesses of the table by storing the min and max values of selected columns for each "zone" of a table, a zone being a range of contiguous (8M) blocks. I […]
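The pruning idea described in the excerpt can be illustrated with a small Python toy model (an illustration of the concept only, not Oracle internals): each zone records the min and max of a column over its block range, and an equality predicate only needs to scan zones whose interval could contain the value.

```python
# Toy model of zone-map pruning (illustrative only, not Oracle internals).
# A "zone" summarises a contiguous range of blocks with the min/max of a
# column; a query on col = value skips every zone whose [min, max]
# interval cannot contain the value.
from dataclasses import dataclass

@dataclass
class Zone:
    block_range: tuple  # (first_block, last_block) of the contiguous range
    col_min: int
    col_max: int

def zones_to_scan(zones, value):
    """Return only the zones whose min/max interval could hold `value`."""
    return [z for z in zones if z.col_min <= value <= z.col_max]

zones = [
    Zone((0, 1023), 1, 100),
    Zone((1024, 2047), 101, 200),
    Zone((2048, 3071), 150, 300),
]
# A predicate col = 180 prunes the first zone entirely: only the second
# and third zones overlap the value, so only their blocks are read.
```

Note the intervals may overlap (as in the second and third zones above), so pruning is conservative: a zone is read whenever it *might* contain a match.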
Categories: DBA Blogs

Cupuwatu Resto Jogja

Daniel Fink - Thu, 2014-10-30 00:39
Be sure to book your trip and settle your plans beforehand, because you'll be hard pressed to do so otherwise. All the advance preparations are worthwhile anyway, given the sheer number of wonderful bird species you are certain to see, along with the valuable socializing and networking you'll do with other new and longtime bird watching enthusiasts alike.

Whether you consider bird watching a casual hobby or your life's passion, keep in mind that bird watching during spring migration is something you should try, even if you only try local haunts to start with.

Be forewarned, though, that experiencing this sort of event may convert you into a serious bird watcher whether you plan to be one or not.
Bird Watching Scopes

Bird watching is a popular hobby in North America, with more than fifty million participants already, and this figure is expected to grow. Of course, you have to get near the birds in order to see them, which is why the amateur needs a bird watching scope.

While some people use binoculars, hunting optics tend to be far better, and if you want to know which of these are the best to buy, they are notably ATN, Bushnell, Leupold, Nikon and Swarovski.

So you know what species you've seen in your scope, you may want to consult a bird watching guidebook that is readily available at the store.

But what if you want to watch and take photos of the birds at the same time? For that, you will need digital camera binoculars.

This type of bird watching scope allows you to take high-quality pictures. These happen to be the latest innovation in the technology, letting you take shots even in the dark and then transfer them to your desktop or laptop computer.

Very lightweight and compact, you can easily carry it wherever you go. If you don't like the image you took, you can review it on the LCD screen, then delete it and take a better shot.

Another nice feature of digital camera binoculars is that you can record live video for up to twenty seconds or more. They are very affordable and a great addition for those who want to take bird watching to the next level.

Some of the brands that sell these bird watching camera binoculars include Barska, Bushnell, Celestron and Meade. These companies are the best in the business, and if you want to take more shots, you simply have to change the memory card from 64MB to one that can hold 1GB.

There are two features you have to look for when comparing these brands: the objective lens size and the magnification power. Objective lens size determines the field of view, while the magnification power may range from 7x to 10x.

Of course, don't forget to check whether the bird watching scope has a clear LCD display, good resolution and, as mentioned earlier, the video capture option. Since it's hard to stay in one position for a long time, you should also check whether your new toy can be mounted on a tripod.

As much as we want to buy the best digital camera binoculars around, one thing we have to consider is our budget. If some brands are beyond your price range, you can either wait until you have more money or settle for those that are within your reach. Once you have narrowed that down, it's time for you to try them out.

It should be sturdy, lightweight and waterproof, because it has to withstand the elements. Speed is also another factor, so that you are able to capture the bird should it fly away suddenly and add the shot to your collection at home.

Creating a WebLogic 12c Data Source Connection to Pivotal GemFireXD 1.3

Pas Apicella - Wed, 2014-10-29 20:05
I am going to show how you would create a WebLogic data source to Pivotal GemFireXD 1.3. In this example I am using the developer edition of WebLogic, which is known as "Free Oracle WebLogic Server 12c (12.1.3) Zip Distribution and Installers for Developers". You can download and configure it as follows.

Note: I am assuming you have WebLogic 12C running with GemFireXD also running. I am also assuming a WLS install directory as follows with a domain called "mydomain"


1. Ensure you have the GemFireXD client driver copied into your WLS domain lib directory as follows, prior to starting WLS


2. Navigate to the WebLogic Console as follows


3. Login using your server credentials

4. From the Domain Structure tree navigate to "Services -> Data Sources"

5. Click on "New -> Generic Data Source"

6. Fill in the form as follows

Name: GemFireXD-DataSource
JNDI Name: jdbc/gemfirexd-ds
Type: Select "Other" from the drop down list box

7. Click "Next"
8. Click "Next"
9. Uncheck "Supports Global Transactions" and click next
10. Enter the following details for credentials. The GemFireXD cluster is not set up for authentication, so this is just a placeholder username/password to allow us to proceed.

Username: app
Password: app

11. Click "Next"
12. Enter the following CONFIG parameters for your GemFireXD Cluster

Driver Class Name: com.pivotal.gemfirexd.jdbc.ClientDriver
URL: jdbc:gemfirexd://localhost:1527/
Test Table Name: sysibm.sysdummy1

Leave the rest at their default values; it's vital you don't alter the defaults here.

13. Click the "Test Configuration" button at this point to verify you can connect. If successful, you will see a message as follows

14. Click "Next"
15. Check the server you wish to target this Data Source to. If you don't do this, the Data Source will not be deployed or accessible. In a developer-only WLS install you will only have "myserver" to select.

16. Click "Finish"

It should show that you're all done and that no restarts are required. To access the Data Source you need to use JNDI with the path "jdbc/gemfirexd-ds".
Categories: Fusion Middleware

OBIEE How-To: A View Selector for your Dashboard

Rittman Mead Consulting - Wed, 2014-10-29 20:00

A common problem report developers face is that user groups have different needs and preferences, and as a consequence these user groups want to see their data presented in different ways. Some users preferring a graph while others want a table is a classic example. So, how do we do this? It’s a no brainer… we use a view selector. View selectors give us a great amount of flexibility by allowing us to swap out one analysis view for another. You might even take it a step further and use a view selector to swap out an entire compound layout for another one, giving the user an entirely different set of views to look at. Truly powerful stuff, right?

But view selectors do have one limitation… they’re only available at the analysis level. What if you wanted this selector functionality at the dashboard level so that you could swap out an analysis from one subject area for one from different subject area? Or what if you wanted to be able to switch one dashboard prompt for another one? You’re out of luck, it’s just not possible…

Just kidding… of course it’s possible. As it turns out, it’s fairly straightforward to build your own dashboard level view selector using other objects already provided by OBIEE out-of-the-box.

Create a dashboard variable prompt to drive the content. We need a way for the users to select the view they want to see. View selectors have a built in dropdown prompt to accomplish this at the analysis level. To do this at the dashboard level we’re going to use a dashboard prompt.

So, the first step is to create a new dashboard prompt object and add a variable prompt. You can name the variable whatever you wish, for this example we’re just going to call it P_SECTION. You can set the User Input to whatever you want, but it’s important that only one option is selected at a time… multiple values should not be allowed. Let’s set the user input to “Choice List” and add some custom values.

What you name these custom values isn’t important but the labels should be descriptive enough so that the users understand the different options. Just keep in mind, the values you use here will need to exactly match the analysis we create in the next step. For this example, let’s use ‘Section1′, ‘Section2′, and ‘Section3′ to keep things simple.

JFB - View Selector - P_SECTION Prompt

 Create an analysis to drive the conditional logic. We need to create an analysis that will return a set number of rows for each of the options in the prompt we just created. The number of rows returned then drives which section we see on the dashboard.

Ultimately, the logic of this analysis doesn’t matter, and there are a dozen ways to accomplish this. To keep things simple, we’re just going to use CASE statements. While not an elegant solution, it’ll work just fine for our basic example.

Add three columns to the criteria, we’ll use a Time dimension and modify the column formula with the following CASE statements. Make sure that the text strings match the Custom Values used in the prompt.

CASE WHEN "Time"."T05 Per Name Year" IN ('2006') THEN 'Section1' END

CASE WHEN "Time"."T05 Per Name Year" IN ('2006', '2007') THEN 'Section2' END

CASE WHEN "Time"."T05 Per Name Year" IN ('2006', '2007', '2008') THEN 'Section3' END

JFB - View Selector - Table

Now we need to update the filter so that the appropriate rows are shown based upon what the user selects. Basically, we need the request to return 1, 2, or 3 rows based upon our P_SECTION presentation variable.

For our example we’re going to create a filter for each of the options and set them equal to the presentation variable we created earlier in our dashboard prompt. Only one filter will be true at a time so the operator between these filters has been set to OR. Also you’ll notice that the default value for the presentation variable has been set to ‘Section1′, across the board. If, for whatever reason, the P_SECTION variable isn’t set we want the dashboard to default to the first section.

JFB - View Selector - Filter

CASE WHEN "Time"."T05 Per Name Year" IN ('2006') THEN 'Section1' END is equal to / is in @{P_SECTION}{Section1}
OR CASE WHEN "Time"."T05 Per Name Year" IN ('2006', '2007') THEN 'Section2' END is equal to / is in @{P_SECTION}{Section1}
OR CASE WHEN "Time"."T05 Per Name Year" IN ('2006', '2007', '2008') THEN 'Section3' END is equal to / is in @{P_SECTION}{Section1}

So, let’s quickly walk through how this works. The end user selects ’Section1’ from the dashboard prompt. That selection is stored in our P_SECTION presentation variable, which is then passed to and used by our filter. With ‘Section1’ selected only the 1st line of the filter will hold true which will result in a single row returned. When ‘Section2’ is chosen, the second row of the filter is true which returns two rows, and so on.
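That walkthrough can also be sketched as a few lines of Python (a hypothetical model of the dashboard logic, not anything OBIEE executes): each filter line only matches when the presentation variable equals its section label, and the resulting row count selects which section's condition holds.

```python
# Hypothetical model of the dashboard's conditional logic (not OBIEE code).
# Each tuple mirrors one line of the analysis filter: the years its CASE
# statement matches and the section label that CASE emits.
FILTER_LINES = [
    ({"2006"}, "Section1"),
    ({"2006", "2007"}, "Section2"),
    ({"2006", "2007", "2008"}, "Section3"),
]

def rows_returned(p_section: str) -> int:
    """Rows the analysis returns for a given P_SECTION value.

    Only the filter line whose label equals the selected presentation
    variable holds true, and it contributes one row per matching year.
    """
    return sum(len(years) for years, label in FILTER_LINES
               if label == p_section)

def visible_section(p_section: str = "Section1") -> str:
    """Map the row count onto the section whose condition is true
    (True If Row Count equal to 1, 2 or 3)."""
    conditions = {1: "Section1", 2: "Section2", 3: "Section3"}
    return conditions[rows_returned(p_section)]
```

Running `visible_section("Section2")` returns two rows from the model and therefore shows Section 2, which is exactly the chain described above: prompt value, presentation variable, filter, row count, condition.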

We’re almost done, in the next step we’ll create some conditions on the individual dashboard sections and put it all together.

Create sections and set some conditions. We just need to create our sections and set some conditions so that they are shown/hidden appropriately. Create a new dashboard page. Edit the dashboard page and drag three empty sections on to the page. Create a condition on the first section using the Analysis created in the last step. The first condition we need to create should be True If Row Count is equal to 1.

JFB - View Selector - Condition

Are you beginning to see how this is going to work? The only time we’ll get a single row back is when the presentation variable is set to ‘Section1’. When P_SECTION is set to ‘Section2’ we’ll get two rows back from our analysis. Go ahead and create a second condition that is True If Row Count is equal to 2 for section 2. For section 3 create a condition that’s True If Row Count is equal to 3.

JFB - View Selector - Dashboard Editor

Since we aren’t adding content to these sections, you’ll want to make sure to enable the option to “Show Section Title” or add a couple text fields so that you can easily identify which section is rendered on the page. Lastly, drag the dashboard prompt onto the page. Save the dashboard page and let’s take a look.

When the page first renders, you should see something similar to the following screenshot. The prompt is set to ‘Section1’ and sure enough, Section 1 appears below it. If you change the selection to ‘Section2’ or ‘Section3’ and hit apply, Section 1 will be hidden and the corresponding content will appear. All that’s left now would be to go back and add content to the sections.

JFB - View Selector - Result

So, using only out-of-the-box features, we were able to create an extremely versatile and dynamic bit of functionality… and all it took was a dashboard prompt, an analysis to hold our conditional logic, and some sections and conditions.

This approach is just another tool that you can use to help deliver the dynamic content your users are looking for. It provides flexibility within the context of a single dashboard page and also limits the need to navigate (and maintain) multiple pages. Admittedly, the example we just walked through isn’t all that exciting, but hopefully you can see the potential.

Some of your users want a minimalist view allowing them to filter on just the basics, while others want to slice and dice by everything under the sun? Create two prompts, a basic and an advanced, and allow the users to switch between the two.

JFB - View Selector - BasicAdv

Want to pack a large amount of charts into a page while still minimizing scrolling for those poor souls working with 1024×768? No problem, have a low-res option of the dashboard.

JFB - View Selector - LowRes

The finance department wants to see a dashboard full of bar charts, but the payroll department is being totally unreasonable and only wants to see line graphs? Well, you get the idea…

Categories: BI & Warehousing

Script to count and recompile invalid objects

Bobby Durrett's DBA Blog - Wed, 2014-10-29 16:48

This is pretty simple, but I thought I would share it since it is helpful to me.  I have been preparing for a large migration which involves table, index, type, function, package, and procedure changes.  When I run a big migration like this I check for invalid objects before and after the migration and attempt to recompile any that are invalid.  By checking before and after the migration I know which objects the migration invalidated.

Here’s the script:

select status,count(*)
from dba_objects
where owner='YOURSCHEMA'
group by status
order by status;

select 'alter '||object_type||' '||owner||'.'||object_name||
       ' compile;'
from dba_objects
where owner='YOURSCHEMA' and
status='INVALID' and
object_type <> 'PACKAGE BODY';
select 'alter package '||owner||'.'||object_name||' compile body;'
from dba_objects
where owner='YOURSCHEMA' and
status='INVALID' and
object_type = 'PACKAGE BODY';

Replace “YOURSCHEMA” with the schema that your objects are in.

Output is something like this:

STATUS    COUNT(*)
------- ----------
INVALID          7
VALID        53581

alter package YOURSCHEMA.YOURPACKAGE compile body;

The counts give me a general idea of how many objects are invalid, and the ALTER statements give me SQL that I can paste into a script and run to attempt to compile the objects and make them valid.
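The same generate-and-run pattern can be scripted outside SQL*Plus as well. Here is a minimal Python sketch of the statement-generation step; the object list is hypothetical, standing in for the rows a DBA_OBJECTS query for INVALID objects would return.

```python
# Build "alter ... compile" statements from DBA_OBJECTS-style rows,
# mirroring the SQL above: package bodies get the special
# "alter package ... compile body" form, everything else gets
# "alter <type> ... compile".
def compile_statement(object_type: str, owner: str, name: str) -> str:
    if object_type == "PACKAGE BODY":
        return f"alter package {owner}.{name} compile body;"
    return f"alter {object_type.lower()} {owner}.{name} compile;"

# Hypothetical invalid objects, as the query might return them.
invalid = [
    ("PROCEDURE", "YOURSCHEMA", "LOAD_SALES"),
    ("PACKAGE BODY", "YOURSCHEMA", "YOURPACKAGE"),
]
statements = [compile_statement(*row) for row in invalid]
```

In a real run you would fetch `invalid` from the database with a driver such as cx_Oracle and execute each generated statement, rather than hard-coding the list.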

Hope this is helpful to someone else.  It’s helpful to me.

– Bobby

Categories: DBA Blogs

Pythian at Percona Live London 2014

Pythian Group - Wed, 2014-10-29 14:29

Percona Live London takes place next week from November 3-4 where Pythian is a platinum sponsor—visit us at our booth during the day on Tuesday, or at the reception in the evening. Not only are we attending, but we’re taking part in exciting speaking engagements, so be sure to check out our sessions and hands-on labs. Find those details down below.


MySQL Break/Fix Lab by Miklos Szel, Alkin Tezuysal, and Nikolaos Vyzas
Monday November 3 — 9:00AM-12:00PM
Cromwell 3 & 4

Miklos, Alkin, and Nikolaos will be presenting a hands-on lab demonstrating the evaluation of operational errors and issues in MySQL 5.6, and recovery from them. They will be covering instance crashes and hangs, troubleshooting and recovery, and significant performance issues. Find out more about the speakers below.

About Miklos: Miklos Szel is a Senior Engineer at Pythian, based in Budapest. With more than 10 years' experience in system and network administration, he has also worked for Walt Disney International as its main MySQL DBA. Miklos specializes in MySQL-based high availability solutions, performance tuning, and monitoring, and has significant experience working with large-scale websites.

About Alkin: Alkin Tezuysal has extensive experience in enterprise relational databases, working in various sectors for large corporations. With more than 19 years of industry experience, he has been able to work on large projects from the ground up to production. In recent years, he has been focusing on eCommerce, SaaS, and MySQL technologies.

About Nikolaos: Nik Vyzas is a Lead Database Consultant at Pythian, and an avid open source engineer. He began his career as a software developer in South Africa, and moved into technology consulting firms for various European and US-based companies. He specializes in MySQL, Galera, Redis, Memcached, and MongoDB on many OS platforms.


Setting up Multi-Source Replication in MariaDB 10 by Derek Downey
Monday November 3 — 2:00-5:00PM
Cromwell 3 & 4

For a long time, replication in MySQL was limited to only a single master. When MariaDB 10.0 became generally available, the ability to allow multiple masters became a reality. This has opened up the door to previously impossible architectures. In this hands-on tutorial, Derek will discuss some of the features in MariaDB 10.0, demonstrate establishing a four-node environment running on participants’ computers using Vagrant and VirtualBox, and even discuss some limitations associated with 10.0. Check out Derek’s blog post for more detailed info about his session.

About Derek: Derek began his career as a PHP application developer, working out of Knoxville, Tennessee. Now a Principal Consultant in Pythian’s MySQL practice, Derek is sought after for his deep knowledge of Galera and diagnosing replication issues.


Understanding Performance Through Measurement, Benchmarking, and Profiling by René Cannaò
Monday November 3 — 2:00-5:00PM
Orchard 2

It is essential to understand how your system performs at different workloads to measure the impacts of changes and growth, and to understand how those impacts will manifest. Measuring the performance of current workloads is not trivial, and the creation of a staging environment where different workloads need to be tested has its own set of challenges. Performing capacity planning, exploring concerns about scalability and response time, and evaluating new hardware or software configurations are all operations requiring measurement and analysis in an environment appropriate to your production setup. To find bottlenecks, performance needs to be measured both at the OS layer and at the MySQL layer: an analysis of OS and MySQL benchmarking and monitoring/measuring tools will be presented. Various benchmark strategies will be demonstrated for real-life scenarios, as well as tips on how to avoid common mistakes.

About René: René has 10 years of working experience as System, Network and Database Administrator mainly on Linux/Unix platform. In recent years, he has been focused mainly on MySQL, previously working as Senior MySQL Support Engineer at Sun/Oracle and now as Senior Operational DBA at Pythian (formerly Blackbird, acquired by Pythian.)


Low-Latency SQL on Hadoop — What’s Best for Your Cluster? by Danil Zburivsky
Tuesday November 4 — 11:20AM-12:10PM
Cromwell 3 & 4

Low-latency SQL is the Holy Grail of Hadoop platforms, enabling new use cases and better insights. A number of open-source projects have sprung up to provide fast SQL querying; but which one is best for your cluster? This session will present results of Danil’s in-depth research and benchmarks of Facebook Presto, Cloudera Impala and Databricks Shark. Attendees will look at performance across multiple storage formats, query profiles and cluster configurations to find the best engine for a variety of use cases. This session will help you to pick the right query engine for new cluster or get most out of your existing Hadoop deployment.

About Danil: Danil Zburivsky is a Big Data Consultant/Solutions Architect at Pythian. Danil has been working with databases and information systems since his early years in university, where he received a Master’s Degree in Applied Math. Danil has 7 years of experience architecting, building and supporting large mission-critical data platforms using various flavors of MySQL, Hadoop and MongoDB. He is also the author of the book Hadoop Cluster Deployment.


Scaling MySQL in Amazon Web Services by Mark Filipi and Laine Campbell
Tuesday November 4 — 5:30-6:20PM
Cromwell 3 & 4

Mark Filipi, MySQL Team Lead at Pythian, will explain the options for running MySQL at high volumes at Amazon Web Services, exploring options around database as a service, hosted instances/storages and all appropriate availability, performance and provisioning considerations. He will be using real-world examples from companies like Call of Duty, Obama for America, and many more.

Laine will demonstrate how to build highly available, manageable, and performant MySQL environments that scale in AWS—how to maintain them, grow them, and deal with failure.

About Mark: With years of experience as a MySQL DBA, Mark Filipi has direct experience administering everything from multinational corporations to tiny web start-ups. He leads a global team of talented DBAs to identify performance bottlenecks and provide consistent daily operations.

About Laine: Laine is currently the Co-Founder and Associate Vice President of Pythian’s open source database practice—the result of the acquisition of Blackbird by Pythian in June 2014. Blackbird itself was the product of a merger that involved PalominoDB, a company that Laine founded in January 2006. Prior to that, Laine spent her career working in various corporate environments, including working at Travelocity for nearly a decade building out their database team. Laine is passionate about supporting members of underserved populations to gain experience, skills, and jobs in technology.


Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s MySQL expertise.

Categories: DBA Blogs

A World View

Tim Hall - Wed, 2014-10-29 13:53

I’ve mentioned this before, but I thought I would show something visual…

The majority of my readers come from the USA and India. Since they are in different time zones, it spreads the load throughout the day. When I wake up, India are dominant.


In the afternoon the USA come online, by which time Russia have given up, but there is still a hardcore of Indians going for it! :)


I haven’t posted an evening shot as it’s the same as the afternoon one. Don’t you folks in India ever sleep?

I’m sure this is exactly the same with all other technology-related websites, but it does make me pause for thought occasionally. Most aspects of our lives are so localised, like traffic on the journey to work or family issues. It’s interesting to stop and look occasionally at the sort of reach this internet thing has given us. It may be a little rash, but I predict this interwebs thing might just catch on!



A World View was first posted on October 29, 2014 at 8:53 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

BPM & SOA Application missing in JDeveloper 12c gallery

Darwin IT - Wed, 2014-10-29 13:48
A few weeks ago I did a BPM12c Quickstart Installation under Oracle Linux 6. Everything went smoothly, as described in the install guide as well as on many blogs already.
But I found that most of those blogs did an installation under Windows, where I did it under Oracle Linux in Virtualbox.

You would think (as I did) that it shouldn't matter. However, it turns out that in JDeveloper I was missing the 'BPM Application', amongst others, in the New Gallery. Very inconvenient. I couldn't find any hints on the big internet; my friend Google wasn't very helpful in this.

But I wouldn't be writing this blog if I had not solved it. It turns out that an update fixed it.

It turns out that I lacked the 'Spring & Oracle Weblogic SCA' extension. Using the Help->Update functionality I downloaded and installed it, and after restarting JDeveloper my 'New Gallery' was properly filled.

For those not so familiar with the JDeveloper update mechanism, here is a step-by-step guide:
  1. Choose Help->Update:
  2.  Leave the Update Centers checked as default and click Next:
  3. Check 'Spring & Oracle Weblogic SCA' and click Next:
  4. Click Finish:
  5. Confirm when asked for restarting JDeveloper.

    "PL/SQL: The Scripting Language Liberator" - video recording now available

    Christopher Jones - Wed, 2014-10-29 09:35

    Oracle University has released a video from Oracle OpenWorld of a great session by Steven Feuerstein and myself. We walked through a PHP application, showed some application tuning techniques for Oracle Database, and then looked at improving the use of Oracle Database features to aid performance and scalability, and also easily add features to the application.

    The official blurb was:

    PL/SQL: The Scripting Language Liberator: While scripting languages go in and out of favor, Oracle Database and PL/SQL persist, managing data and implementing business logic. This session walks through a web application to show how PL/SQL can be integrated for better logic encapsulation and performance; how Oracle's supplied packages can be used to enhance application functionality and reduce application complexity; and how to efficiently use scripting language connection and statement handling features to get better performance and scalability. Techniques shown in this session are applicable to mobile, web, or midtier applications written in languages such as JavaScript, Python, PHP, Perl, or Ruby on Rails. Using the right tool for the right job can be liberating.

    The video is free for everyone. Lots of the other good content in the Oracle Learning Streams is available via subscription, if you're interested.

    How to Get the Most out of a Technology Conference (Podcast)

    Christopher Jones - Wed, 2014-10-29 09:29

    We did good in this recent podcast How to Get the Most out of a Technology Conference (which is cleverly disguised as a video). It has everything the inexperienced conference-goer needs to know. I'm pleased to have been able to give a shout out to PHPWomen! Despite the official blurb, the content applies to all technology conferences and there is very little that is specific to Oracle.

    Deploying a Private Cloud at Home — Part 7

    Pythian Group - Wed, 2014-10-29 08:09

    Welcome to part 7, the final blog post in my series, Deploying Private Cloud at Home, where I will be sharing the scripts to configure the controller and compute nodes. In my previous post, part six, I demonstrated how to configure the controller and compute nodes.

    Kindly update the script with the passwords you want and then execute it. I am assuming here that this is a fresh installation and no service is configured on the nodes.

    The script below configures the controller node and has two parts:

    1. Pre compute node configuration
    2. Post compute node configuration

    Running the script with “-pre” will run the pre compute node configuration and prepare the controller node and OpenStack services. “-post” will run the post compute node configuration of the controller node, as those services are dependent on the compute node services.

    #Configure controller script v 4.4
    # Rohan Bhagat             ##################
    # Email:Me at ###############
    #set variables used in the configuration
    #NOTE: the assignment values below are placeholders -- replace them with your own
    SCRIPT_VER=4.4                   #script version
    ADMIN_PASS=changeme              #Admin user password
    DEMO_PASS=changeme               #Demo user password
    #Keystone database password
    ADMIN_EMAIL=admin@example.com    #Admin user Email
    DEMO_EMAIL=demo@example.com      #Demo user Email
    GLANCE_DBPASS=changeme           #Glance db user pass
    GLANCE_PASS=changeme             #Glance user pass
    GLANCE_EMAIL=glance@example.com  #Glance user email
    NOVA_DBPASS=changeme             #Nova db user pass
    NOVA_PASS=changeme               #Nova user pass
    NOVA_EMAIL=nova@example.com      #Nova user Email
    #Neutron db user pass
    #Neutron user pass
    #Neutron user email
    #Metadata proxy pass
    MY_IP=192.168.1.10               #IP to be declared for controller
    CONTROLLER=controller            #FQDN for controller hostname or IP
    MYSQL_PASS=changeme              #MYSQL root user pass
    #Heat db user pass
    #Heat user pass
    #Heat user email
    RANGE=192.168.1.64/26            #IP range for VM Instances
    #Secure MySQL
    #Current MySQL root password leave blank if you have not configured MySQL
    # Get versions:
    if [ "$1" = "--version" -o "$1" = "-v" ]; then
      echo "`basename $0` script version $SCRIPT_VER"
      exit 0
    elif [ "$1" = "" ] || [ "$1" = "--help" ]; then
      echo "Configures controller node with pre compute and post compute deployment settings"
      echo "Usage:"
      echo "       `basename $0` [--help | --version | -pre | -post]"
      exit 0
    elif [ "$1" = "-pre" ]; then
    echo "============================================="
    echo "This installation script is based on OpenStack icehouse guide"
    echo "Found"
    echo "============================================="
    echo "============================================="
    echo "controller configuration started"
    echo "============================================="
    echo "Installing MySQL packages"
    yum install -y mysql mysql-server MySQL-python
    echo "Installing RDO OpenStack repo"
    yum install -y
    echo "Installing openstack keystone, qpid Identity Service, and required packages for controller"
    yum install -y yum-plugin-priorities openstack-utils mysql mysql-server MySQL-python qpid-cpp-server openstack-keystone python-keystoneclient expect
    echo "Modification of qpid config file"
    perl -pi -e 's,auth=yes,auth=no,' /etc/qpidd.conf
    chkconfig qpidd on
    service qpidd start
    echo "Configuring mysql database server"
    # my.cnf contents not shown in this listing
    cat > /etc/my.cnf <<EOF
    EOF
    (crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
    echo "Define users, tenants, and roles"
    export OS_SERVICE_ENDPOINT=http://$CONTROLLER:35357/v2.0
    echo "keystone admin creation"
    keystone user-create --name=admin --pass=$ADMIN_PASS --email=$ADMIN_EMAIL
    keystone role-create --name=admin
    keystone tenant-create --name=admin --description="Admin Tenant"
    keystone user-role-add --user=admin --tenant=admin --role=admin
    keystone user-role-add --user=admin --role=_member_ --tenant=admin
    echo "keystone demo creation"
    keystone user-create --name=demo --pass=$DEMO_PASS --email=$DEMO_EMAIL
    keystone tenant-create --name=demo --description="Demo Tenant"
    keystone user-role-add --user=demo --role=_member_ --tenant=demo
    keystone tenant-create --name=service --description="Service Tenant"
    echo "Create a service entry for the Identity Service"
    keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
    keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
    --publicurl=http://$CONTROLLER:5000/v2.0 \
    --internalurl=http://$CONTROLLER:5000/v2.0 \
    --adminurl=http://$CONTROLLER:35357/v2.0
    echo "Verify Identity service installation"
    echo "Request an authentication token by using the admin user and the password you chose for that user"
    keystone --os-username=admin --os-password=$ADMIN_PASS \
      --os-auth-url=http://$CONTROLLER:35357/v2.0 token-get
    keystone --os-username=admin --os-password=$ADMIN_PASS \
      --os-tenant-name=admin --os-auth-url=http://$CONTROLLER:35357/v2.0 \
      token-get
    cat > /root/ <<EOF
    export OS_USERNAME=admin
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://controller:35357/v2.0
    EOF
    source /root/
    echo "keystone token-get"
    keystone token-get
    echo "keystone user-list"
    keystone user-list
    echo "keystone user-role-list --user admin --tenant admin"
    keystone user-role-list --user admin --tenant admin
    echo "Install the Image Service"
    yum install -y openstack-glance python-glanceclient
    openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance
    openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance
    echo "configure glance database"
    mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE glance;"
    mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';"
    mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';"
    echo "Create the database tables for the Image Service"
    su -s /bin/sh -c "glance-manage db_sync" glance
    echo "creating glance user"
    keystone user-create --name=glance --pass=$GLANCE_PASS --email=$GLANCE_EMAIL
    keystone user-role-add --user=glance --tenant=service --role=admin
    echo "glance configuration"
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host $CONTROLLER
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
    openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password $GLANCE_PASS
    openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host $CONTROLLER
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
    openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password $GLANCE_PASS
    openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
    echo "Register the Image Service with the Identity service"
    keystone service-create --name=glance --type=image --description="OpenStack Image Service"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ image / {print $2}') \
      --publicurl=http://$CONTROLLER:9292 \
      --internalurl=http://$CONTROLLER:9292 \
      --adminurl=http://$CONTROLLER:9292
    echo "Start the glance-api and glance-registry services"
    service openstack-glance-api start
    service openstack-glance-registry start
    chkconfig openstack-glance-api on
    chkconfig openstack-glance-registry on
    echo "Testing image service"
    echo "Download the cloud image"
    wget -q -O /root/cirros-0.3.2-x86_64-disk.img
    echo "Upload the image to the Image Service"
    source /root/
    glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
    --container-format bare --is-public True \
    --progress  < /root/cirros-0.3.2-x86_64-disk.img
    echo "Install Compute controller services"
    yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
    source /root/
    echo "Configure compute database"
    openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova
    echo "configuration keys to configure Compute to use the Qpid message broker"
    openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
    openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER
    source /root/
    echo "Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options"
    echo "to the management interface IP address of the $CONTROLLER node"
    openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen $MY_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP
    echo "Create a nova database user"
    mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE nova;"
    mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';"
    mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';"
    echo "Create the Compute service tables"
    su -s /bin/sh -c "nova-manage db sync" nova
    echo "Create a nova user that Compute uses to authenticate with the Identity Service"
    keystone user-create --name=nova --pass=$NOVA_PASS --email=$NOVA_EMAIL
    keystone user-role-add --user=nova --tenant=service --role=admin
    echo "Configure Compute to use these credentials with the Identity Service running on the controller"
    openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS
    echo "Register Compute with the Identity Service"
    keystone service-create --name=nova --type=compute --description="OpenStack Compute"
    keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
      --publicurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
      --internalurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
      --adminurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s
    echo "Start Compute services and configure them to start when the system boots"
    service openstack-nova-api start
    service openstack-nova-cert start
    service openstack-nova-consoleauth start
    service openstack-nova-scheduler start
    service openstack-nova-conductor start
    service openstack-nova-novncproxy start
    chkconfig openstack-nova-api on
    chkconfig openstack-nova-cert on
    chkconfig openstack-nova-consoleauth on
    chkconfig openstack-nova-scheduler on
    chkconfig openstack-nova-conductor on
    chkconfig openstack-nova-novncproxy on  
    echo "To verify your configuration, list available images"
    echo "nova image-list"
    sleep 5
    source /root/
    nova image-list
    elif [ "$1" = "-post" ]; then
    #set variables used in the configuration
    source /root/
    ############OpenStack Networking start here##############
    echo "configure legacy networking"
    openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
    openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova 
    echo "Restart the Compute services"
    service openstack-nova-api restart
    service openstack-nova-scheduler restart
    service openstack-nova-conductor restart
    echo "Create the network"
    source /root/
    nova network-create vmnet --bridge br0 --multi-host T --fixed-range-v4 $RANGE
    echo "Verify creation of the network"
    nova net-list
    ############OpenStack Legacy ends##############
    echo "Install the dashboard"
    yum install -y mod_wsgi openstack-dashboard
    echo "Configure the OpenStack dashboard"
    # allow access to the dashboard from any host
    sed -i "s/^ALLOWED_HOSTS.*/ALLOWED_HOSTS = ['*']/" /etc/openstack-dashboard/local_settings
    echo "Start the Apache web server and memcached"
    service httpd start
    chkconfig httpd on
    fi

    Below is the script which configures compute node

    #configure compute script v4
    # Rohan Bhagat             ##################
    # Email:Me at ###############
    #set variables used in the configuration
    #NOTE: the assignment values below are placeholders -- replace them with your own
    NOVA_PASS=changeme               #Nova user pass
    NEUTRON_PASS=changeme            #NEUTRON user pass
    NOVA_DBPASS=changeme             #Nova db user pass
    CONTROLLER=controller            #FQDN for $CONTROLLER hostname or IP
    MY_IP=192.168.1.11               #IP of the compute node
    FLAT_INTERFACE=eth1              #flat network interface (placeholder; used below)
    PUB_INTERFACE=eth0               #public network interface (placeholder; used below)
    echo "============================================="
    echo "This installation script is based on OpenStack icehouse guide"
    echo "Found"
    echo "============================================="
    echo "============================================="
    echo "compute configuration started"
    echo "============================================="
    echo "Install the MySQL Python library"
    yum install -y MySQL-python
    echo "Install the Compute packages"
    yum install -y openstack-nova-compute openstack-utils
    echo "Edit the /etc/nova/nova.conf configuration file"
    openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova
    openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
    openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS
    echo "Configure the Compute service to use the Qpid message broker"
    openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
    openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER
    echo "Configure Compute to provide remote console access to instances"
    openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
    openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP
    openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://$CONTROLLER:6080/vnc_auto.html
    echo "Specify the host that runs the Image Service"
    openstack-config --set /etc/nova/nova.conf DEFAULT glance_host $CONTROLLER
    echo "Start the Compute service and its dependencies. Configure them to start automatically when the system boots"
    service libvirtd start
    service messagebus start
    service openstack-nova-compute start
    chkconfig libvirtd on
    chkconfig messagebus on
    chkconfig openstack-nova-compute on
    echo "kernel networking functions"
    perl -pi -e 's,net.ipv4.ip_forward = 0,net.ipv4.ip_forward = 1,' /etc/sysctl.conf
    perl -pi -e 's,net.ipv4.conf.default.rp_filter = 1,net.ipv4.conf.default.rp_filter = 0,' /etc/sysctl.conf
    echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
    sysctl -p
    echo "Install legacy networking components"
    yum install -y openstack-nova-network openstack-nova-api
    sleep 5
    echo "Configure legacy networking"
    openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
    openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
    openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
    openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver
    openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254
    openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False
    openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True
    openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True
    openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True
    openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True
    openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br0
    openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface $FLAT_INTERFACE
    openstack-config --set /etc/nova/nova.conf DEFAULT public_interface $PUB_INTERFACE
    echo "Start the services and configure them to start when the system boots"
    service openstack-nova-network start
    service openstack-nova-metadata-api start
    chkconfig openstack-nova-network on
    chkconfig openstack-nova-metadata-api on
    echo "Now restart networking"
    service network restart
    echo "Compute node configuration completed"
    echo "Now you can run -post on the controller node"
    echo "To complete the OpenStack configuration"
    Categories: DBA Blogs

    Oracle WebCenter Contract Lifecycle Management

    WebCenter Team - Wed, 2014-10-29 07:44
    Oracle WebCenter Contract Lifecycle Management

    Contracts rule B2B relationships. Whether you’re a growing mid-market company or a large-scale global organization, you need an effective system to manage surges in contract volumes and ensure accuracy in reporting. Contract Lifecycle Management (CLM) is the proactive, methodical management of a contract from initiation through award, compliance and renewal. Implementing CLM can lead to significant improvements in cost savings and efficiency. Also, CLM can help companies minimize liability and increase compliance with legal requirements.


    TekStream’s CLM software is built on Oracle’s industry-leading document management system, WebCenter Content, and is designed to seamlessly integrate with enterprise applications like JD Edwards, PeopleSoft and Oracle’s E-Business Suite (EBS). Combining Oracle’s enterprise-level applications with TekStream’s deep understanding of managing essential business information delivers a contract management tool powerful enough to facilitate even the most complex processes. TekStream’s solution tracks and manages all aspects of your contract work streams, from creation and approval to completion and expiration. Companies can rely on TekStream’s CLM to ensure compliance and close deals faster.



    • Centralized repository for all in-process and executed contracts. This ensures that users can quickly find documents “in-flight” or review decisions and details about contracts previously executed.
    • Increased efficiency through better control of the contract process.  By utilizing dynamic workflows based on the actual text of the contracts (and supporting documents), the review of in-flight contracts becomes more streamlined and targeted. Workflows allow you to review terms and clauses before they become a costly oversight; for example, targeted workflows ensure that the right people are reviewing the right information at the right time.
    • Support for “Evergreen” contracts help to improve contract renewal rates.  TekStream’s CLM notifies the correct parties when contracts are due for renewal/review and initiates the appropriate workflow streams.  Too many times, organizations fail to capitalize on opportunities to renew or improve existing contracts by missing key negotiation or renewal dates.
    • Improve compliance to regulations and standards by providing clear and concise reporting of procedures and controls. TekStream’s CLM also provides robust Records Management features including document controls for holds and freezes during litigation along with audit details of when documents are reviewed, archived, and destroyed.  The ability to accurately retrieve and report financial data like contracts, greatly reduces time, effort and cost during quarterly and annual audits.
    • Existing Process
      • 5 people x 4 hrs/contract x 2 contracts/day x $250/hour = $10,000/day
    • TekStream CLM
      • 5 people x 2 hrs/contract x 2 contracts/day x $250/hour = $5,000/day
    Learn more about TekStream's Oracle WebCenter Contract Lifecycle Management and join us for a webcast on Thursday, October 30 at 10:00am PT!

    Oracle Trivia Quiz

    Iggy Fernandez - Wed, 2014-10-29 07:36
    All the answers can be found in the November 2014 issue of the NoCOUG Journal. I am the editor of the NoCOUG Journal. What’s NoCOUG, you ask? Only the oldest and most active Oracle users group in the world. If you live in the San Francisco bay area and have never ever attended a NoCOUG […]
    Categories: DBA Blogs

    Oracle cloud control / SQL Details / Statistics

    Yann Neuhaus - Wed, 2014-10-29 06:06

    A question that I have had several times: in Enterprise Manager, on the screen about one SQL statement, the 'Statistics' tab shows the number of executions, elapsed time, etc. The question is: which time window do those numbers cover? There is a one-hour chart above, and two timestamps displayed as 'First Load Time' and 'Last Load Time', and we don't know which one is related to the execution statistics. I'll explain it clearly with an example.

    I'll check a query I have on my system which has several cursors, with two different execution plans. And I check from V$SQL, because that is where the most detailed information is, and the columns are well documented.

    From the documentation:

    • FIRST_LOAD_TIME is the Timestamp of the parent creation time
    • LAST_LOAD_TIME is the Time at which the query plan was loaded into the library cache

    It's clear that because V$SQL show information about child cursors, the FIRST_LOAD_TIME will be the same for all children.

    SQL> select sql_id,plan_hash_value,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='dcstr36r0vz0d' order by child_number;

    SQL_ID        PLAN_HASH_VALUE EXECUTIONS FIRST_LOAD_TIME     LAST_LOAD_TIME      LAST_ACTIVE_TIME
    ------------- --------------- ---------- ------------------- ------------------- -------------------
    dcstr36r0vz0d        17720163         60 2014-10-29/07:01:59 2014-10-29/07:01:59 2014-10-29/13:01:25
    dcstr36r0vz0d      3798950322        102 2014-10-29/07:01:59 2014-10-29/07:03:49 2014-10-29/13:05:54
    dcstr36r0vz0d      3798950322         24 2014-10-29/07:01:59 2014-10-29/07:05:55 2014-10-29/13:05:54
    dcstr36r0vz0d      3798950322          1 2014-10-29/07:01:59 2014-10-29/08:11:19 2014-10-29/08:11:19
    dcstr36r0vz0d      3798950322          1 2014-10-29/07:01:59 2014-10-29/08:29:34 2014-10-29/08:29:34

    The plan with hash value 17720163 has been executed 60 times since 07:01:59. It was the first child cursor (child_number=0) for that parent, so this is why FIRST_LOAD_TIME=LAST_LOAD_TIME

    And the plan with hash value 3798950322 has been executed 128 times since 07:03:49, by cursors that are not shared but have come to the same plan anyway.

    Two remarks:

    • FIRST_LOAD_TIME is the same for all children because it is a parent information
    • LAST_LOAD_TIME is different for each child, and that's important because Enterprise Manager doesn't show that detail, aggregating together the children with the same execution plan.
    Time to look at the Enterprise Manager screen. I'm talking about the 'Real Time' statistics:

    EMLastLoadTime1.png

    and I've selected the plan hash value 17720163:

    EMLastLoadTime2.png

    Ok. So we have 60 executions here. This matches the line in V$SQL. And we know that it is 60 executions since 07:01:59, because both timestamps are the same. No doubt here.

    Then, let's select the other plan hash value from the popup:

    EMLastLoadTime3.png

    128 executions for this plan. This is what we had when summing the lines from V$SQL. And look at the Shared Cursor Statistics. The number of 'Child Cursors' is 4, which is what we know. The 'First Load Time' is the one of the parent.

    However, what is the 'Last Load Time' when we know that there are 4 different values in V$SQL for it? Look, they chose the latest one, 08:29:34, and that's a good choice according to the name. It's the last load time.

    But what I want to know is the time from which the 128 executions are counted. And that should be the earliest one. In my example, we know from V$SQL that we had 128 executions since 07:03:49, but that timestamp is not displayed here.

    If you want a date, you should take the 'First Load Time', because it's true that there were 128 executions of cursors with that plan hash value since 07:01:59.

    Sometimes the first load time is very old, and it would be better to have the MIN(LAST_LOAD_TIME). But anyway, if we want better time detail, we can choose the 'Historical' view instead of the 'Real Time' one, and we have the numbers related to the AWR snapshots.

    Here is an example for the cursor with plan hash value 17720163:

    EMLastLoadTime4.png

    From the historical view, we select a timestamp, and we see the begin and end timestamps. Here I have 10 executions per hour.

    Everything looks good there, except that 'Child Cursors' is 5, which is for the whole statement and not only for the cursors selected by the plan hash value.

    I have two conclusions:
    • 'Last Load Time' is not useful for knowing the time window covered by the Real Time statistics. Use 'First Load Time' instead.
    • In case of any doubt, fall back to the V$ views, which are much better documented and give more detail.
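
    To make that fallback concrete, here is a sketch of a query (using the sql_id from the example above; adapt it to your own statement) that aggregates child cursors by plan the way Enterprise Manager does, but exposes MIN(LAST_LOAD_TIME) as the earliest time from which the summed executions for each plan have been counted:

    SQL> select plan_hash_value,
           sum(executions)       as executions,
           min(last_load_time)   as counting_since,
           max(last_active_time) as last_active_time
         from v$sql
         where sql_id = 'dcstr36r0vz0d'
         group by plan_hash_value
         order by plan_hash_value;

    For the data above, this would show plan 17720163 with 60 executions counted since 07:01:59, and plan 3798950322 with 128 executions counted since 07:03:49, the timestamp Enterprise Manager doesn't display.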

    OTN APAC Tour 2014 : It’s Nearly Here!

    Tim Hall - Wed, 2014-10-29 03:38

    In a little less than a week I start the OTN APAC Tour. This is where I’m going to be…

    • Perth, Australia : November 6-7
    • Shanghai, China : November 9
    • Tokyo, Japan : November 11-13
    • Beijing, China : November 14-15
    • Bangkok, Thailand : November 17
    • Auckland, New Zealand : November 19-21

    Just looking at that list is scary. When I look at the flight schedule I feel positively nauseous. I think I’m in Bangkok for about 24 hours. It’s sleep, conference, fly. :)

    After all these years you would think I would be used to it, but every time I plan a tour I go through the same sequence of events.

    • Someone asks me if I want to do the tour.
    • I say yes and agree to do all the dates.
    • They ask me if I am sure, because doing the whole tour is a bit stupid as it’s a killer and takes up a lot of time.
    • I say, no problem. It will be fine. I don’t like cherry-picking events as it makes me feel guilty, like I’m doing it for a holiday or something.
    • Everything is provisionally agreed.
    • I realise the magnitude of what I’ve agreed to and secretly hope I don’t get approval.
    • Approval comes through.
    • Mad panic for visas, flights and hotel bookings etc.
    • The tour starts and it’s madness for X number of days. On several occasions I will want to throw in the towel and get on a plane home, but someone else on the tour will provide sufficient counselling to keep me just on the right side of sane.
    • Tour finishes and although I’ve enjoyed it, I promise myself I will never do it again.

    With less than a week to go, I booked the last of my hotels this morning, so you can tell what stage I’m at now… :)

    I was reflecting on this last night and I think I know the reason I agree to these silly schedules. When I was a kid, only the “posh” kids did foreign holidays. You would come back from the summer break and people would talk about eating pasta on holiday and it seemed rather exotic. Somewhere in the back of my head I am still that kid and I don’t really believe any of these trips will ever happen, so I agree to anything. :)





    OTN APAC Tour 2014 : It’s Nearly Here! was first posted on October 29, 2014 at 10:38 am.
    ©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

    AIOUG annual Oracle conference - SANGAM14

    Syed Jaffar - Tue, 2014-10-28 23:43
All India Oracle User Group (AIOUG)'s annual Oracle conference, Sangam14, is less than 10 days away. This is the largest Oracle conference in India; it takes place every year in a different city, with thousands of attendees and over 100 different topics presented by Oracle experts from across the globe.

This year's SANGAM is scheduled for Nov 7-9 in Bangalore. Don't let the opportunity go in vain; grab it if you are in India. I am super excited about the conference and look forward to attending Tom Kyte's 'Optimizer master class', a full-day class, and also Maria's 'Oracle database in-memory option' session.

My sessions are as follows:

    For more details on agenda, speakers, enrollment, visit

    Look forward to seeing you in-person at the conference.

    Significant Milestone: First national study of OER adoption

    Michael Feldstein - Tue, 2014-10-28 22:02

For years we have heard anecdotes and case studies about OER adoption based on one (or a handful) of institutions. There are many things we think we know, but we have lacked hard data on the adoption process to back up assumptions that have significant policy and ed tech market implications.

The Babson Survey Research Group (BSRG) – the same one that administers the annual Survey of Online Learning – has released a survey of faculty titled “Opening the Curriculum” on the decision process and criteria for choosing teaching resources with an emphasis on Open Educational Resources (OER). While their funding from the Hewlett Foundation and from Pearson[1] is for the current survey only, there are proposals to continue the Faculty OER surveys annually to get the same type of longitudinal study that they provide for online learning.

While there will be other posts (including my own) that will cover the immediate findings of this survey, I think it would be worthwhile to first provide context on why this is a significant milestone. Most of the following background and author findings are based on my interview with Dr. Jeff Seaman, one of the two lead researchers and authors of the report (the other is Dr. I. Elaine Allen).


Three years ago, when the Survey of Online Learning was in its 9th iteration, the Hewlett Foundation approached BSRG about creating reports on OER adoption. Jeff did a meta study to see what data was already available and was disappointed with the results, so the group started to compile surveys and augment their own survey questionnaires.

    The first effort, titled Growing the Curriculum and published two years ago, was a combination of results derived from four separate studies. The section on Chief Academic Officers was “taken from the data collected for the 2011 Babson Survey Research Group’s online learning report”. This report was really a test of the survey methodology and types of questions that needed to be asked.

    The Hewlett Foundation is planning to develop an OER adoption dashboard, and there has been internal debate on what to measure and how. This process took some time, but once the groups came to agreement, the current survey was commissioned.

Pearson came in as a sponsor later in the process and provided additional resources to expand the scope of the survey, augment the questions to be asked, and help with infographics, marketing, and distribution.

    A key issue on OER adoption is that the primary decision-makers are faculty members. Thus the current study is based on responses from teaching faculty “(defined as having at least one course code associated with their records)”.

    A total of 2,144 faculty responded to the survey, representing the full range of higher education institutions (two-year, four-year, all Carnegie classifications, and public, private nonprofit, and for-profit) and the complete range of faculty (full- and part-time, tenured or not, and all disciplines). Almost three-quarters of the respondents report that they are full-time faculty members. Just under one-quarter teach online, and they are evenly split between male and female, and 28% have been teaching for 20 years or more.

    Internal Lessons

    I asked Jeff what his biggest lessons have been while analyzing the results. He replied that the key meta findings are the following:

    • We have had a lot of assumptions in place (e.g. faculty are primary decision-makers on OER adoption, cost is not a major driver of the decision), but we have not had hard data to back up these assumptions, at least beyond several case studies.
    • The decision process for faculty is not about OER – it is about selecting teaching resources. The focus of studies should be on this general resource selection process with OER as one of the key components rather than just asking about OER selection.

    Thus the best way to view this report is not to look for earth-shaking findings or to be disappointed if there are no surprises, but rather to see data-backed answers on the teaching resource adoption process.

    Most Surprising Finding

    Given this context, I pressed Jeff to answer what findings may have surprised him based on prior assumptions. The two answers are encouraging from an OER perspective.

    • Once you present OER to faculty, there is a real affinity and alignment of OER with faculty values. Jeff came away more impressed by the potential of OER than he had expected going in. Unlike other technology-based subjects of BSRG studies, there is almost no suspicion of OER. Everything else BSRG has measured has had strong minority views from faculty against the topic (online learning in particular), with incredible resentment detected. This resistance or resentment is just not there with OER. It is interesting that OER, with no organized marketing plan per se, faces no natural barriers in faculty perceptions.[2]
    • In the fundamental components of OER adoption – such as perceptions of quality and discoverability and currency – there is no significant difference between publisher-provided content and OER.

    Notes on Survey

    This is a valuable survey, and I would hope that BSRG succeeds in getting funding (hint, hint Hewlett and Pearson) to make this into an annual report with longitudinal data. Ideally the base demographics will increase in scope so that we get a better understanding of the unique data between institution types and program types. Currently the report separates 2-year and 4-year institutions, but it would be useful to compare 4-year public vs. private and even by program type (e.g. competency-based programs vs. gen ed vs. fully online traditional programs).

    There is much to commend in the appendices of this report – with basic data tables, survey methodology, and even the questionnaire itself. Too many survey reports neglect to include these basics.

    You can download the full report here. I’ll have more in terms of analysis of the specific findings in an upcoming post or two.


    1. Disclosure: Pearson is a client of MindWires Consulting – see this post for more details.
    2. It’s no bed of roses for OER, however, as the report documents issues such as lack of faculty awareness and the low priority placed on cost as a criterion in selecting teaching resources.

    The post Significant Milestone: First national study of OER adoption appeared first on e-Literate.

    Results of the NoCOUG SQL Mini-Challenge

    Iggy Fernandez - Tue, 2014-10-28 15:55
    As published in the November 2014 issue of the NoCOUG Journal The inventor of the relational model, Dr. Edgar Codd, was of the opinion that “[r]equesting data by its properties is far more natural than devising a particular algorithm or sequence of operations for its retrieval. Thus, a calculus-oriented language provides a good target language […]
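    The excerpt is cut off, but Codd's contrast between declarative and procedural retrieval can be sketched with a small, entirely hypothetical example (the table, column names, and data below are my own, not from the mini-challenge). The declarative query only states *which* rows we want; the engine chooses the retrieval strategy. The procedural version has to spell out the algorithm by hand:

    ```python
    import sqlite3

    # Hypothetical employees table, purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, sal INTEGER)")
    conn.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
        ("Ann", "IT", 90), ("Bob", "HR", 60), ("Eve", "IT", 120),
    ])

    # Declarative: request data by its properties.
    rows = conn.execute(
        "SELECT name FROM emp WHERE dept = 'IT' AND sal > 80 ORDER BY name"
    ).fetchall()
    print([r[0] for r in rows])  # ['Ann', 'Eve']

    # Procedural equivalent: we devise the sequence of operations ourselves.
    manual = sorted(n for (n, d, s) in conn.execute("SELECT * FROM emp")
                    if d == "IT" and s > 80)
    print(manual)  # ['Ann', 'Eve']
    ```

    Both produce the same answer, but only the first leaves the "how" to the optimizer, which is precisely what the mini-challenge is about.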
    Categories: DBA Blogs

    Google Glass, Android Wear, and Apple Watch

    Oracle AppsLab - Tue, 2014-10-28 15:43

    I have both Google Glass and Android Wear devices (a Samsung Gear Live and a Moto 360), and I often wear them together.  People always come up with the question: “How do you compare Google Glass and Android watches?”  Let me address a couple of viewpoints here.  I would like to talk about the Apple Watch, but since it has not been officially released yet, let’s just say that shape-wise it is square and looks like a Gear Live, and its features seem pretty similar to Android Wear’s, apart from the attempt to add more playful colors and features.  Let’s discuss it more once it is out.

    I was in the first batch of Google Glass Explorers and got my Glass in mid-2013.  In the middle of this year, I first got the Gear Live, then later the Moto 360.  I always find it peculiar that Glass is the older technology while Wear is the newer one.  Should it not be easier to design a smart watch before a glassware?

    I do find a lot of similarities between Glass and Wear.  The fundamental similarity is that both are Android devices.  They are voice-input enabled and show you notifications.  You can install additional Android applications to personalize your experience and maximize your usage.  I see these as the true values of wearables.

    Differences?  Glass has a lot of capabilities that Android Wear lacks at the moment.  The things that probably matter most to people are sound, phone calls, video recording, picture taking, the hands-free heads-up display, GPS, and Wi-Fi.  Unlike Android Wear, Glass can be used standalone; Android Wear is only a companion gadget and has to be paired with a phone.

    Is Glass superior, then?   Android Wear does provide better touch-based interaction compared to swiping on the side of the Glass frame.  You can also play simple games like Flopsy Droid on your watch.  Pedometers and heart-rate sensors are also commonly included.  Glass also tends to get overheated easily.  Water resistance plays a role here too: you would almost never want to get your Glass wet at all, while Android Wear is water-resistant to a certain degree.  And when you are charging your watch at night, it also serves as a bedside clock.


    For me, personally, although I have owned Glass longer than Wear, I have to say I prefer Android Wear over Glass for a couple of reasons.  First, there is the significant price gap ($1,500 vs. $200).  Second, especially once you add prescription lenses to Glass, it gets heavy and hurts the ear when worn for an extended period of time.  Third, I do not personally find the additional features offered by Glass useful in my daily activities;  I do not normally take pictures other than at specific moments or while I am traveling.

    I also find that even though Glass is now publicly available within the US, it is still perceived as an anti-social gadget.  The term is defined in the Urban Dictionary as well.  Most of the people I know who own Glass do not wear it themselves, for various reasons.  I believe improving the marketing and advertising strategy for Glass may help.

    Gadget preference is personal.  What’s yours?