
Feed aggregator

Google Glass + iBeacons

Oracle AppsLab - Thu, 2014-04-17 10:43

GlassBeacon

If you haven’t talked to me IRL in the past 10 months, then I haven’t pestered you about the wonders of BLE and micro-location. My love affair with BLE (Bluetooth Low Energy) beacons became clear when I heard at WWDC 2013 that Apple was implementing BLE beacon detection in their CoreLocation framework. Apple showed how a small BLE beacon sending a constant signal (UUID + Major + Minor *) at a given interval could enable what is now known as micro-location.

At the time I just happened to be experimenting with Wi-Fi and Bluetooth RSSI to accomplish similar results. I was prototyping a device that sniffed MAC addresses from surrounding devices and triggered certain interactions based on our enterprise software (CRM, HCM, etc.). You can find more on this topic in the white paper “How the Internet of Things Will Change the User Experience Status Quo” (sorry, but it’s not free) that I presented last year at the FiCloud conference.

The BLE beacon or iBeacon proved to be a better solution after all, given its user opt-in nature and low power consumption. Since then I have been prototyping different mobile apps using this technology. The latest of these is a Google Glass + iBeacon example (GitHub link: GlassBeacon). I’m claiming to be the first to do this implementation, since the ability to integrate BLE on Glass only became available on April 15, 2014 :).
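
For a flavor of what the detection side involves, here is a minimal sketch of scanning for an iBeacon advertisement and pulling out the UUID, Major, and Minor with the Android BLE APIs Glass gained at the time. This is not the GlassBeacon source; the class and method names are mine:

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.util.Log;

// Minimal iBeacon scan sketch (API 18-era BLE APIs).
public class BeaconScanner {

    // Apple's iBeacon layout inside the manufacturer-specific AD structure:
    // company ID 0x004C (little-endian), subtype 0x02, payload length 0x15.
    private static final byte[] IBEACON_PREFIX = {0x4C, 0x00, 0x02, 0x15};

    private final BluetoothAdapter.LeScanCallback callback =
            new BluetoothAdapter.LeScanCallback() {
                @Override
                public void onLeScan(BluetoothDevice device, int rssi, byte[] rec) {
                    int p = indexOf(rec, IBEACON_PREFIX);
                    if (p < 0) return;              // not an iBeacon advertisement
                    p += IBEACON_PREFIX.length;
                    String uuid = hex(rec, p, 16);  // UUID: distinguishes your beacons
                    int major = u16(rec, p + 16);   // Major: a related set of beacons
                    int minor = u16(rec, p + 18);   // Minor: one beacon within the set
                    Log.d("GlassBeacon", uuid + " " + major + "/" + minor + " rssi=" + rssi);
                }
            };

    public void startScanning() {
        BluetoothAdapter.getDefaultAdapter().startLeScan(callback);
    }

    private static int u16(byte[] b, int i) {
        return ((b[i] & 0xFF) << 8) | (b[i + 1] & 0xFF); // big-endian unsigned short
    }

    private static String hex(byte[] b, int off, int len) {
        StringBuilder sb = new StringBuilder();
        for (int i = off; i < off + len; i++) sb.append(String.format("%02X", b[i]));
        return sb.toString();
    }

    private static int indexOf(byte[] data, byte[] pat) {
        outer:
        for (int i = 0; i <= data.length - pat.length; i++) {
            for (int j = 0; j < pat.length; j++) {
                if (data[i + j] != pat[j]) continue outer;
            }
            return i;
        }
        return -1;
    }
}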

Stay tuned for more BLE beacon goodness. We will be showing more enterprise related use cases with this technology in the future.

* UUID: a unique ID to distinguish your beacons. Major: used to group related sets of beacons. Minor: used to identify a beacon within a group.

LogDump and Understanding header information in Oracle GoldenGate Trail Files

DBASolved - Thu, 2014-04-17 08:30

Replication of data is always a fun thing to look at: what is replicating?!  Discussions around how to get data from server/database A to server/database B, or even to server/database C, are valid questions often asked by management.  Often the simple (knee-jerk) answer is to just set it up and start replicating.  Although Oracle GoldenGate may be simple (for some architectures) to set up to meet the demands of management and the task at hand, problems will arise with the data being replicated.

When problems arise, the need to identify and resolve the replication issue becomes a critical and time-consuming task.  Oracle GoldenGate provides a few utilities to help in diagnosing and resolving replication issues.  One such utility is the LogDump utility.  The LogDump utility is used to read the local and remote trail files that support the continuous extraction and replication of transaction changes within the database.

Knowing what trail files are used for is part of the battle when troubleshooting replication issues with Oracle GoldenGate.  How do we use LogDump to read these trail files?  What are we looking for, or at, in a trail file to understand what is going on?  To answer these questions, we need to start the LogDump utility.

To start LogDump, we just need to be in the OGG_HOME and run the LogDump command.  The code below shows you how to run LogDump.


[oracle@oel oggcore_1]$ pwd
/oracle/app/product/12.1.2/oggcore_1
[oracle@oel oggcore_1]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 12.1.2.0.0 17185003 17451407

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.

 

Logdump 22 >

Note: Your LogDump session should start at 1, not 22 (Logdump 22); LogDump remembers session information until you log out of the server.

Once LogDump has been started, we need to open a trail file and set up how we want the information to be displayed.  Commands for LogDump can be displayed by using the “help” command.  In the following code block, we see that we are opening a local trail (lt) file and setting a few environment options.

Note: Trail files (local and remote) are normally prefixed with two (2) letters followed by a six (6)-digit string.  In new environments, trail files will start with (prefix)000000 (lt000000 or rt000000).


Logdump 15 >open ./dirdat/lt000000
Current LogTrail is /oracle/app/product/12.1.2/oggcore_1/dirdat/lt000000
Logdump 16 >ghdr on
Logdump 17 >detail on
Logdump 18 >detail data
Logdump 19 >usertoken on
Logdump 20 >

The “help” command inside of LogDump provides more options.  The options that we are using in this example are:

  • ghdr on = toggles the record header display on or off
  • detail on = toggles the detailed data display (on | off | data)
  • detail data = extends the detail display to include a hex and ASCII dump of the column data
  • usertoken on = shows user token information (on | off | detail)

With the LogDump environment set, we can now use the “next (n)” command to see the information in the trail file.


Logdump 20 > n

Once the header output is displayed, we need to understand how to read this information.  Image 1 provides a quick explanation of each major component within a trail file transaction.  We can see the following items for a transaction in the trail file (lt000000):

  • Header Area: Transaction information
  • Date/Time and type of transaction
  • Object associated with the transaction
  • Image of transaction (before/after)
  • Columns associated with the transaction
  • Transaction data formatted in Hex
  • Length of the record
  • ASCII  format of the data
  • Record position within the trail file (RBA)

Image 1: Header Information

At this point, we may be asking: why is this important?  Understanding the trail files, and how to find information within them, is an important part of troubleshooting an Oracle GoldenGate environment.

Example: if a replicat abends, we may need to restart the replicat from a given RBA.  Being able to identify the first, next, and last RBA in the trail file helps in understanding why the abend happened and in identifying a starting point for a successful restart.
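
For instance, to inspect the record at a known RBA and then reposition the replicat there, a session might look like this (the RBA, trail sequence number, and replicat name are all illustrative):

Logdump 21 >pos 1407
Reading forward from RBA 1407
Logdump 22 >n

GGSCI (oel) 1> ALTER REPLICAT rep1, EXTSEQNO 0, EXTRBA 1407
GGSCI (oel) 2> START REPLICAT rep1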

In the end, an Oracle GoldenGate environment can be simple and complex at the same time. Understanding the different components of the environment is very useful and well worth the time it takes to learn them.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate
Categories: DBA Blogs

General Availability: Simplified User Experience Design Patterns eBook

Usable Apps - Thu, 2014-04-17 08:01

The Oracle Applications User Experience team is delighted to announce that our Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook is available for free.

Simplified UI eBook

The Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook

We’re sharing the same user experience design patterns, and the supporting guidance on page types and Oracle ADF components, that Oracle uses to build simplified user interfaces (UIs) for the Oracle Sales Cloud and Oracle Human Capital Management (HCM) Cloud, so that you can build your own simplified UI solutions.

Click to register and download your free copy of the eBook.

Design patterns offer big wins for application builders because they are proven, reusable, and based on Oracle technology. They enable developers, partners, and customers to design and build the best user experiences consistently, shortening the application development cycle, boosting designer and developer productivity, and lowering the overall time and cost of building a great user experience.

Now, Oracle partners, customers, and the Oracle ADF community can share further in the Oracle Applications User Experience science and design expertise that brought the acclaimed simplified UIs to the Cloud, and they can build their own UIs simply and productively too!

Oracle Java Cloud Service - April 2014 Critical Patch Update

Oracle Security Team - Thu, 2014-04-17 07:59

Hi, this is Eric Maurice.

In addition to the release of the April 2014 Critical Patch Update, Oracle has also addressed the recently publicly disclosed issues in the Oracle Java Cloud Service.  Note that the combination of this announcement with the release of the April 2014 Critical Patch Update is not coincidental or the result of the unfortunate public disclosure of exploit code, but rather the result of the need to coordinate the release of related fixes for our on-premise customers. 

Shortly after issues were reported in the Oracle Java Cloud Service, Oracle determined that some of these issues were the result of certain security issues in Oracle products (though not Java SE), which are also licensed for traditional on-premise use.  As a result, Oracle addressed these issues in the Oracle Java Cloud Service, and scheduled the inclusion of related fixes in the following Critical Patch Updates upon completion of successful testing so as to avoid introducing regression issues in these products.

 

For more information:

The April 2014 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpuapr2014-1972952.html

More information about Oracle Software Security Assurance, including details about Oracle’s secure development and ongoing security assurance practices is located at http://www.oracle.com/us/support/assurance/overview/index.html

Log Buffer #367, A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2014-04-17 07:47

Log Buffer is globe-trotting this week from end to end. From every nook, it has brought you some sparkling gems of blog posts. Enjoy!!!

Oracle:

On April 16th, Oracle announced the Oracle Virtual Compute Appliance X4-2.

Do your Cross Currency Receipts fail Create Accounting?

Oracle Solaris 11.2 Launch in NY.

WebCenter Portal 11gR1 dot8 Bundle Patch 3 (11.1.1.8.3) Released.

What do Sigma, a Leadership class and a webcast have in common?

SQL Server:

Stairway to SQL Server Agent – Level 9: Understanding Jobs and Security.

SQL Server Hardware will provide the fundamental knowledge and resources you need to make intelligent decisions about choice, and optimal installation and configuration, of SQL Server hardware, operating system and the SQL Server RDBMS.

SQL Server 2014 In-Memory OLTP Dynamic Management Views.

Why every SQL Server installation should be a cluster.

SQL Server Backup Crib Sheet.

MySQL:

Looking for Slave Consistency: Say Yes to –read-only and No to SUPER and –slave-skip-errors.

More details on disk IO-bound, update only for MongoDB, TokuMX and InnoDB.

Making the MTR rpl suite GTID_MODE Agnostic.

Open Source Appreciation Day’ draws OpenStack, MySQL and CentOS faithful.

MongoDB, TokuMX and InnoDB for disk IO-bound, update-only by PK.

Categories: DBA Blogs

Deploying JAXWS to JCS?? Getting "java.lang.ClassNotFoundException: org.apache.xalan.processor.TransformerFactoryImpl" error

Angelo Santagata - Thu, 2014-04-17 06:21

Hey all,

  • Problem: deploying a JAX-WS application to Java Cloud Service fails with the java.lang.ClassNotFoundException shown in the title.
  • The issue: it's a bug in Java Cloud Service (bug#18241690); basically, JCS is picking up the wrong XSL transformer.
  • Solution: in your code, simply arrange for the following piece of Java code to execute when your application starts up:

System.setProperty("javax.xml.transform.TransformerFactory",
        "com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl");

 And all should be fine :-)
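
One convenient place to run that line exactly once is a ServletContextListener, so it fires before any requests arrive. A sketch (the class name is mine; on containers older than Servlet 3.0 you would register the listener in web.xml instead of using the annotation):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Runs once when the web application is deployed.
@WebListener
public class TransformerFactoryFix implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Force the JDK's built-in XSLT implementation instead of the
        // org.apache.xalan one that JCS resolves by mistake (bug#18241690).
        System.setProperty("javax.xml.transform.TransformerFactory",
                "com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl");
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Nothing to clean up.
    }
}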


How to start the Internet of Things adventure with Java

Looking for new challenges? Want to get back to the roots of Computer Science? How about starting to explore the Internet of Things? No doubt it is one of the fastest-growing areas of IT and an...

We share our skills to maximize your revenue!
Categories: DBA Blogs

OSP #3a: Build a Standard Cluster Platform

Jeremy Schneider - Thu, 2014-04-17 05:15

This is the fifth article in a series called Operationally Scalable Practices. The first article gives an introduction and the second article contains a general overview. In short, this series suggests a comprehensive and cogent blueprint to best position organizations and DBAs for growth.

We’ve looked in some depth at the process of defining a standard platform with an eye toward Oracle database use cases. Before moving on, it would be worthwhile to briefly touch on clustering.

Most organizations should hold off as long as possible before bringing clusters into their infrastructure. Clusters introduce a very significant new level of complexity. They will immediately drive some very expensive training and/or hiring demands – in addition to the already-expensive software licenses and maintenance fees. There will also be new development and engineering needed – perhaps even within application code itself – to support running your apps on clusters. In some industries, clusters have been very well marketed and many small-to-medium companies have made premature deployments. (Admittedly, my advice to hold off is partly a reaction to this.)

When Clustering is Right

Nonetheless there definitely comes a point where clustering is the right move. There are four basic goals that drive cluster adoption:

  1. Parallel or distributed processing
  2. Fault tolerance
  3. Incremental growth
  4. Pooled resources for better utilization

I want to point out immediately that RAC is just one of many ways to do clustering. Clustering can be done at many tiers (platform, database, application) and if you define it loosely, then even an Oracle database can be clustered in a number of ways.

Distributed Processing

Stop for a moment and re-read the list of goals above. If you wanted to design a system to meet these goals, what technology would you use? I already suggested clusters – but that might not have been what came to your mind first. How about grid computing? I once worked with some researchers in Illinois who wrote programs to simulate protein folding and DNA sequencing. They used the Illinois BioGrid – composed of servers and clusters managed independently by three different universities across the state. How about cloud computing? The Obama Campaign in 2008 used EC2 to build their volunteer logistics and coordination platforms to dramatically scale up and down very rapidly on demand. According to the book In Search of Clusters by Gregory Pfister, these four reasons are the main drivers for clustering – but if they also apply to grids and clouds, then what’s the difference? Doesn’t it all accomplish the same thing?

In fact the exact definition of “clustering” can be a little vague and there is a lot of overlap between clouds, grids, clusters – and simple groups of servers with strong & mature standards. In some cases these terms might be more interchangeable than you would expect. Nonetheless there are some general conventions. Here is what I have observed:

CLUSTER: Old term. Most strongly implies shared hardware resources of some kind, tight coupling and physical proximity of servers, and treatment of the group as a single unit for execution of tasks. While some level of single system image is presented to clients, each server may be individually administered; strong standards are desirable but not always implied.

GRID: Medium-aged term. Implies looser coupling of servers, geographic dispersion, and perhaps cross-organizational ownership and administration. There will not be grid-wide standards for node configuration; individual nodes may be independently administered. The grid may be composed of multiple clusters. Strong standards do exist at a high level for management of jobs and inter-node communication. Or, alternatively, the term “grid” may more loosely imply a group of servers where nodes/resources and jobs/services can easily be relocated as workload varies.

CLOUD: New term. Implies service-based abstraction, virtualization and automation. It is extremely standardized with a bias toward enforcement through automation rather than policy. Servers are generally single-organization, however service consumers are often external. Related to the term “utility computing” or the “as a service” terms (Software/SaaS, Platform/PaaS, Database/DaaS, Infrastructure/IaaS). Or, alternatively, may (like “grid”) more loosely imply a group of servers where nodes/resources and jobs/services can easily be relocated as workload varies.

Google Trends for Computers and Electronics Category

These days, the distributed processing field is a very exciting place because the technology is advancing rapidly on all fronts. Traditional relational databases are dealing with increasingly massive data volumes, and big data technology combined with pay-as-you-go cloud platforms and mature automation toolkits have given bootstrapped startups unforeseen access to extremely large-scale data processing.

Building for Distributed Processing

Your business probably does not have big data. But the business case for some level of distributed processing will probably find you eventually. As I pointed out before, the standards and driving principles at very large organizations can benefit your commodity servers right now and eliminate many growing pains down the road.

In the second half of this article I will take a look at how this specifically applies to clustered Oracle databases. But I’m curious, are your server build standards ready for distributed processing? Could they accommodate clustering, grids or clouds? What kinds of standards do you think are most important to be ready for distributed processing?

Webcast: Database Cloning in Minutes using Oracle Enterprise Manager 12c Database as a Service Snap Clone

Pankaj Chandiramani - Thu, 2014-04-17 04:02

Since the demands from the business for IT services are non-stop, creating copies of production databases in order to develop, test, and deploy new applications can be labor-intensive and time-consuming. Users may also need to preserve private copies of the database, so that they can go back to a point prior to when a change was made in order to diagnose potential issues. Using Snap Clone, users can create multiple snapshots of the database and “time travel” across these snapshots to access data from any point in time.


Join us for an in-depth technical webcast and learn how Snap Clone, a capability of the Oracle Cloud Management Pack for Oracle Database, can fundamentally improve the efficiency and agility of administrators and QA engineers while saving CAPEX on storage. Benefits include:

  • Agile provisioning (~2 minutes to provision a 1 TB database)
  • Over 90% storage savings
  • Reduced administrative overhead from integrated lifecycle management

Register Now!


April 24 — 10:00 a.m. PT | 1:00 p.m. ET

May 8 — 7:00 a.m. PT | 10:00 a.m. ET | 4:00 p.m. CET

May 22 — 10:00 a.m. PT | 1:00 p.m. ET





Categories: DBA Blogs

The Drive To Visualize Data: Dashboards

Usable Apps - Thu, 2014-04-17 03:03

Introduction: Cars and Context

Like many people of a certain age, my first exposure to the term dashboard was when I heard my dad using it when driving the car. He referred to it as “the dash”.

Dad’s “dash” was an analog affair that told him the car’s speed, the miles traveled, the engine oil level and temperature, if he had enough gas in the tank, and a few other little bits of basic information. It was all whirring dials, trembling needle pointers on clock-style faces, switches to toggle on and off, a couple of sliders, and little lights that blinked when there was trouble.

Drivers in those days needed to pay attention, all the time, to their dashboards.

Ford dashboard from the 1970s

Old school car dashboards: quaint and charming. And a lot of work. (Source: WikiMedia Commons)

Dashboards in cars, and how drivers use them, are different now. The days of a dashboard with switches to flick or dials to turn are gone.

Today, a family car generates hundreds of megabytes of data every second. Most of this data is discarded immediately, and is not useful to the driver, but some is and may even be life saving. Technology makes sense of the surging data so that drivers can respond easily to important information because it’s presented to them in a timely, easily consumed, and actionable way.

Car dashboards are now closer to the “glass cockpit” world that fighter jet pilots experience. Cars have tiny sensors, even cameras, and other technology inside and outside the vehicle that detect and serve up striking digital visualizations about the health of the car and driver performance. Drivers are empowered to be “situationally aware” about what’s going on (what us UXers would call “context”), as they listen to or watch for signals and cues and respond to them naturally, using voice, for example.

Some car dashboards even use heads-up displays, projecting real-time information onto the windshield. Drivers know what’s going on with their car without taking their eyes off the road.

Chevrolet Camaro Heads-up Display

Chevrolet Corvette Heads-up Display (Source: www.chevrolet.com)

Dashboard design itself is now the essence of simplicity and cutting-edge technology, and stylish with it too, arousing passions about what makes a great interface inside a car. It’s all part of creating an experience to engage drivers for competitive advantage in a tight automobile market.

Tesla Model S Dashboard

Tesla Model S Dashboard (Source: www.teslamotors.com)

The Emergence of the Digital Dashboard User Experience

When it comes to software applications and websites, dashboards are around us everywhere too. We’re all long familiar with how such dashboards work and how to use them, beginning with the pioneering My Yahoo! portal that popularized the use of the “My” pronoun in web page titles, right through to today’s wearable apps dashboards that are a meisterwerk of information visualization, integrating social media and gamification along the way.

Fitbit Dashboard (Author's own)

FitBit Dashboard (Source: Author)

An enterprise application dashboard is a one-stop shop of information. It’s a page made up of portlets or regions, chunking up related information into displays of graphs, charts, and graphics of different kinds. Dashboards visualize a breadth of information that spans a whole range of activities in a functional area.

Dashboards aggregate data into meaningful visual displays and cues, using processor horsepower at the backend to do the work that users used to do with notepads, calculators or spreadsheets to find out what’s changed or is in need of attention.

Dashboards enable users to prioritize work and to manage exceptions by taking lightweight actions immediately from the page, or to drill down to explore and do more in a transactional or analytics work area, if necessary.

The dashboard concept remains a core part of the enterprise applications user experience, particularly for work roles that rely on monitoring of information, providing reports on performance, or needing a range of information to make well-timed and high-level decisions.

Developing Dashboards

In work, we now also have to deal with that other torrent of data we hear about: big data. Dashboards are ideal ways to make sense of this data and to represent the implications of its analysis to a viewer, bringing insight to users rather than the other way around.

To this end, Oracle provides enterprise application developers with the Oracle ADF Data Visualization Tools (DVT) components to build dashboards using data in the cloud, and with design guidance in the form of the Oracle Fusion Applications, Oracle Endeca and Oracle Business Intelligence Enterprise Edition UI patterns and guidelines for making great-looking dashboards.

Fusion Apps Desktop UI Dashboard

Typical Oracle Fusion Applications Desktop UI Dashboard (Source: Oracle)

Beyond Desktop Dashboards…

Dashboards’ origins as a desktop UI concept obviously predated the “swipe and pinch” world of mobility, today’s cross-device, flexible way of working with shared data in the cloud. Sure, we still have a need for what dashboards were originally about. But we now need new ways for big data to be organized and visualized. We need solutions that reflect our changing work situations (our context) so that we can act on the information quickly, using a tablet or a smart phone, or whatever’s optimal. And we need new ways of describing this dashboard user experience.

Enter the era of “glance, scan, and commit”, a concept that we will explore in a future Usable Apps blog.

MongoDB is growing up

DBMS2 - Thu, 2014-04-17 02:56

I caught up with my clients at MongoDB to discuss the recent MongoDB 2.6, along with some new statements of direction. The biggest takeaway is that the MongoDB product, along with the associated MMS (MongoDB Management Service), is growing up. Aspects include:

  • An actual automation and management user interface, as opposed to the current management style, which is almost entirely via scripts (except for the monitoring UI).
    • That’s scheduled for public beta in May, and general availability later this year.
    • It will include some kind of integrated provisioning with VMware, OpenStack, et al.
    • One goal is to let you apply database changes, software upgrades, etc. without taking the cluster down.
  • A reasonable backup strategy.
    • A snapshot copy is made of the database.
    • A copy of the log is streamed somewhere.
    • Periodically — the default seems to be 6 hours — the log is applied to create a new current snapshot.
    • For point-in-time recovery, you take the last snapshot prior to the point, and roll forward to the desired point.
  • A reasonable locking strategy!
    • Document-level locking is all-but-promised for MongoDB 2.8.
    • That means what it sounds like. (I mention this because sometimes an XML database winds up being one big document, which leads to confusing conversations about what’s going on.)
  • Security. My eyes glaze over at the details, but several major buzzwords have been checked off.
  • A general code rewrite to allow for (more) rapid addition of future features.

Of course, when a DBMS vendor rewrites its code, that’s a multi-year process. (I think of it at Oracle as spanning 6 years and 2 main-number releases.) With that caveat, the MongoDB rewrite story is something like:

  • Updating has been reworked. Most of the benefits are coming later.
  • Query optimization and execution have been reworked. Most of the benefits are coming later, except that …
  • … you can now directly filter on multiple indexes in one query; previously you could only simulate doing that by pre-building a compound index.
  • One of those future benefits is more index types, for example R-trees or inverted lists.
  • Concurrency improvements are down the road.
  • So are rewrites of the storage layer, including the introduction of compression.

Also, you can now straightforwardly transform data in a MongoDB database and write it into new datasets, something that evidently wasn’t easy to do before.

One thing that MongoDB is not doing is offering any ODBC/JDBC or other SQL interface. Rather, there’s some other API — I don’t know the details — whereby business intelligence tools or other systems can extract views, and a few BI vendors evidently are doing just that. In particular, MicroStrategy and QlikView were named, as well as a couple of open-source usual suspects.

As of 2.6, MongoDB seems to have a basic integrated text search capability — which however does not rise to the search functionality level that was in Oracle 7.3.2. In particular:

  • 15 Western languages are supported with stopwords, tokenization, etc.
  • Search predicates can be mixed into MongoDB queries (see the sketch after this list).
  • The search language isn’t very rich; for example, it lacks WHERE NEAR semantics.
  • You can’t tweak the lexicon yourself.
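
To make the query mixing concrete, here is roughly what a combined field-plus-text predicate looks like from a 2.6-era Java driver; the database, collection, and field names are invented:

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.MongoClient;

public class TextSearchSketch {
    public static void main(String[] args) throws Exception {
        MongoClient client = new MongoClient("localhost", 27017);
        DBCollection articles = client.getDB("test").getCollection("articles");

        // Requires a text index, e.g.:
        // articles.createIndex(new BasicDBObject("body", "text"));
        // A $text predicate combines with ordinary field predicates in one query.
        BasicDBObject query = new BasicDBObject("status", "published")
                .append("$text", new BasicDBObject("$search", "mongodb backup"));

        DBCursor cursor = articles.find(query);
        try {
            while (cursor.hasNext()) {
                System.out.println(cursor.next());
            }
        } finally {
            cursor.close();
        }
        client.close();
    }
}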

And finally, some business and pricing notes:

  • Two big aspects of the paid-versus-free version of MongoDB (the product line) are:
    • Security.
    • Management tools.
  • Well, actually, you can get the management tools for free, but only on a SaaS basis from MongoDB (the company).
    • If you want them on premises or in your part of the cloud, you need to pay.
    • If you want MongoDB (the company) to maintain your backups for you, you need to pay.
  • Customer counts include:
    • At least 1000 or so subscribers (counting by organization).
    • Over 500 (additional?) customers for remote backup.
    • 30 of the Fortune 100.

And finally, MongoDB did something many companies should, which is aggregate user success stories for which they may not be allowed to publish full details. Tidbits include:

  • Over 100 organizations run clusters with more than 100 nodes. Some clusters exceed 1,000 nodes.
  • Many clusters deliver hundreds of thousands of operations per second (combined read and write).
  • MongoDB clusters routinely store hundreds of terabytes, and some store multiple petabytes of data. Over 150 clusters exceed 1 billion documents in size. Many manage more than 100 billion documents.
Categories: Other

Twilio: Democratizing Communications to Build a Better User Experience in the Oracle Cloud

Usable Apps - Thu, 2014-04-17 02:11

Oracle has a powerful partner ecosystem in the Oracle Cloud, adding value to our applications in many areas. Enabling partners to integrate with our cloud applications is key to Oracle’s “Extending SaaS through PaaS” approach. Sharing our expertise with partners, which helps them build a great user experience (UX) productively, is a major driver of Oracle Applications User Experience (OAUX) outreach.

One of the latest additions to the Oracle PartnerNetwork is the very cool and happening Twilio. Followers of the AppsLab know the OAUX team loves exploring the UX possibilities of Twilio-based voice and SMS integrations. I took a trip to Twilio's San Francisco HQ to meet David Wacker (@dlwacker) of Twilio Channel Sales and Partnerships and find out more about the whys and hows of integrating in the cloud and simplifying user experience...

Being in the cloud offers the potential to make a major difference with a superior UX. The days of cumbersome, on-premise installations and horrible UX are gone. Now scalable, cloud-based applications, customizable and reflecting each customer’s business, are changing the UX across datacenter management, CRM, marketing automation, and ERP, all driven through how we power communications.

Twilio is a cloud-based communications platform that offers a powerful, open API for building communications applications, what Twilio refers to as "democratizing access" to communication in a traditionally complex and expensive world of telephony.

Using Twilio, developers can easily access the means to create robust communications integrations, fundamentally changing the UX landscape for applications users in the cloud. Twilio’s open API framework means developers can utilize prebuilt solutions in the Oracle Marketing Cloud, Oracle Service Cloud, and Oracle Sales Cloud. Developers can build such UX integrations productively, without the cost and effort normally associated with such projects.

David pointed out a few ways that Twilio enhances the user experience for Oracle applications users across the Oracle Marketing Cloud, Oracle Service Cloud, and Oracle Sales Cloud.

Twilio’s seamless integration with the Oracle Marketing Cloud (Eloqua) means that users can just drag and drop the Twilio Cloud Connector onto a marketing campaign canvas to provide for outbound SMS, MMS (multimedia messaging), and voice calls. This delivers a great multichannel user experience, such as mobile marketing campaigns with pictures or QR coupon codes.

Twilio Cloud Connector

Dragging the Twilio Cloud Connector onto a campaign canvas easily adds Twilio SMS, MMS, and voice to marketing campaigns.

Twilio's embedding of SMS and voice capabilities right into the Oracle Service Cloud (RightNow) means a superior customer experience built in a scalable, flexible way. A service agent can use click-to-call to phone an end customer, automatically creating the event on their system and then recording the call, for example. An SMS capability can also enable customers to chat with service agents using SMS on their phones instead of web chat, if preferred, and more.

Twilio Click-to-Call

Click-to-call for customer engagement, which allows customers to call inbound more effectively.

Twilio's integration into the Oracle Sales Cloud drives efficiency by simplifying the UX. Twilio uses the Oracle Sales Cloud native CTI toolbar to track and record phone calls and allows for seamless conference calls, all integrated to drive sales productivity. For example, a sales rep can use Twilio-powered click-to-call, automated dialing, or conference-line bridges to contact opportunities, creating events and logging activities easily within the Oracle Sales Cloud.

Twilio integrated with Oracle Sales Cloud

Computer Telephony Integration (CTI) toolbar for easy access to inbound and outbound dialing in Oracle Sales Cloud powered by Twilio.
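
The prebuilt connectors hide the plumbing, but under the hood a click-to-call like this amounts to one authenticated POST to Twilio's REST API, which then fetches call instructions (TwiML) from a URL you supply. A minimal standalone sketch, not the connector code; the credentials, phone numbers, and TwiML URL are placeholders:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import javax.xml.bind.DatatypeConverter;

public class ClickToCallSketch {
    public static void main(String[] args) throws Exception {
        String sid = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"; // placeholder Account SID
        String token = "your_auth_token";                   // placeholder auth token

        // Calls.json creates an outbound call from the agent's number to the
        // customer; Twilio fetches TwiML from Url once the call connects.
        URL url = new URL("https://api.twilio.com/2010-04-01/Accounts/" + sid + "/Calls.json");
        String body = "To=" + URLEncoder.encode("+15551234567", "UTF-8")
                + "&From=" + URLEncoder.encode("+15557654321", "UTF-8")
                + "&Url=" + URLEncoder.encode("https://example.com/twiml/agent-connect", "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "Basic "
                + DatatypeConverter.printBase64Binary((sid + ":" + token).getBytes("UTF-8")));
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();

        System.out.println("HTTP " + conn.getResponseCode()); // 201 Created on success
    }
}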

David tells me that “Twilio’s integration possibilities are endless. That's the best part about working with developers in the Twilio and Oracle communities; finding new ways to solve user problems, unconstrained by technology or traditional project limitations. I’m excited to explore new and unique ways that the Oracle developer community and Twilio can change the UX landscape in the Oracle Cloud.”

Those are some great UX insights from David, and there are more to come. The OAUX team will be working with Twilio over the coming months, so stay tuned to your usual outreach and communications channels for news and events.

Twilio is also exhibiting at, and sponsoring, Oracle CloudWorld in Chicago on Thursday, April 17, 2014. Stop by the Twilio booth to learn more (or to just say, Hi!), and give the Usable Apps blog a shout-out.

Business professionals consider moving to Office 365

Chris Foot - Thu, 2014-04-17 02:00

Many executives favor Microsoft products over competing software. Since its inception, the corporation has established itself as a developer of business-standard technology, with millions of subscribers distributed throughout the world. Due to recent improvements spearheaded by new CEO Satya Nadella, many organizations previously unfamiliar with the company's products are implementing Microsoft solutions with the help of database administration services.

Releasing a more affordable product 
Pete Pachal, a contributor to technology blog Mashable, noted that Microsoft began selling Office 365 Personal earlier this week for $6.99 a month, accommodating subscribers with applications such as Word, Excel, PowerPoint and Outlook, among others. In contrast to the solution's counterpart, Office 365 Home, Personal only allows users to install the program on a single PC or Mac. However, the offer makes sense for enterprises working primarily with such machines. 

Personal's integration with Microsoft's cloud solution, OneDrive, enables employees to share, store and edit files seamlessly. As this process expedites business operations, senior-level management may consider Office 365 to be a viable option for satisfying the needs of their departments. For those looking to abandon products manufactured by Microsoft's competitors, however, the transition may be easier said than done.

Steps for migration 
Moving a large variety of email into Office 365 may require the assistance of database administration professionals. According to InfoWorld contributor Peter Bruzzese, corporations need to consider what information should be transitioned into Outlook, where that data is stored and whether or not it will be manipulated after all digital intelligence is successfully relocated. In order to ensure a smooth transfer, Bruzzese recommended making the following considerations:

  • Perform a preparatory review of all messaging needs and orchestrate a plan that will supplement those requirements.
  • If a company is migrating from Exchange, database support services can help it transfer all on-premises data into the cloud through Exchange Web Services, which allows users to export 400 GB a day.
  • Those relocating data from Google, Network File Systems or Notes should consider using Archive360, which can filter data through Exchange and then transfer it into Office 365.
  • Companies transitioning email data from GroupWise could find solace in funneling the information through Mimecast and connecting the storage with Office 365 mailboxes.

Obviously, a command of certain programs is required, depending on what kind of route an organization chooses. For this reason, consulting database experts may be the best option. 

Indexing Foreign Key Constraints With Bitmap Indexes (Locked Out)

Richard Foote - Thu, 2014-04-17 01:29
Franck Pachot made a very valid comment on my previous entry on Indexing Foreign Keys (FK): the use of a Bitmap Index on the FK columns does not avoid the table locks associated with deleting rows from the parent table. Thought I might discuss why this is the case and why only a B-Tree index does […]
Categories: DBA Blogs

SQL Developer’s Interface for GIT: Interacting with a GitHub Repository Part 2

Galo Balda's Blog - Wed, 2014-04-16 17:46

In this post I’m going to show how to synchronize the remote and local repositories after an existing file in the local repository gets modified. What I’ll do is modify the sp_test_git.pls file in our local repository and then push those changes to the remote repository (GitHub).

First, I proceed to open the sp_test_git.pls file using SQL Developer, add another dbms_output line to it and save it. The moment I save the file, the Pending Changes (Git) window gets updated to reflect the change and the icons in the toolbar get enabled.

modify_file

Now I can include a comment and then add the file to the staging area by clicking on the Add button located on the Pending Changes (Git) window. Notice how the status changes from “Modified Not Staged” to “Modified Staged”.

staged_file

What if I want to compare versions before doing a commit to the local repository? I just have to click on the Compare with Previous Version icon located on the Pending Changes (Git) window.

compare2

The panel on the left displays the version stored in the local repository and the panel on the right displays the version in the Staging Area.

The next step is to commit the changes to the local repository. For that I click on the Commit button located on the Pending Changes (Git) window and then I click on the OK button in the Commit window.

commit

Now the Branch Compare window displays information telling that remote and local are out of sync.

branch_compare2

So the final step is to sync up remote and local by pushing the changes to GitHub. For that I go to the main menu and click on Team -> Git -> Push to open the “Push to Git” wizard, where I enter the URL for the remote repository and the user name and password to complete the operation. Then I go to GitHub to confirm the changes have been applied.

updated_github
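
For reference, the same workflow from a command line would look roughly like this (remote and branch names assumed; SQL Developer performs the equivalent of these steps behind the scenes):

git add sp_test_git.pls                       # stage the modified file
git diff --cached sp_test_git.pls             # compare the staged version with the last commit
git commit -m "Add another dbms_output line"  # commit to the local repository
git push origin master                        # push the change to GitHub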


Filed under: GIT, SQL Developer, Version Control Tagged: GIT, SQL Developer, Version Control

Categories: DBA Blogs

Online Chat Available for Existing Service Requests

Joshua Solomin - Wed, 2014-04-16 16:16

An online chat session can often answer a question or clarify a situation quickly.

My Oracle Support now offers a new chat feature that enables Oracle Support engineers to contact you instantly online to discuss an open Service Request—to ask a question, share detailed commands and troubleshooting information, or confirm that your issue is resolved.

Chat

You always control your availability for an online chat. When you are involved in critical projects or meetings, set your status to “Not Available” and the engineer will contact you using your preferred method. Keeping yourself in the “Available” status lets your Support engineer know when you are online and available for a chat about your Service Request.

If you receive a chat request from a Support engineer, you can decide to accept the chat, request a different time for the chat, or decline the chat.

Find out more: watch a short video demonstration and read additional details.

KeePass 2.26 Released

Tim Hall - Wed, 2014-04-16 16:08

KeePass 2.26 has recently been released. I would suggest going with the portable version, which is an unzip and go application.

If you want to know how I use KeePass, check out my article called Adventures with Dropbox and KeePass.

Cheers

Tim…


MobaXterm 7.1 Released

Tim Hall - Wed, 2014-04-16 16:03

If you are using a Windows desktop, you need MobaXterm in your life! Version 7.1 has recently been released…

I know you think you can’t live without Putty, Cygwin and/or Xming, but you really can. Give MobaXterm a go and I would be extremely surprised if you ever go back to that rag-tag bunch of apps…

Cheers

Tim…

PS. Includes “Updated OpenSSL library to 1.0.1g (for “Heartbleed Bug” correction)”


ORA-00600 [3631] recovering pluggable database after flashback database in Oracle 12c

Bobby Durrett's DBA Blog - Wed, 2014-04-16 15:44

I was trying to recreate the scenario where a 12c container database is flashed back to an SCN before the point to which I had earlier recovered a pluggable database using point-in-time recovery.
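
For context, the setup looked roughly like this (SCN values and the auxiliary destination are illustrative): first a point-in-time recovery of the PDB, then a flashback of the whole CDB to an earlier SCN, after which the recover shown below failed.

RMAN> run {
2> set until scn 1850000;
3> restore pluggable database pdborcl;
4> recover pluggable database pdborcl auxiliary destination '/tmp/aux';
5> }
RMAN> alter pluggable database pdborcl open resetlogs;

SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to scn 1800000;
SQL> alter database open resetlogs;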

I got this ugly ORA-00600:

RMAN> recover pluggable database pdborcl;

Starting recover at 16-APR-14
using channel ORA_DISK_1

starting media recovery
media recovery failed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/16/2014 06:07:40
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover if needed
 datafile 32 , 33 , 34 , 35
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3631], [32], [4096], [4210689], [], [], [], [], [], [], [], []

I think the above error message stems from this bug:

Bug 14536110  ORA-600 [ktfaput: wrong pdb] / crash using PDB and FDA

There may have been some clever way to recover from this, but I ended up just deleting and recreating the CDB through DBCA, which was good experience playing with DBCA in Oracle 12c.  I’m trying to learn 12c, but I have a feeling that I have hit a bug that keeps me from testing this flashback database plus pluggable database point-in-time recovery scenario.  I wonder if I should patch?  I think that Oracle has included a fix for this bug in a patch set.  It could be good 12c experience to apply a patch set.

- Bobby

Categories: DBA Blogs