Skip navigation.

Feed aggregator

Oracle Database 12.1.0.2.0 – Native JSON Support (1)

Marco Gralike - 5 hours 8 min ago
Oracle Database 12.1.0.2 now has native built-in support for handling JSON (JavaScript Object Notation) data. Oracle Database supports JSON natively with relational database features, including...

Read More

Recurring Conversations: AWR Intervals (Part 2)

Doug Burns - 8 hours 25 min ago

(Reminder, just in case we still need it, that the use of features in this post requires a Diagnostics Pack license.)

Damn me for taking so long to write blog posts these days. By the time I get around to them, certain very knowledgeable people have commented on part 1 and given the game away! ;-)

I finished the last part by suggesting that a narrow AWR interval makes less sense in a post-10g Diagnostics Pack landscape than it used to when we used Statspack.

Why do people argue for a Statspack/AWR interval of 15 or 30 minutes on important systems? Because when they encounter a performance problem that is happening right now or didn’t last for very long in the past, they can drill into a more narrow period of time in an attempt to improve the quality of the data available to them and any analysis based on it. (As an aside, I’m sure most of us have generated additional Statspack/AWR snapshots manually to *really* reduce the time scope to what is happening right now on the system, although this is not very smart if you’re using AWR and Adaptive Thresholds!)

However, there are better tools for the job these days.

If I have a user complaining about system performance then I would ideally want to narrow down the scope of the performance metrics to that user’s activity over the period of time they’re experiencing a slow-down. That can be a little difficult on modern systems that use complex connection pools, though. Which session should I trace? How do I capture what has already happened as well as what’s happening right now? Fortunately, if I’ve already paid for Diagnostics Pack then I have *Active Session History* at my disposal, constantly recording snapshots of information for all active sessions. In which case, why not look at

- The session or sessions of interest (which could also be *all* active sessions if I suspect a system-wide issue)
- For the short period of time I’m interested in
- To see what they’re actually doing

Rather than running a system-wide report for a 15 minute interval that aggregates the data I’m interested in with other irrelevant data? (To say nothing of having to wait for the next AWR snapshot or take a manual one and screwing up the regular AWR intervals ...)

When analysing system performance, it’s important to use the most appropriate tool for the job and, in particular, focus your data collection on what is *relevant to the problem under investigation*. The beauty of ASH is that if I’m not sure what *is* relevant yet, I can start with a wide scope of all sessions to help me find the session or sessions of interest and gradually narrow my focus. It has the history that AWR has, but with finer granularity of scope (whether that be sessions, SQL statements, modules, actions or one of the many other ASH dimensions). Better still, if the issue turns out to be one long-running SQL statement, then a SQL Monitoring Active Report probably blows all the other tools out of the water!

With all that capability, why are experienced people still so obsessed with the Top 5 Timed Events section of an AWR report as one of their first points of reference? Is it just because they’ve become attached to it over the years of using Statspack? AWR has its uses (see JB’s comments for some thoughts on that, and I’ve blogged about it extensively in the past), but analysing specific performance issues on Production databases is not its strength. In fact, if we’re going to use AWR, why not just use ADDM and let software perform automatically the same type of analysis most DBAs would do anyway (and in many cases, not as well!)?

Remember, there’s a reason behind these Recurring Conversations posts. If I didn’t keep finding myself debating these issues with experienced Oracle techies, I wouldn’t harbour doubts about what seem to be common approaches. In this case, I still think there are far too many people using AWR where ASH or SQL Monitoring are far more appropriate tools. I also think that if we stick with a one hour interval rather than a 15 minute interval, we can retain four times as much *history* in the same space! When it comes to AWR – give me long retention over a shorter interval every time!

P.S. As well as thanking JB for his usual insightful comments, I also want to thank Martin Paul Nash. When I was giving an AWR/ASH presentation at this spring’s OUGN conference, he noticed the bullet point I had on the slide suggesting that we *shouldn’t* change the AWR interval and asked why. Rather than going into it at the time, I asked him to remind me at the end of the presentation and then, because I had no time to answer, I promised I’d be blogging about it that weekend. That was almost 4 months ago! Sigh. But at least I got there in the end! ;-)

Oracle Database 12c Release 12.1.0.2 – My First Observations. Licensed Features Usage Concerns – Part I.

Kevin Closson - 9 hours 13 min ago

My very first words on Oracle Database 12c Release 12.1.0.2 can be summed up in a single quotable quote:

This release is hugely important.

I’ve received a lot of email from folks asking me to comment on the freshly released In-Memory Database Option. These words are so overused. This post, however, is about much more than word games. Please read on…

When querying the dba_feature_usage_statistics view, the option is known as “In-Memory Column Store.” On the other hand, I’ve read a paper on oracle.com that refers to it as the “In-Memory Option,” as per this screen shot:

in-memory-paper

A Little In-Memory Here, A Little In-Memory There

None of us can forget the era when Oracle referred to the flash storage in Exadata as a “Database In-Memory” platform. I wrote about all that in a post you can view here: click this. But I’m not blogging about any of that. Nonetheless, I remained confused about the option/feature this morning as I was waiting for my download of Oracle Database 12c Release 12.1.0.2 to complete. So, I spent a little time trying to cut through the fog and get some more information about the In-Memory Option. My first play was to search for the term in the search bar at oracle.com. The following screen shot shows the detritus oracle.com returned due to the historical misuse and term overload–but, please, remember that I’m not blogging about any of that:

In-Memory-12c-Oraclecom-Pollution

As the screenshot shows one must eyeball their way down through 8 higher-ranking search results that have nothing to do with this very important new feature before one gets to a couple of obscure blog links. All this term overload and search failure monkey-business is annoying, yes, but I’m not blogging about any of that.

What Am I Blogging About?

This is part I in a short series about Oracle licensing ramifications of the In-Memory Option/In-Memory Column Store Feature.

The very first thing I did after installing the software was to invoke the SLOB database create scripts to quickly get me on my way. The following screen shot shows how I discovered that the separately-licensed In-Memory Option/In-Memory Column Store Feature is enabled by default:

inmem1

Now, this is no huge red flag because separately-licensed features like Real Application Clusters and Partitioning are sort of “on” by default. This doesn’t bother me because a) one can simply unlink RAC to be safe from accidental usage and b) everyone that uses Enterprise Edition uses the Partitioning Option (I am correct on that assertion, right?).  However, I think things are a little different with the In-Memory Option/In-Memory Column Store Feature since it is “on” by default and a simple command like the one in the following screen shot means your next license audit will be, um, more entertaining.

in-mem-timebomb

OK, please let me point out that I’m trying as hard as I can to not make a mountain out of a mole-hill. I started this post by stating my true feelings about this release. This release is, indeed, hugely important. That said, I do not believe for a second that every Enterprise Edition deployment of Oracle Database 12c Release 12.1.0.2 will need to have the In-Memory Option/In-Memory Column Store Feature in the shopping cart–much unlike Partitioning, for example. Given the crushing cost of this option/feature I expect that its use will be very selective. It’s for this reason I wanted to draw to people’s attention the fact that–in my assessment–this option/feature is very easy to use “accidentally.” It really should have a default initialization setting that renders the option/feature dormant–but the reality is quite the opposite.

Summary

I have to make this post short and relegate it to part I in a series because I can’t take it to the next level which is to write about monitoring the feature usage. Why? Well, as I tweeted earlier today, the scripts most widely used for monitoring feature usage are out of date because they don’t (yet) report on the In-Memory Column Store feature. The scripts I allude to are widely known by Google search as MOS 1317265.1. Once these are updated to report usage of the In-Memory Option/In-Memory Column Store Feature I’ll post part II in the series.

Thoughts?


Filed under: oracle

My Oracle Support Community Enhancement Brings New Features

Joshua Solomin - 17 hours 52 min ago


Be sure to visit our My Oracle Support Community Information Center to see what is new. Choose from the tabs to watch the How to Video Series. You can also enroll for a live webcast on Wednesday, August 6 at 9am PST.

One change: you can now read blogs in My Oracle Support Community. The new Support Blogs space provides access to Support-related blogs. The My Oracle Support Blog provides posts on the portal and tools that span all product areas.

Support Blogs also allow you to stay in touch with the latest product-specific news, tools, and troubleshooting tips in a growing list of product blogs maintained by Support engineers. Check back frequently to read new posts and discover new blogs.

My Oracle Support Upgrade

Joshua Solomin - 19 hours 8 min ago
Over the weekend, we upgraded My Oracle Support. This upgrade brings changes to help you work more effectively with Oracle Support.

Among the areas you will notice enhancements are:

  • The My Oracle Support customer experience
  • My Oracle Support Community
  • Customer User Administration
  • Knowledge Management
For details about the latest features, visit the My Oracle Support User Resource Center.


Vamos to the OTN Latin America Tour!

OTN TechBlog - Wed, 2014-07-23 15:33

Rick Ramsey, OTN Systems Community Manager, just wrote a GREAT blog post about the upcoming OTN Latin America Tour! A brief excerpt is below - go to his blog post to see the full schedule and register for this series of events which kicks off August 2nd.

"Oracle User Groups in Latin America, our friends in the Oracle product teams, Oracle ACES, and the Oracle Technology Network have put together a terrific agenda for the 2014 Tour. Hands-on labs, demos, and presentations for developers and deployers of the technologies in the Oracle stack, from applications all the way to systems, including Oracle Database and trending topics such as Big Data. Presenters will be product experts drawn from the Oracle ACE Community and Oracle product teams."


CCSF Update: Accreditation appeal denied, but waiting for court date

Michael Feldstein - Wed, 2014-07-23 13:30

It looks like I’ll have the California trifecta for the past week, having already posted on Cal State and University of California news recently. Maybe I should find a Stanford or some other private university story.

In my last post on CCSF from January:

Last week, as expected, a California superior court judge ruled on whether to allow the Accrediting Commission for Community and Junior Colleges (ACCJC) to end accreditation for City College of San Francisco (CCSF) as of July 31, 2014. As reported in multiple news outlets, the judge granted an injunction preventing ACCJC from stripping CCSF’s accreditation at least until a court trial based on the city of San Francisco lawsuit, which would occur in summer 2014 at the earliest. This means that CCSF will stay open for at least another academic term (fall 2014), and it is possible that ACCJC would have to redo their accreditation review.

In the meantime, ACCJC reviewed CCSF’s appeal of the accrediting decision, and it is sticking to its guns, as described in the San Francisco Chronicle:

City College of San Francisco remains out of compliance with eight accreditation standards, so the threat to revoke its accreditation stands, said the commission that set July 31 for the action that would shut the college down.

Accreditation won’t be revoked on that date, however, because a judge delayed the deadline until an October trial can determine if the Accrediting Commission for Community and Junior Colleges properly conducted its 2012 evaluation of City College.

In other words, ACCJC has not changed its determination that CCSF should lose accreditation. There are only two caveats at this point:

  • The injunction that prevents ACCJC from revoking accreditation until the October court date; and
  • A new loophole called “restoration status”.

From the SF Chronicle again:

Besides pinning its hopes on the lawsuit – which could trigger a completely new evaluation – the college has one more option, made possible in June when the U.S. Department of Education firmly explained to the reluctant commission that it had the power to extend the revocation deadline.

As a result of that intervention, the commission created a new “restoration status” for City College – and any other college that finds itself in such a precarious position – giving it two more years to improve and comply with a new range of requirements.

City College would have to apply for the new status by July 31.

But Phil, you say, I am fascinated by the accreditation review process and want more! To keep you going, here is the letter from ACCJC to CCSF rejecting the appeal. In the letter ACCJC calls out the areas where CCSF is still not in compliance:

I.B   Improving Institutional Effectiveness

II.A  Instructional Programs

II.B  Student Support Services

II.C  Library and Learning Support Services

III.B Physical Resources

III.C Technology Resources

III.D Financial Resources

IV.B Board and Administrative Organization

For historical context of how we got here, see this post.

The high-profile game of Chicken continues.

The post CCSF Update: Accreditation appeal denied, but waiting for court date appeared first on e-Literate.

Using SaltStack for Configuration Management

Pythian Group - Wed, 2014-07-23 12:40

In my last blog post I mentioned that SaltStack is a fully featured configuration management solution, but we never looked into using the tool in that way. Today we will begin to explore some basic examples of configuration management with SaltStack. We will look at two aspects of configuration management: installing a package and managing a service.

The scenario

A great repeatable task that can be automated with configuration management, and one faced by many systems administrators, is adding more capacity to an existing front-end webserver pool.

Without a configuration management solution, you generally have to rely on an install document that is maintained by your systems administration team. One of those admins gets the job of preparing the new box and follows the steps in that document to install all of the required packages and configure all of the required services to make that box a “webserver”.

This method introduces a high potential for human error. The person following the document might miss step #17 on page 3, and you end up with a webserver in the pool that delivers content to your users in a strange and inconsistent way.  Depending on the maturity of your infrastructure, you also may or may not have the tools in place to even identify that the webserver is acting strangely due to this misconfiguration until clients begin to complain that your service delivers an unreliable experience.

From a resourcing point of view, this task can tie up two resources: the person doing the box install, and a second person needed to “QA” the box after the install is done to catch the fact that the first person missed step #17 on page 3.

Using a configuration management tool, you define what your box should look like (a model) at a higher, abstracted level, and the tool knows what is required to bring the server in line with its desired state. The tool does not need to be told that on a RedHat-based system you use “yum” to install a package and on Debian systems you use “apt”; as the operator you just say that the system needs to have the package and the tool takes it from there.

By modelling your systems, the tool can then provide accurate repeatability of the task of bringing your systems into line with the defined specifications of the model. And while this does shift the responsibility of eliminating human error onto the model itself, once the model has been tested and validated each subsequent execution will be performed programmatically, without error.

Using SaltStack to install a package and manage a service

The first thing that we will need to do is tell the salt master that we would like to start using it for configuration management. We do this by uncommenting, or adding the following to our /etc/salt/master config:


file_roots:
  base:
    - /srv/salt

In the /srv directory, as root, make a “salt” subdirectory:

mkdir -p /srv/salt

Everything else, from this point forward will be written under the assumption that you are working in the /srv/salt dir.

Salt formulas

In Salt the set of instructions, or “model”, that you define is known as a formula. Salt uses PyYAML as its configuration syntax. The first thing that we need to do is define a base formula called “top.sls”:


base:
  '*':
    - motd
  'web*':
    - apache
    - webserver

This tells salt that all boxes should have the motd formula and that minions with hostnames starting with “web” should also get the apache and webserver formulas.
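
The motd formula referenced for every minion isn’t shown in the post. Purely as a hypothetical sketch (the file contents and path here are my own assumption, not the author’s), an /srv/salt/motd.sls could look something like this:

# /srv/salt/motd.sls - hypothetical example, not from the original post
/etc/motd:
  file:
    - managed
    - user: root
    - group: root
    - mode: 644
    - contents: |
        This server is managed by SaltStack.
        Manual changes may be overwritten on the next state run.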

Our Apache formula (apache.sls) is very basic for the purposes of this post:


httpd:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: httpd

This tells the minion that it needs to install the package named httpd (remember, the minion knows how to do this), that the service should be running, and that the service has a dependency on the package being installed. That is to say, you can’t manage the service unless the package that provides that service is also there.
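
As an aside, and purely as a hypothetical extension rather than part of the walk-through that follows (the salt://apache/httpd.conf source path is an assumption), the same requisite system can also tie the service to its configuration file so that httpd is restarted whenever that file changes:

# Hypothetical extension of apache.sls - not from the original post
httpd:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: httpd
    - watch:
      - file: /etc/httpd/conf/httpd.conf

/etc/httpd/conf/httpd.conf:
  file:
    - managed
    - source: salt://apache/httpd.conf
    - require:
      - pkg: httpd

The watch requisite is what turns a one-off install into ongoing configuration management: if the managed file drifts, the next state run corrects it and restarts the service.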

When we apply the formula you can see that the minion receives the instruction. The minion installs the package and its dependent packages. Then it starts the service.


[root@ip-10-0-0-170 salt]# salt '*' state.sls apache

ip-10-0-0-171.ec2.internal:
----------
ID: httpd
Function: pkg.installed
Result: True
Comment: The following packages were installed/updated: httpd.
Changes:
----------
apr:
----------
new:
1.5.0-2.11.amzn1
old:

apr-util:
----------
new:
1.4.1-4.14.amzn1
old:

apr-util-ldap:
----------
new:
1.4.1-4.14.amzn1
old:

httpd:
----------
new:
2.2.27-1.2.amzn1
old:

httpd-tools:
----------
new:
2.2.27-1.2.amzn1
old:

mailcap:
----------
new:
2.1.31-2.7.amzn1
old:

----------
ID: httpd
Function: service.running
Result: True
Comment: Started Service httpd
Changes:
----------
httpd:
True

Summary
------------
Succeeded: 2
Failed: 0
------------
Total: 2

On subsequent runs, you can see that the package is already installed and the service is already running.


[root@ip-10-0-0-170 salt]# salt '*' state.sls apache

ip-10-0-0-171.ec2.internal:
----------
ID: httpd
Function: pkg.installed
Result: True
Comment: Package httpd is already installed
Changes:
----------
ID: httpd
Function: service.running
Result: True
Comment: The service httpd is already running
Changes:

Summary
------------
Succeeded: 2
Failed: 0
------------
Total: 2

If either were not true, say if I were to go onto the box and stop the service:


[root@ip-10-0-0-171 ~]# service httpd stop

Stopping httpd: [ OK ]
[root@ip-10-0-0-171 ~]#

The next salt run would start the service again, bringing the box back into compliance with my defined model.


ip-10-0-0-171.ec2.internal:

----------
ID: httpd
Function: pkg.installed
Result: True
Comment: Package httpd is already installed
Changes:
----------
ID: httpd
Function: service.running
Result: True
Comment: Started Service httpd
Changes:
----------
httpd:
True

Summary
------------
Succeeded: 2
Failed: 0
------------
Total: 2
[root@ip-10-0-0-171 ~]# service httpd status
httpd (pid 2493) is running...
[root@ip-10-0-0-171 ~]#

This becomes a powerful auditing tool which allows you to quickly ensure that all boxes of a specific type match each other, and it eliminates the above-mentioned problem of missing step #17 on page 3 of your install doc. With the heavy lifting of this task moved from human operators to the tool, and knowing that each node will be built identically to the others, you can now scale up much more quickly in response to your changing business needs; a task which previously could take a few days is now done in minutes.


Categories: DBA Blogs

Is your mobile network HIPAA compliant?

Chris Foot - Wed, 2014-07-23 11:21

As hospital personnel continue to access patient records through mobile devices, health care organizations are taking new approaches to database security.

Assessing initial requirements
The best way for CIOs in the medical industry to measure the performance of their server protection strategies is to ensure all software deployments are compliant with the Health Insurance Portability and Accountability Act. InformationWeek contributor Jason Wang outlined the basic requirements HIPAA obligates mobile applications and networks to possess:

  • Authorized, defended user access to protected health information
  • Encryption features that hide sensitive data from unsanctioned personnel
  • Routine security updates to eliminate bugs or loopholes in the network
  • A remote access data elimination feature that can be activated by administrators in the event a mobile device is lost, stolen or compromised
  • A solid business continuity/disaster recovery framework that can be tested on a regular basis

With these points in mind, health care organizations would greatly benefit from having a third party develop an enterprise-wide mobile application for their facilities. Salesforce CRM in particular is a solid option for those looking to install such an implementation, primarily due to its reputation for having HIPAA-compliant security features.

The risks involved
Many medical professionals believe employing a mobile network will help their subordinates allot more attention to patients. While this concept may be true, there are a number of threats that, left unaddressed, could compromise such a system. Having a third-party company constantly conduct active database monitoring is imperative to deterring the following dangers:

  • Mobile devices, as well as wearables, are easily misplaced, meaning that those who come across these devices could access private patient information.
  • Because a number of health care providers communicate with patients through social media, malware and other Web-based attacks could be funneled through such channels to infect devices.
  • Because mobile keyboards are rudimentary, users are more likely to use uncomplicated passwords that can easily be unmasked.

Be a smart user
Database administration needs aside, health care companies must also provide personnel with a secure line of communication. HIT Consultant noted that text messaging is a solid way for hospital staff to transfer information quickly and on the go, but the avenue lacks the encryption technology necessary to keep these communications secure.

Installing an encoding program geared specifically toward mobile text messaging is a good move to make. However, employees should also be cognizant of the fact that they should not explicitly share vital information, if they can help it.

The post Is your mobile network HIPAA compliant? appeared first on Remote DBA Experts.

Combining RESTful, JQuery, and KnockoutJS

Kris Rice - Wed, 2014-07-23 08:53
Here's a very easy example of how to join RESTful services with jQuery and some data binding from KnockoutJS. This is a page that shows the last 100 requested features in the SQL Developer Exchange. SQL Developer Exchange Requests The REST service is very simple as it just strips out a <p> that the editor injects and a little format on the date. I did this all on jsbin.com so it's nice and

Using REST to search in JQuery

Kris Rice - Wed, 2014-07-23 08:41
Building on yesterday's post, here's adding a search to the same jQuery list: SQL Developer Exchange Requests. The REST template is only different by adding in a bind variable and adjusting the select some. The other thing I added into the jQuery list is the weight of the searched items, which can be seen in the blue bar here: The javascript changes are

Spark: A Discussion

Greg Pavlik - Wed, 2014-07-23 08:36
A great presentation, worth watching in its entirety.

With apologies to my Hadoop friends but this is good for you too.

The Customer Experience

Steve Karam - Wed, 2014-07-23 07:00
[This entry is part 6 of 6 in the series Grow Your Career]

I’m going to kick this post off by taking sides in a long-standing feud.

Apple is amazing.

There. Edgy, right? Okay, so maybe you don’t agree with me, but you have to admit that a whole lot of people do. Why is that?

NOT part of the Customer Experience. Image from AppleFanSite.com


Sure, there’s the snarky few that believe Apple products are successful due to an army of hipsters with thousands in disposable income, growing thick beards and wearing skinny jeans with pipes in mouth and books by Jack Kerouac in hand, sipping lattes while furiously banging away on the chiclet keyboard of their Macbook Pro with the blunt corner of an iPad Air that sports a case made of iPhones. I have to admit, it does make for an amusing thought. And 15 minutes at a Starbucks in SoHo might make you feel like that’s absolutely the case. But it’s not.

If you browse message boards or other sites that compare PCs and Apple products, you’ll frequently see people wondering why someone would buy a $2,000 Macbook when you can have an amazing Windows 8.1 laptop with better specs for a little over half the price. Or why buy an iPad when you can buy a Samsung tablet running the latest Android which provides more freedom to tinker. Or why even mess with Apple products at all when they’re not compatible with Fragfest 5000 FPS of Duty, or whatever games those darn kids are playing these days.

Part of the Customer Experience. Image provided by cnet.com


The answer is, of course, customer experience. Apple has it. When you watch a visually stunning Apple commercial, complete with crying grandpas Facetiming with their newborn great-grandson and classrooms of kids typing on Macbook Airs, you know what to expect. When you make the decision to buy said Macbook Air, you know that you will head to the Apple Store, usually in the posh mall in your town, and that it will be packed to the gills with people buzzing around looking at cases and Beats headphones and 27″ iMacs. You know that whatever you buy will come in a sleek white box, and will be placed into a thick, durable bag with two drawstring cords that you can wear like a backpack.

When you get it home and open the box, it’s like looking at a Tesla Model S. Your new laptop, situated inside a silky plastic bed and covered in durable plastic with little tabs to peel it off. The sleek black cardboard wrapped around a cable wound so perfectly that there’s not a single millimeter of space between the coils, nor a plug out of place. The laptop itself will be unibody, no gaps for fans or jiggly CD-ROM trays or harsh textures.

All of which is to say, Apple provides an amazing customer experience. Are their products expensive, sometimes ridiculously so? Of course. But people aren’t just buying into the product, they’re buying into the “Apple life.” And why not? I’d rather pay for experiences than products any day. I may be able to get another laptop with better specs than my Macbook Pro Retina, but there will always be something missing. Maybe the screen resolution isn’t quite so good, maybe the battery doesn’t last as long, or maybe it’s something as simple as the power cord coming wrapped in wire bag ties with a brick the size of my head stuffed unceremoniously into a plastic bag. The experience just isn’t there, and I feel like I’ve bought something that’s not as magnificent as the money I put into it, features and specs be damned.

Customer experience isn’t just a buzz phrase, and it doesn’t just apply to how you deal with angry customers or how you talk to them while making a sale. It also doesn’t mean giving your customer everything they want. Customer experience is the journey from start to finish. It’s providing a predictable, customer-centric, and enjoyable experience for a customer that is entrusting their hard-earned cash in your product. And it applies to every business, not just retail computer sellers and coffee shops. What’s more, it applies to anyone in a service-oriented job.

Customer Experience for IT Professionals

In a previous post I mentioned how important it is to know your client. Even if your position is Sub-DBA In Charge of Dropping Indexes That Start With The Letter Z, you still have a customer (Sub-DBA In Charge Of Dropping Indexes That Start With The Letters N-Z, of course). Not just your boss, but the business that is counting on you to do your job in order to make a profit. And you may provide an exceptional level of service. Perhaps you spend countless hours whittling away at explain plans until a five page Cognos query is as pure as the driven snow and runs in the millisecond range. But it’s not just what you do, but how you do it that is important.

I want you to try something. And if you already do this, good on you. Next time you get a phone call request from someone at your work, or have a phone meeting, or someone sends you a chat asking you to do something, I want you to send a brief email back (we call this an “ack” in technical terms) that acknowledges their request, re-lists what they need in your own words (and preferably with bullets), and lists any additional requirements or caveats. Also let them know how long it will take. Make sure you don’t underestimate, it’s better to quote too much time and get it to them early. Once you’ve finished the work, write a recap email. “As we discussed,” you might say, “I have created the five hundred gazillion tables you need and renamed the table PBRDNY13 to PBRDNY13X.” Adding, of course, “Please let me know if you have any other requests.”

If the task you did involves a new connection, provide them the details (maybe even in the form of a TNSNAMES). If there are unanswered questions, spell them out. If you have an idea that could make the whole process easier next time, run it by them. Provide that level of experience on at least one task you accomplish for your customer if you do not already, and let me know if it had any impact that you can tell. Now do it consistently.

From what I’ve seen, this is what separates the “workers” from the “rockstars.” It’s not the ability to fix problems faster than a speeding bullet (though that helps, as a service that sells itself), but the ability to properly communicate the process and give people a good expectation that they can count on.

There’s a lot more to it than that, I know. And some of you may say that you lack the time to have this level of care for every request that comes your way. Perhaps you’re right, or perhaps you’re suffering from IT Stockholm Syndrome. Either way, just give it a shot. I bet it will make a difference, at least most of the time.

Conclusion

Recently, I became the Director of Customer Education and Experience at Delphix, a job that I am deeply honored to have. Delphix is absolutely a product that arouses within customers an eager want; it solves complex business problems, has an amazing delivery infrastructure in the Professional Services team, and provides top-notch support thereafter. A solid recipe for Customer Experience if there ever was one. But it’s not just about the taste of the meal, it’s about presentation as well. And so it is my goal to continuously build an industrialized, scalable, repeatable, and enjoyable experience for those who decide to invest their dollar on what I believe to be an amazing product. Simply put, I want to impart on them the same enthusiasm and confidence in our product that I have.

I hope you have the chance to do the same for your product, whatever it may be.

The post The Customer Experience appeared first on Oracle Alchemist.

Javascript Driven ADF Taskflows for WebCenter Portal

This is a continuation from my previous post - Developing WebCenter Content Cross Platform iDoc Enabled Components for Mobile, ADF, Sharepoint, Liferay.

You can see a video of JIVE Forums integration with a JS Taskflows vs ADF Taskflow running in WebCenter Portal here -

Click here for hi-resolution

This post is aimed at web developers, designers and marketing web teams who aren’t familiar with ADF and want to create reusable dynamic taskflows without the need to learn ADF or Java, providing interactive dynamic regions using JavaScript, HTML and CSS with custom frameworks like jQuery that are designed not to conflict with the ADF JS environment.

Read on for a step-by-step run-through of creating JS driven taskflows -

    1. You will need to download JDeveloper – I’m using JDev 11.1.1.7.0 for WebCenter Portal 11g where I will deploy my custom taskflow driven entirely with Javascript.
    2. Run through the following Oracle guide to setup your project to extend Portal (11.1.1.8.3) - Developing Components for WebCenter Portal Using JDeveloper
    3. Add new taskflow to library by right-clicking WebCenterSpacesExtensions and selecting “New…”
    4. Add ADF Task Flow (JSF)
      .
      1
      .
    5. Name the xml file, leaving the Directory the JDev default
      .
      2
      .
    6. Double click the new xml file and drag a View element into the diagram from the Component Palette
      .
      3
      .
    7. Rename “view1″ to “[taskflow name]View”.
    8. Double click the new view to create a page fragment.
      Update the directory and add \taskflows\[taskflow name]\view
      This will make it easier to sort through in the future when you develop more taskflows.
      .
      4
      .
    9. Edit the JSFF and display code in source view.
      .
      5
      .
    10. Replace with the following -
      <?xml version='1.0' encoding='UTF-8'?>
      <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
                xmlns:af="http://xmlns.oracle.com/adf/faces/rich"
                xmlns:f="http://java.sun.com/jsf/core">
      <af:resource type="javascript">
      <![CDATA[
      /**
       * CREATE BASE JS CONTAINER OBJ
       * This is base class to assist PSA javascript methods to init after page loaded.
       * You can add this script in the head of you template instead of the portlet.
       */
      var FB = window.FB || {},
      	Base = Base || (function() {
      		return {
      			//create multi-cast delegate.
      			onPortalInit: function(function1, function2) {
      				return function() {
      					if (function1) {
      						function1();
      					}
      					if (function2) {
      						function2();
      					}
      				}
      			},
      			//used for chaining methods
      			chainPSA: function() {}
      		}
      	})();
      
      //Use Base method if FB.Base hasn't been created
      FB.Base = FB.Base || Base;
      /************************/
      
      
      
      
      /**
       * CREATE CHAIN WRAPPER
       * Chain method will initialise from Base requirejs core script
       */
      FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
      	//set base mustache template name to load and inject
      	var vUID = 'FB_sampleContainer_${pageFlowScope.containerID}', //(UID) Unique Classname to inject template into - can't use IDs in portal 
      		oConstructor = {
      			vTemplate: 		'import/tpl/sampleTpl', //location of sampleTpl.mustache to load
      			oParams: { //Obj list of default params pulled from sample.xml Input definition
      				title:			'${pageFlowScope.title}',
      				displayTitle: 	'${pageFlowScope.displayTitle}',
      				activeUser: 	'${pageFlowScope.activeUser}'
      			},
      			containerID: 		vUID
      		};
      	
      	//check if array exists from other custom JS Portlets
      	if (typeof(FB.loadTemplate) === 'object') {
      		FB.loadTemplate.portletUIDList.push(vUID);
      	//create empty object
      	} else {
      		FB.loadTemplate = {
      			portletUIDList:[vUID],
      			portlets: {}
      		};
      	}
      
      	//inject params
      	FB.loadTemplate.portlets[vUID] = oConstructor;
      
      });
      /************************/
      ]]>
      </af:resource>
      
      
      <!-- Sample template will be injected here -->
      <af:panelGroupLayout layout="vertical" id="FB-SampleContainer" styleClass="FB_sampleContainer_#{pageFlowScope.containerID} portlet-sampleContainer"></af:panelGroupLayout>
      <!-- xSample template will be injected here -->
      
      
      </jsp:root>

      OVERVIEW:

      This is where the mustache template will be injected to provide the sample component functionality.

    11. <af:panelGroupLayout layout="vertical" id="FB-SampleContainer" styleClass="FB_sampleContainer_#{pageFlowScope.containerID} portlet-sampleContainer"></af:panelGroupLayout>

      The oConstructor specifies the configuration of the component to inject.
      vTemplate points to a JS file that requireJS imports and configures the base sample component from the params defined.

      oParams contains all configuration for the app; at the moment these are scoped params associated with the taskflow that you can allow the user to define and use within your sample component as JS vars.

      var vUID = 'FB_sampleContainer_${pageFlowScope.containerID}', //(UID) Unique Classname to inject template into - can't use IDs in portal 
      		oConstructor = {
      			vTemplate: 		'import/tpl/sampleTpl', //location of sampleTpl.mustache to load
      			oParams: { //Obj list of default params pulled from sample.xml Input definition
      				title:			'${pageFlowScope.title}',
      				displayTitle: 	'${pageFlowScope.displayTitle}',
      				activeUser: 	'${pageFlowScope.activeUser}'
      			},
      			containerID: 		vUID
      		};

      A simple check to see if other components exist on the page appends the new component to the JS array “portletUIDList”, associated with a JS object holding the component params in “portlets”.

      //check if array exists from other custom JS Portlets
      	if (typeof(FB.loadTemplate) === 'object') {
      		FB.loadTemplate.portletUIDList.push(vUID);
      	//create empty object
      	} else {
      		FB.loadTemplate = {
      			portletUIDList:[vUID],
      			portlets: {}
      		};
      	}
      
      	//inject params
      	FB.loadTemplate.portlets[vUID] = oConstructor;

      Finally, the JS configuration is wrapped in a JS chain wrapper that will only initialise once requireJS has loaded in all its core base libraries, like jQuery.

      FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
      
      //code
      
      });

      Make sure within your ADF template you have set up the requireJS core and have the following to initialise FB.Base.chainPSA and loop through the custom taskflows to display on the page -

      //load JS Components
      		if (FB.Base.chainPSA) {
      			FB.Base.chainPSA();
      		}

      //loop and request all templates required
      			for (x;x<lPortletList;x++) {
      				var vPortletUID 	= aPortletList[x],
      					oPortlet 		= FB.loadTemplate.portlets[vPortletUID];
      				
      				//define temp object info to pass into script when init	
      				define('temp'+x, oPortlet);
      				
      				//request and initialise portlet template & pass params
      				require([oPortlet.vTemplate,'temp'+x], function(tpl,oPortlet) {
      					console.log('[IMPORTED TEMPLATE]',tpl.component,oPortlet);
      					tpl.init(oPortlet);
      				});
      			}

    12. To add taskflow parameters open the xml file again.
    13. Select Overview tab bottom left of the screen.
      Select the Parameters side tab.
      Add the following four example params -
      .
      6
      You will see these when we add and edit the taskflow on a portal page in WebCenter Composer.
    14. Deploy the taskflow to WebCenter Portal following the last steps in the Oracle Guide. Once the new taskflow / spaces extension project has been deployed, load WebCenter Portal.
      The following screenshots are from PS5; the UI has changed since then, but you should be able to work out the differences.
    1. Go into administration area of the portal and select the “Resources” Tab
    2. Select the “Resource Catalogs” from the items on the left under the “Structure” heading.
      A list of Resource Catalogs will be available. You can create a new one or use an existing one. Make sure the one you are updating is the one being used by the portal you want to add the taskflow into.
      .
      7
      .
    3. Select the resource catalogue and Edit from the Edit Menu drop down.
      .
      8
      .
    4. A window will appear where you can add folders and define where you want your components to appear.
      I have created a Demo Taskflow folder.
    5. Select “Add From Library” from the Add dropdown menu.
      .
      9
      .
    6. Drill into Taskflows and add your [Taskflow] – I am adding the sample taskflow I created earlier.
    7. Go into your portal, create a new page, and add the new taskflow.
      Here is an example of the Jive Forums that I recreated as a JS driven taskflow.
      .
      11
      .
      12
      .
    8. And the final output of the taskflow on the page.
      13


The post Javascript Driven ADF Taskflows for WebCenter Portal appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Developing WebCenter Content Cross Platform iDoc Enabled Components for Mobile, ADF, Sharepoint, Liferay

So over the last couple of months I’ve been thinking and tinkering with code – “What’s the best approach for creating WebCenter Content (WCC) components that I can consume and reuse across multiple platforms and environments?”
Is it… Pagelet Producer, or maybe an iFrame? But these solutions just weren’t good enough or didn’t allow the flexibility I really wanted.

I needed a WCC solution that could easily be consumed into mobile, either Cordova (hybrid app) or ADF Mobile (AMX views), and that worked on different devices/platforms as well as on any enterprise app, i.e. SharePoint (.Net), Liferay, WebCenter Portal (ADF), or even consumed into the new WebCenter Content ADF WebUI – providing the added advantage that there would not need to be multiple branches of code or redevelopment of the component for each platform and environment.

And in the famous words of Victor Frankenstein.. “It’s Alive!!”

Yes; after tinkering around and trying different approaches this is the solution I created to support the above model.
I’m not saying this is the right approach or supported by the enterprise vendors but an approach that is reusable and can work on all enterprise apps.

 

[VIDEO CONVERTING]…

Here’s a quick video of a drag/drop MultiUploader component I created for WebCenter Content Classic that I can reuse on .Net and ADF WebCenter Portal/Content as well as mobile.

Read on to find out more on how this was achieved..

1) First I’m going dig into WebCenter Content and explain underlying structure of the component.

To create a flexible base model I created a light Javascript framework very similar to AngularJS or ReactJS.

This would be the base component that would enable additional components on the page with the use of Mustache (JS templates) to drive and inject dynamic functional areas of content into a specified DOM node by ID or className.
Any changes of layout within the component are handled via an AJAX request to a cached mustache template which updates the DOM when needed (similar to ADF’s PPR); any user interaction is handled through event-driven actions from the imported templates.

RequireJS is used to supply a flexible module-loading framework, where I do not need to worry about conflicts of JS libraries; it is also used to load in mustache templates and additional JS functionality when needed.

You’re probably thinking that there are going to be a lot of AJAX requests going back and forth and its going to be slow.. Just Check out the video.. And the answer is not really – the mustache templates are going to be smaller than average images you load on a page..

So as an example, for the MultiUploader I only have 1 mustache template that is 9KB. All interaction is handled by 2 JS files that are 39KB uncompressed.
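
The template modules themselves (e.g. import/tpl/multiUploadTpl) aren’t shown in this post. Purely as a rough sketch of the pattern, assuming jQuery and mustache.js are mapped in the RequireJS config and using a hypothetical .mustache path, such a module might look something like this:

// Hypothetical sketch only - not the actual Fishbowl import/tpl/multiUploadTpl code
define(['jquery', 'Mustache'], function($, Mustache) {
	return {
		component: 'multiUploader',

		// called by fb.core.js with the oConstructor object defined in the WCC resource include
		init: function(oPortlet) {
			// fetch (and let the browser cache) the small mustache template file
			$.get('resources/FishbowlMultiUploader/tpl/multiUpload.mustache', function(vTplText) {
				// render the template with the component params
				var vHtml = Mustache.render(vTplText, oPortlet.oParams);

				// inject into the container div located by its unique classname
				$('.' + oPortlet.containerID).html(vHtml);

				// wire up event driven interaction (drag/drop, upload buttons etc.) here
			}, 'text');
		}
	};
});

The key point is that the module returns an object with an init method, which is exactly what the require([...]) loop in fb.core.js later calls with the stored oConstructor params.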

2) As mentioned, the base model WCC component – “FishbowlModuleLoader” – will load in and initiate all other components on the page and will only load and cache required templates and JS files as and when they are needed. There is no point loading in all templates and JS functionality on a page if it is not needed; this improves performance and interaction of the component.

3) The WCC component – “FishbowlMultiUploader”: I’m going to run through and give a quick overview of how this component works.

WebCenter Content Resource Asset

This is the base structure of the Content Component configuration “fb_multi_upload_page_body”; it is consumed into a custom template “MULTI_UPLOAD_PAGE”, which is requested via a custom service request “?IdcService=GET_FB_MULTI_UPLOAD_PAGE”.

<!--
Name:           fb_multi_upload_page_body
Author:         John Sim  [18/06/2014]
Parameters:		
Description:	Page Body for Multi Checkin used in MULTI_UPLOAD_PAGE template
-->
<@dynamichtml fb_multi_upload_page_body@>
[[% FB fb_multi_upload_page_body Template body MULTI_UPLOAD_PAGE %]]

<div id="FB-multiCheckin" class="FB_multiCheckin"></div>

<script>
/**
 * CREATE CHAIN WRAPPER
 * Chain method will load from Base ModuleLoader requirejs core script
 */
FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
	//set base mustache template name to load and inject
	var vUID = 'FB_multiCheckin', //(UID) Unique Classname to inject template into - can't use IDs in portal 
		oConstructor = {
			vTemplate: 'import/tpl/multiUploadTpl', //location of template.mustache to load
			oParams: { //Obj list of default params pulled from multiUploader.xml Input definition
				maxUploadSize:			'10mb',
				defaultDocType:			('<$multiUploadDefaultType$>' !== '')? '<$multiUploadDefaultType$>': 'Document', 
				defaultSecurityGroup:		('<$multiUploadDefaultSecurityGroup$>' !== '')? '<$multiUploadDefaultSecurityGroup$>': 'Public',
				defaultAccount:			'Workspace/'+userName, 
				author:				(typeof(userName) !== 'undefined')? userName: '', 
				httpEnterpriseCgiPath: 		(typeof(httpEnterpriseCgiPath) !== 'undefined')? httpEnterpriseCgiPath: '',
				idcToken: 			(typeof(idcToken) !== 'undefined')? idcToken: '',
				httpWebRoot: 			(typeof(httpWebRoot) !== 'undefined')? httpWebRoot: '',
				enableTagging:			true,
				enableEmails:			true,
				enableBarcode:			true,
				enableCheckinProfiles: 		true,
				showHelpOption: 		true
			},
			containerID: 		vUID
		};
	
	//check if array exists from other custom JS Portlets
	if (typeof(FB.loadTemplate) === 'object') {
		FB.loadTemplate.portletUID.push('FB_multiUploadContainer_' + vUID);
	//create empty object
	} else {
		FB.loadTemplate = {
			portletUIDList:['FB_multiUploadContainer_' + vUID],
			portlets: {}
		};
	}

	//inject params
	FB.loadTemplate.portlets['FB_multiUploadContainer_' + vUID] = oConstructor;
});
/************************/
</script>


<@end@>

 

This is where the mustache template will be injected to provide the multiUpload component functionality.

<div id="FB-multiCheckin" class="FB_multiCheckin"></div>

The oConstructor specifies the configuration of the component to inject.
vTemplate points to a JS file that requireJS imports and configures the base multiUploader component from the params defined.

oParams contains all configuration for the app; at the moment these are mostly hard-coded, but they could be defined as iDoc variables when you install and enable the component within WCC.

var vUID = 'FB_multiCheckin', //(UID) Unique Classname to inject template into - can't use IDs in portal 
		oConstructor = {
			vTemplate: 'import/tpl/multiUploadTpl', //location of template.mustache to load
			oParams: { //Obj list of default params pulled from multiUploader.xml Input definition
				maxUploadSize:			'10mb',
				defaultDocType:			('<$multiUploadDefaultType$>' !== '')? '<$multiUploadDefaultType$>': 'Document', 
				defaultSecurityGroup:		('<$multiUploadDefaultSecurityGroup$>' !== '')? '<$multiUploadDefaultSecurityGroup$>': 'Public',
				defaultAccount:			'Workspace/'+userName, 
				author:				(typeof(userName) !== 'undefined')? userName: '', 
				httpEnterpriseCgiPath: 		(typeof(httpEnterpriseCgiPath) !== 'undefined')? httpEnterpriseCgiPath: '',
				idcToken: 			(typeof(idcToken) !== 'undefined')? idcToken: '',
				httpWebRoot: 			(typeof(httpWebRoot) !== 'undefined')? httpWebRoot: '',
				enableTagging:			true,
				enableEmails:			true,
				enableBarcode:			true,
				enableCheckinProfiles: 		true,
				showHelpOption: 		true
			},
			containerID: 		vUID
		};

A simple check to see if other components exist on the page appends the new component to the JS array “portletUIDList”, associated with a JS object holding the component params in “portlets”.

//check if array exists from other custom JS Portlets
	if (typeof(FB.loadTemplate) === 'object') {
		FB.loadTemplate.portletUID.push('FB_multiUploadContainer_' + vUID);
	//create empty object
	} else {
		FB.loadTemplate = {
			portletUIDList:['FB_multiUploadContainer_' + vUID],
			portlets: {}
		};
	}

	//inject params
	FB.loadTemplate.portlets['FB_multiUploadContainer_' + vUID] = oConstructor;

Finally, the JS configuration is wrapped in a JS chain wrapper that will only initialise once requireJS has loaded in all its core base libraries, like jQuery.

FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {

//code

});

 

4) So let’s take a look at how the base component “FishbowlModuleLoader” works.

Essentially this defines the FB.Base.chainPSA chain wrapper method in the header – it does not need jQuery or any other library.

<!--
Name:           std_html_head_declarations
Author:         John Sim  [18/06/2014]
Parameters:		
Description:	Add required header resources
-->
<@dynamichtml std_html_head_declarations@>
[[% FB std_html_head_declaration Update head add JS libs for module loader %]]

<$include super.std_html_head_declarations$>

<script>
/**
 * CREATE BASE JS CONTAINER OBJ
 * DONOT ADD JQUERY this is base class to assist PSA javascript methods to init after page loaded.
 */
var FB = window.FB || {},
	Base = Base || (function() {
		return {
			//create multi-cast delegate.
			onPortalInit: function(function1, function2) {
				return function() {
					if (function1) {
						function1();
					}
					if (function2) {
						function2();
					}
				}
			},
			//used for chaining methods
			chainPSA: function() {}
		}
	})();

//Use Base method if FB.Base hasn't been created
FB.Base = FB.Base || Base;
/************************/
</script>

<@end@>

You could cache this and put it in a script file; I’ve just put it inline to make it easier for you to read.

In the footer we define requireJS and the configuration that loads in the base libraries that we need for all components, i.e. jQuery and maybe a few others.
Also we set up fb.core.js as our base script to import and load in the core framework I built to handle routing and DOM event interaction as well as global vars.

<!--
Name:           std_page_end
Author:         John Sim  [18/06/2014]
Parameters:		
Description:	Component Module Loader RequireJS setup
-->
<@dynamichtml std_page_end@>
[[% FB std_page_end Add Module Loader RequireJS lib %]]

<$include super.std_page_end$>


<!-- Init FB Component Module Loader -->
<script src="<$HttpWebRoot$>resources/FishbowlModuleLoader/js/core/config.js"></script>
<script src="<$HttpWebRoot$>resources/FishbowlModuleLoader/js/libs/requirejs/require.min.js" data-main="fb.core"></script>
<!-- Init FB Component Module Loader -->
<@end@>

fb.core.js – so here is where the magic begins:

// REQUIREJS Base configuration
require([
	//Dom ready req plugin
	'domReady',
	
	
	//core 
	'import/Layout',
	'import/Action',
	'import/Navigation',
	'import/Global',
	
	
	//Plugins
	'Moment',		//date plugin momentjs
	'ftlabsFastClick', 	//fix touch 300ms delay
	'fb'			//fb global methods
	

	
], function(domReady, Layout){
console.info('[ALL MODULES LOADED]');

	domReady(function() {
		console.info('[DOM READY]');
		
		//initialise layout DOM events ie click, touch etc.
		Layout.init();
		
		//load JS Components
		if (FB.Base.chainPSA) {
			FB.Base.chainPSA();
		}
		
		//check if any JS driven template containers exist
		if (typeof(FB.loadTemplate) !== 'undefined') {
			var aPortletList 	= FB.loadTemplate.portletUIDList,
				lPortletList 	= aPortletList.length,
				x 				= 0;
				
			//loop and request all templates required
			for (x;x<lPortletList;x++) {
				var vPortletUID 	= aPortletList[x],
					oPortlet 		= FB.loadTemplate.portlets[vPortletUID];
				
				//define temp object info to pass into script when init	
				define('temp'+x, oPortlet);
				
				//request and initialise portlet template & pass params
				require([oPortlet.vTemplate,'temp'+x], function(tpl,oPortlet) {
					console.log('[IMPORTED TEMPLATE]',tpl.component,oPortlet);
					tpl.init(oPortlet);
				});
			}
		}
		
	});
	
});

Once the DOM has fully loaded, FB.Base.chainPSA() is initiated; this sets up and configures the FB.loadTemplate object that contains all information associated with the components that will need to be loaded into the page.

Here we loop through and load in all templates and pass across the component configuration to the templates to be initialised:

//loop and request all templates required
			for (x;x<lPortletList;x++) {
				var vPortletUID 	= aPortletList[x],
					oPortlet 		= FB.loadTemplate.portlets[vPortletUID];
				
				//define temp object info to pass into script when init	
				define('temp'+x, oPortlet);
				
				//request and initialise portlet template & pass params
				require([oPortlet.vTemplate,'temp'+x], function(tpl,oPortlet) {
					console.log('[IMPORTED TEMPLATE]',tpl.component,oPortlet);
					tpl.init(oPortlet);
				});
			}

And that’s all there is to it.

5) Let’s dig into WebCenter Portal now. How can you reuse all that code you’ve written for WebCenter Content Classic within ADF?

Easy: let’s create a JS-driven taskflow template that we can dump into the resource catalogue and drag-and-drop to reuse throughout any page wherever it is needed.

I’ve created a new post for this part -
Read on here to find out how to create JS Driven Taskflow templates.


Some gotchas –

Some things to think about if you do decide to use this approach.

  1. You will need to make sure that all AJAX requests are made on the same domain.
    1. or enable CORS from UCM to accept requests cross-domain (mobile works cross-domain).
  2. WCC needs to be accessible by the users browser
    1. You can setup a proxy service and only allow access to the custom services you require to lock down other UCM environment access if needed.

And finally, one thing that comes to mind: here I am using static mustache templates, but there is nothing stopping you from creating a custom WCC service to generate mustache templates with embedded iDoc if you want.

The post Developing WebCenter Content Cross Platform iDoc Enabled Components for Mobile, ADF, Sharepoint, Liferay appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Teradata bought Hadapt and Revelytix

Curt Monash - Wed, 2014-07-23 02:29

My client Teradata bought my (former) clients Revelytix and Hadapt.* Obviously, I’m in confidentiality up to my eyeballs. That said — Teradata truly doesn’t know what it’s going to do with those acquisitions yet. Indeed, the acquisitions are too new for Teradata to have fully reviewed the code and so on, let alone made strategic decisions informed by that review. So while this is just a guess, I conjecture Teradata won’t say anything concrete until at least September, although I do expect some kind of stated direction in time for its October user conference.

*I love my business, but it does have one distressing aspect, namely the combination of subscription pricing and customer churn. When your customers transform really quickly, or even go out of existence, so sometimes does their reliance on you.

I’ve written extensively about Hadapt, but to review:

  • The HadoopDB project was started by Dan Abadi and two grad students.
  • HadoopDB tied a bunch of PostgreSQL instances together with Hadoop MapReduce. Lab benchmarks suggested it was more performant than the coyly named DBx (where x=2), but not necessarily competitive with top analytic RDBMS.
  • Hadapt was formed to commercialize HadoopDB.
  • After some fits and starts, Hadapt was a Cambridge-based company. Former Vertica CEO Chris Lynch invested even before he was a VC, and became an active chairman. Not coincidentally, Hadapt had a bunch of Vertica folks.
  • Hadapt decided to stick with row-based PostgreSQL, Dan Abadi’s previous columnar enthusiasm notwithstanding. Not coincidentally, Hadapt’s performance never blew anyone away.
  • Especially after the announcement of Cloudera Impala, Hadapt’s SQL-on-Hadoop positioning didn’t work out. Indeed, Hadapt laid off most or all of its sales and marketing folks. Hadapt pivoted to emphasize its schema-on-need story.
  • Chris Lynch, who generally seems to think that IT vendors are created to be sold, shopped Hadapt aggressively.

As for what Teradata should do with Hadapt:

  • My initial thought on Hadapt was to just double down, pushing the technology forward, presumably including a columnar option such as the one Citus Data developed.
  • But upon reflection, if it made technical sense to merge the Aster and Hadapt products, that would be better yet.

I herewith apologize to Aster co-founder and Hadapt skeptic Tasso Argyros (who by the way has moved on from Teradata) for even suggesting such heresy. :)

Complicating the story further:

  • Impala lets you treat data in HDFS (Hadoop Distributed File System) as if it were in a SQL DBMS. So does Teradata SQL-H. But Hadapt makes you decide whether the data is in HDFS or the SQL DBMS, and it can’t be in both at once. Edit: Actually, see Dan Abadi’s comments below.
  • Impala and Oracle’s new SQL-H competitor have daemons running on every data node. So does one option in Hadapt. But I don’t think SQL-H does that yet.

I was less involved with Revelytix than with Hadapt (although I’m told I served as the “catalyst” for the original Teradata/Revelytix partnership). That said, Teradata — like Oracle — is always building out a data integration suite to cover a limited universe of data stores. And Revelytix’ dataset management technology is a nice piece toward an integrated data catalog.

Categories: Other

EID Holidays and things to do

Syed Jaffar - Wed, 2014-07-23 02:07
Looking forward to a much-anticipated 9-day EID holiday break to work through the to-do list I have been carrying for a while now. I am determined to complete some of the writing assignments that have been pending for a long time. At the same time, I will be exploring the new features of v12.1.0.2 and Exadata, as we might be going for that combination in the coming weeks for a Data Warehouse project.

I will surely blog about my test scenarios and share my findings on the Oracle 12c new features.

I wish everyone a very happy and prosperous EID in advance.

12c Threaded Execution Test

Bobby Durrett's DBA Blog - Tue, 2014-07-22 17:39

I did a quick check of some facts I’m studying about Oracle 12c and its new threaded execution mode.  I set this parameter:

alter system set THREADED_EXECUTION=true scope=spfile;

I had to connect SYS as SYSDBA with a password to get the system to bounce, since OS authentication is not supported with threaded execution.

Then it had these processes only:

oracle    1854     1  0 09:17 ?        00:00:00 ora_pmon_orcl
oracle    1856     1  0 09:17 ?        00:00:00 ora_psp0_orcl
oracle    1858     1  2 09:17 ?        00:00:00 ora_vktm_orcl
oracle    1862     1  3 09:17 ?        00:00:00 ora_u004_orcl
oracle    1868     1 99 09:17 ?        00:00:17 ora_u005_orcl
oracle    1874     1  0 09:17 ?        00:00:00 ora_dbw0_orcl

This differs from some of my 12c OCP study material but agrees with the manuals.  Only pmon, psp, vktm, and dbw have dedicated processes.

Also, I found that I needed this value in the listener.ora:

dedicated_through_broker_listener=on

I needed that value to connect using a thread. Before I put that in, it spawned a dedicated server process when I connected over the network.

Lastly, contrary to what I had read, I didn’t need to set the local_listener parameter to get new connections to use a thread:

SQL> show parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------------
local_listener                       string
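
If you want to confirm which connections actually ended up on threads, here is a quick sketch (assuming the EXECUTION_TYPE column that 12c added to V$PROCESS, which reports THREAD or PROCESS for each entry):

-- how many entries are threads vs. dedicated processes
select execution_type, count(*)
  from v$process
 group by execution_type;

-- is my own session serviced by a thread?
select p.pname, p.spid, p.stid, p.execution_type
  from v$session s, v$process p
 where s.paddr = p.addr
   and s.sid = sys_context('userenv','sid');

The second query should show EXECUTION_TYPE = 'THREAD' for a network connection once dedicated_through_broker_listener is on, and 'PROCESS' if it was spawned as a dedicated server.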

- Bobby

Categories: DBA Blogs

REGEXP_LIKE: strange unspecified value in parameter “modifier”

XTended Oracle SQL - Tue, 2014-07-22 15:05

Today I noticed a strange thing in the predicate section of an execution plan for a simple query with regexp_like, where the 3rd parameter “MODIFIER” was not specified:

SQL> select * from dual where regexp_like(dummy,'.');

D
-
X

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
SQL_ID  97xuqf9cmjsta, child number 0
-------------------------------------
select * from dual where regexp_like(dummy,'.')

Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter( REGEXP_LIKE ("DUMMY",'.',HEXTORAW('F07FD85CFF0700006A1116
              45010000000000000000000000FC12164501000000000000000000000000000000000000
              0010000000000000001880D85CFF07000002000000000000000000000081000000') ))


20 rows selected.

It is particularly interesting that the values in HEXTORAW() are always different for different first parameters:

SQL> select * from dual where regexp_like(dummy,'x');
...
   1 - filter( REGEXP_LIKE ("DUMMY",'x',HEXTORAW('3895D330FF0700006A1116
              45010000000000000000000000FC12164501000000000000000000000000000000000000
              0011000000000000006895D330FF07000002000000000000000000000081000000') ))
SQL> select * from dual where regexp_like(dummy,'y');
...
   1 - filter( REGEXP_LIKE ("DUMMY",'y',HEXTORAW('00DA3C3FFF0700006A1116
              45010000000000000000000000FC12164501000000000000000000000000000000000000
              00110000000000000030DA3C3FFF07000002000000000000000000000081000000') ))
SQL> select * from dual where regexp_like(dummy||'','x')
...
   1 - filter( REGEXP_LIKE ("DUMMY"||'','x',HEXTORAW('70964F2FFF0700006A
              111645010000000000000000000000FC1216450100000000000000000000000000000000
              0000001100000000000000A0964F2FFF07000002000000000000000000000081000000')
               ))

I don’t know what it means, but it looks like garbage from memory.
When I noticed this, I decided to check how regexp_like would behave in function-based indexes:

SQL> create table xtest as
  2    select dummy||level as str
  3    from dual
  4    connect by level<=30;

Table created.

SQL> select * from xtest where case when regexp_like(str,'1') then 1 end = 1;
...
12 rows selected.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
SQL_ID  7ztp0k8c1zn2h, child number 0
-------------------------------------
select * from xtest where case when regexp_like(str,'1') then 1 end = 1

Plan hash value: 4207139086

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| XTEST |    12 |   264 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(CASE  WHEN  REGEXP_LIKE
              ("STR",'1',HEXTORAW('68F9CB32FF0700006A111645010000000000000000000000FC1
              216450100000000000000000000000000000000000000110000000000000098F9CB32FF0
              7000002000000000000000000000081000000') ) THEN 1 END =1)

SQL> create index xtest_fbi on xtest(case when regexp_like(str,'1') then 1 end);

Index created.

SQL> select * from xtest where case when regexp_like(str,'1') then 1 end = 1;
...
12 rows selected.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
SQL_ID  7ztp0k8c1zn2h, child number 0
-------------------------------------
select * from xtest where case when regexp_like(str,'1') then 1 end = 1

Plan hash value: 1479471124

-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| XTEST     |    12 |   300 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | XTEST_FBI |    12 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("XTEST"."SYS_NC00002$"=1)

SQL> select column_expression from user_ind_expressions e where e.index_name='XTEST_FBI';

COLUMN_EXPRESSION
-----------------------------------------------------------------------------------------
CASE  WHEN  REGEXP_LIKE ("STR",'1') THEN 1 END

As you can see it works fine, although the predicate from the first execution plan differs from the FBI expression.
Then I dumped a 10053 trace and noticed that the HEXTORAW(…) function appeared in the “Explain Plan Dump” section only, so it looks like just a plan output bug.
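
One way to double-check where that extra argument actually lives (a sketch, assuming dbms_xplan.display_cursor simply formats what is stored in V$SQL_PLAN) is to look at the raw FILTER_PREDICATES column for the same cursor:

-- raw predicate text as stored for the cursor from the first example
select id, filter_predicates
  from v$sql_plan
 where sql_id = '97xuqf9cmjsta'
   and filter_predicates is not null;

My expectation is that the same HEXTORAW(...) value shows up there too, i.e. the garbage is baked into the stored plan text rather than being invented at display time, which would still be consistent with it being purely a plan output artefact.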

Categories: Development