I’ve popped this note to the top of the stack because I’ve added an index to Randolf Geist’s series on parallel execution skew, and included a reference to his recent update to the XPLAN_ASH utility.
This is the directory for a short series I wrote discussing how to interpret parallel execution plans in newer versions of Oracle.
- A quiz introducing Bloom Filters
- What we want to see in parallel execution plans
- Creating the demonstration data
- Parallel hash join with Broadcast strategy (and a note on 12c pq_replicate)
- Table Queues, message size, and Bloom filters
- Parallel hash join Hash/Hash distribution
For other aspects of parallel execution, here are links to several articles by Randolf Geist, published on his own blog or on Oracle’s Technet:
- Parallel Execution Analysis Using ASH – The XPLAN_ASH Tool (including notes on SQL Monitoring)
- XPLAN_ASH update – June 2014
- Understanding Parallel Execution – pt. 1
- Understanding Parallel Execution – pt. 2
One of the awkward problems you can encounter with parallel execution is data skew, which has the potential to make just one slave in a set do (almost) all the work, reducing performance to something close to serial execution times. Randolf Geist has written a series of articles on parallel skew that has been published by AllthingsOracle over the last few months.
- Demonstrating Skew
- 12c Hybrid Hash distribution with skew detection
- Addressing skew through manual rewrites
- Skew caused by Outer Joins
So Oracle 12.1.0.2 is out with a number of interesting new features, of which the most noisily touted is the “in-memory columnar storage” feature. As ever the key to making best use of a feature is to have an intuitive grasp of what it gives you, and it’s often the case that a good analogy helps you reach that level of understanding; so here’s the first thought I had about the feature during one of the briefing days run by Maria Colgan.
“In-memory columnar storage gives you bitmap indexes on OLTP systems without the usual disastrous locking side effects.”
Obviously the analogy isn’t perfect … but I think it’s very close: for each column stored you use a compression technique to pack the values for a large number of rows into a very small space, and for each stored row you can derive the rowid by arithmetic. In highly concurrent OLTP systems there’s still room for some contention as the session journals are applied to the globally stored compressed columns (but then, private redo introduces some effects of that sort anyway); and the “indexes” have to be created dynamically as tables are initially accessed (but that’s a startup cost, its timing can be controlled, and it’s basically limited to a tablescan).
Whatever the technical hand-waving it introduces – thinking of the in-memory thing as enabling real-time bitmaps ought to help you think of ways to make good use of the feature.
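To make the analogy concrete, here is a minimal sketch of how the feature is switched on: the INMEMORY_SIZE parameter carves out the column store, and the INMEMORY clause marks a table for population. The table name here is just an example, not one from any real system.

```sql
-- Reserve space for the in-memory column store (takes effect after restart)
ALTER SYSTEM SET inmemory_size = 1G SCOPE=SPFILE;

-- Mark an example table for population in the column store; population
-- happens on first scan, much like the dynamic "index" build described above
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY NONE;

-- Check what has been populated so far
SELECT segment_name, populate_status, bytes_not_populated
FROM   v$im_segments;
```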
In fact, the reason for this post is a kind of double entendre on silence: the relative silence in literate western circles with respect to Japanese literature of the past century. Over the last month, I realized that virtually no one I had spoken with had read a single Japanese novel. Yet, like Russia of the 19th century, Japan produced a concentration of great writers and novelists in the late 20th century that is set apart: the forces of profound national change (and defeat) created the crucible of great art. That art carries the distinctive aesthetic sense of Japan - a kind of openness of form - but is necessarily the carrier of universal, humanistic themes.
Endo is a writer of the post-war period - the so-called third generation, and in my view the last of the wave of great Japanese literature. Read him. But don't stop - perhaps don't start - there. The early 20th-century works of Natsume Soseki are a product of the Meiji period. In my view, Soseki is not only a father of Japanese literature but one of the greatest figures of world literature taken as a whole - I Am a Cat remains one of my very favorite novels. Two troubling post-war novels by Yukio Mishima merit attention - Confessions of a Mask and The Sailor Who Fell from Grace with the Sea - both of which I would characterize broadly as existential masterpieces. The topic of identity in the face of westernization is also a moving theme in Osamu Dazai's No Longer Human. I hardly mean this as a complete survey - something in any case I am not qualified to provide - just a pointer toward something broader and important.
My encounter with contemporary Japanese literature - albeit limited - has been less impactful (I want to like Haruki Murakami in the same way I want to like Victor Pelevin, but both make me think of the distorted echo of something far better). And again like Russia, it is difficult to know what to make of Japan today - where its future will lead, whether it will see a cultural resurgence or decline. It is certain that its roots are deep and I hope she finds a way to draw on them and to flourish.
The installer works on the Mac OS platform: you can install the entire Oracle BPM 12c infrastructure and run a domain. However, the Human Task wizard fails to open when running JDeveloper on Mac OS. I have developed an extremely basic Oracle BPM 12c sample application to test the Human Task wizard with JDeveloper installed on Oracle Enterprise Linux and on Mac OS - BpmApplication.zip.
The Human Task wizard loads fine with JDeveloper 12c installed on Linux:
The same Human Task fails to open with JDeveloper 12c installed on Mac OS. A Null Pointer exception is generated, thrown from a JDeveloper IDE class:
Messages in JDeveloper log for this exception:
You should probably not use Oracle BPM 12c for development on Mac OS; use the Linux or Windows platforms instead. This is a bit of a pity, but on the other hand JDeveloper extensions (except ADF Mobile) have always had issues running on Mac OS. There should be no difference, but it seems development on Linux or Windows with JDeveloper is more stable.
Oracle announced their Big Data SQL product a couple of weeks ago, which effectively extends Exadata’s query-offloading to Hadoop data sources. I covered the launch a few days afterwards, focusing on how it implements Exadata’s SmartScan on Hive and NoSQL data sources and provides a single metadata catalog over both relational and Hadoop data sources. In a Twitter conversation later in the day though, I made the comment that in my opinion the biggest benefit of Big Data SQL will be its ability to extend Oracle’s security model to Hadoop data sources, because Hadoop security is still a bit of a mess:
To me the greatest benefit of Big Data SQL is the single security model; even with Sentry, Hadoop security is fragmented and a mess (IMO)
— Mark Rittman (@markrittman) July 17, 2014
I’ve been working on an Oracle Big Data Appliance project over the past few weeks, as the technical architect and initial sysadmin for the cluster, and it’s given me a first-hand experience of what security’s like on a Hadoop cluster. Over the past few weeks I’ve had to come up with a security policy covering HDFS, Hive and the Cloudera management tools (Cloudera Manager, Hue etc), and try and implement an access and authorisation approach that ensures only designated people can log in, and when they’re in, they can only see the data they’re supposed to see. Hadoop at this point, to my mind, suffers from a couple of major issues when it comes to security:
- It’s fragmented, in that each tool or Hadoop product tends to have its own security setup, and the documentation is all split up, rapidly goes out of date, and is more of a reference than a tutorial (Cloudera’s Security documentation is one of the better examples, but it still splits the key information you need over several sections and several other docs)
If we take a typical security policy that a large enterprise customer’s going to want to put in place, it’ll look something like this:
- Users should only be able to log in via their corporate LDAP account, and we’ll want that login process to be secure so it can’t easily be bypassed
- We want to be able to secure our datasets, so that only authorised users can view particular datasets, and there’s likely to be some groups we grant read-only access to, and others we grant read-write
- The data loading processes for the Hadoop cluster need to be locked-down so they can’t overwrite the datasets of other applications
- Our security policy ideally needs to sync-up, or be an extension of, our existing enterprise security policy, not something we maintain separately
- We need to be able to audit and review who’s actually accessing what dataset, to ensure that these policies are being followed and enforced
- We also need the ability to obfuscate or depersonalise data before it gets into the cluster, and also have the option of encrypting the data at-rest as well as on-the-wire
Back in the early days of Hadoop these types of security policy weren’t often needed, as the users of the Hadoop cluster were typically a small set of data scientists or analysts who’d been cleared already to view and work with the data in the cluster (or more likely, they did it and just didn’t tell anyone). But as we move to enterprise information management architectures such as the one outlined in my two-part blog post series a few weeks ago (pt.1, pt.2), the users of Hadoop and other “data reservoir” data sources are likely to increase significantly in number as data from these systems becomes just another part of the general enterprise data set.
But in practice, this is hard to do. Let’s start with HDFS first, the Hadoop Distributed File System on which most Hadoop data is stored. HDFS aims to look as similar to a Linux or Unix-type filesystem as possible, with similar commands (mkdir, ls, chmod etc) and the same POSIX permissions model, where files and directories are associated with an owner and a group and where permissions are set for that owner, the group and all others. For example, in the HDFS file listing below, the “/user/cust_segment_analysis” directory is owned by the user “mrittman” and the group “marketing”, with the directory owner having full read, write and subdirectory traversal access to the directory, the group having read-only and subdirectory traversal access, and all others having no access at all.
[root@bdanode1 ~]# hadoop fs -ls /user
Found 13 items
drwxrwxrwx   - admin    admin          0 2014-06-02 16:06 /user/admin
drwxr-x---   - mrittman marketing      0 2014-07-26 21:31 /user/cust_segment_analysis
drwxr-xr-x   - hdfs     supergroup     0 2014-05-27 13:19 /user/hdfs
drwxrwxrwx   - mapred   hadoop         0 2014-05-25 20:47 /user/history
drwxrwxr-t   - hive     hive           0 2014-06-04 16:31 /user/hive
drwxr-xr-x   - hue      hue            0 2014-05-31 18:51 /user/hue
drwxrwxr-x   - impala   impala         0 2014-05-25 20:54 /user/impala
drwxrwxr-x   - oozie    oozie          0 2014-05-25 20:52 /user/oozie
drwxrwxrwx   - oracle   oracle         0 2014-06-09 21:38 /user/oracle
drwxr-xr-x   - root     root           0 2014-06-06 16:25 /user/root
drwxr-xr-x   - sample   sample         0 2014-05-31 18:51 /user/sample
drwxr-x--x   - spark    spark          0 2014-05-25 20:45 /user/spark
drwxrwxr-x   - sqoop2   sqoop          0 2014-05-25 20:53 /user/sqoop2
Which all sounds great until you have another group that needs read-write access to the directory - but you’re limited to just one group permissions setting for the directory, and you’ve already used it to set up read-only access for the first group. If you need different sets of security access for different groups, you typically end up creating multiple HDFS directories and multiple copies of the dataset in question, assigning each copy to a different group - which isn’t all that convenient and gives you other problems in terms of maintenance and keeping it all in sync.
What you of course need is something like the “access control lists” (ACLs) you get with operating systems like Windows NT and MacOS, where you can define an arbitrary number of user groups and then assign each of them their own permission set on the directory and the files it contains. The most recent versions of Hadoop actually implement a form of ACL for HDFS, with this feature making its way into the recently-released Cloudera CDH5.1, but these ACLs are an addition to the standard POSIX user, group, others model and aren’t recommended for all files in your HDFS filesystem as according to the Hadoop docs “Best practice is to rely on traditional permission bits to implement most permission requirements, and define a smaller number of ACLs to augment the permission bits with a few exceptional rules. A file with an ACL incurs an additional cost in memory in the NameNode compared to a file that has only permission bits.” Still, it’s better than not having them at all, and I’d imagine using this feature for particular directories and sets of files that need more than one set of group permissions configured for them.
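As a sketch of how the HDFS ACL feature works, using the directory and groups from the example listing above (this assumes CDH5.1 or later with ACLs enabled on the NameNode, and the "sales" group is invented for illustration):

```shell
# Grant a second group read-write access without touching the POSIX group
hadoop fs -setfacl -m group:sales:rwx /user/cust_segment_analysis

# Review the resulting ACL
hadoop fs -getfacl /user/cust_segment_analysis
```

The getfacl output would then show both the original "marketing" group entry and the additional "sales" entry, with the POSIX permission bits left as they were.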
In most cases though, the way you’ll present data out to non-technical end-users and applications is through Hive and Impala tables, or through tools like Pig and Spark. Under the covers, these tools still use HDFS permissions to control access to the data within Hive and Impala tables, but again by default you’re limited to granting access to whole HDFS directories, or the files contained within those directories. Something that addresses this issue is a product called Apache Sentry, an open-source project within the Hadoop family that enables role-based access control for Hive and Impala tables. Oracle are one of the co-founders of the Sentry project and include it in the base software on the Big Data Appliance, and using Sentry you can grant SELECT, INSERT or ALL privileges to a group on a particular Hive or Impala table, rather than on the underlying HDFS directories and files. A form of fine-grained access control can be set up using Sentry by creating views with particular row-level security settings, giving you the basics of a database-like security policy that you can apply over the main way that users access data in the cluster.
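A minimal sketch of what a Sentry grant looks like in HiveQL - the role name is invented for illustration, and the table and group names are just the ones from the HDFS example above:

```sql
-- Create a role and grant it read-only access to one table
CREATE ROLE marketing_ro;
GRANT SELECT ON TABLE cust_segment_analysis TO ROLE marketing_ro;

-- Map the role to an OS/LDAP group rather than to individual users
GRANT ROLE marketing_ro TO GROUP marketing;
```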
But Sentry itself has a few significant prerequisites – you have to enable Kerberos authentication on your cluster, which you should do anyway because of the risk of account spoofing, but it is still a significant thing to set up – and of course you need to link Hive and Impala to your corporate LDAP server and configure them to work in the way that Sentry requires. Most importantly though, you’re still left with two separate security setups – one for your corporate data warehouse and relational data sources, and another for data accessed on Hadoop – and, what with all the disparate and partially-complete open-source products, it’s still hard to be sure that data in your Hadoop cluster is really secure (though products like Cloudera Navigator aim to provide some form of data governance and auditing over these datasets). There’s also still no straightforward way to remove individual customers’ data from the Hadoop dataset (“data redaction”), no easy way to obfuscate or mask data, and no easy way (apart from the Hive views mentioned before) to restrict users to accessing only certain columns in a Hive or Impala table.
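The Hive-view workaround amounts to something like this - column, value and role names are invented for illustration. A view projects only the permitted columns and rows, and the Sentry grant is made on the view instead of the base table:

```sql
-- Expose only non-sensitive columns, and only one region's rows
CREATE VIEW cust_segments_east AS
SELECT cust_id, segment, region
FROM   cust_segment_analysis
WHERE  region = 'EAST';

-- Grant access to the view; the base table stays unreadable to this role
GRANT SELECT ON TABLE cust_segments_east TO ROLE east_analysts;
```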
And so this is where Oracle’s Big Data SQL product could be very interesting. Big Data SQL takes the Exadata model of moving as much filtering and column-projection as it can to the storage server, adding Oracle SmartScan functionality to the Hadoop node and allowing it to understand the full Oracle SQL dialect (and PL/SQL security functions), rather than just the subset of SQL provided by HiveQL and Impala SQL.
More importantly, it’ll enable a single unified data dictionary over both Oracle and Hadoop data sources, presenting Hive tables and NoSQL data as regular Oracle tables and allowing the DBA to create data security, redaction and row-level filtering policies over both relational and Hadoop data – giving you potentially the ability to define a single security policy across all data in your overall information management architecture.
So I think this is actually a “big deal”, and potentially even more game-changing than the SmartScan functionality that got most of the attention at the Big Data SQL product launch. It’s hard to say how well it’ll work in practice, and how much will be enabled on day one, but this feature meets a real need that our customers have now, so I’ll be very interested to try it out when the product becomes available (presumably) later in the year.
Oracle Database 12c Release 12.1.0.2 – My First Observations. Licensed Features Usage Concerns – Part II.
In this post you’ll see that I provide a scenario of accidental paid-feature “use.” The key elements of the scenario are: 1) I enabled the feature (by “accident”) but 2) I didn’t actually use the feature because I neither created nor altered any tables.
In Part I of this series I aimed to bring to people’s attention what I see as a significant variation from the norm when it comes to Oracle licensed-option usage triggers and how to prevent them from being triggered. Oracle Database Enterprise Edition supports several separately licensed options such as Real Application Clusters, Partitioning, and so on. A feature like Real Application Clusters is very expensive, but if “accidental usage” of this feature is a worry on an administrator’s mind there is a simple remedy: unlink it. If the bits aren’t in the executable you’re safe. Is that a convoluted procedure? No. An administrator simply executes make -f ins_rdbms.mk rac_off and then relinks the Oracle executable. Done.
What about other separately licensed options like Partitioning? As I learned from Paul Bullen, one can use the Oracle-supplied chopt command to remove any chance of using Partitioning if, in fact, one does not want to use Partitioning. I thought chopt might be the solution to the issue of possible, accidental usage of the In-Memory Column Store feature/option. However, I found that chopt, as of this point, does not offer the ability to neutralize the feature, as per the following screenshot.
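For reference, this is roughly what the chopt approach looks like for an option it does support (paths vary by installation, and the databases and listener should be shut down first):

```shell
# Relink the Oracle executable with the Partitioning option disabled
$ORACLE_HOME/bin/chopt disable partitioning

# ...and re-enable it later if needed
$ORACLE_HOME/bin/chopt enable partitioning
```

The point of the rest of this post is that, as of this release, no equivalent knob exists for the In-Memory Column Store.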
There is yet no way I know of to prevent accidental use of the In-Memory Column Store feature/option. Am I just making a mountain out of a mole hill? I’ll let you be the judge. And if you side with folks that do feel this is a mountainous-mole hill you’d be in really good company.
Lest folks think that we Oaktable Network Members are a blind, mutual admiration society, allow me to share the rather sizzling feedback I got for raising awareness to this aspect of Oracle Database 12c:
No, I didn’t just want to dismiss this feedback. Instead I pushed the belt-sander off of my face and read the words a couple of times. The author of this email asserted that I’m conveying misinformation (aka “BS”) and to fortify that position it was pointed out that one must:
- Set a database (instance initialization) parameter.
- Bounce the instance.
- Alter any object to use the feature. I’ll interpret that as a DDL action (e.g., ALTER TABLE, CREATE TABLE).
Even before I read this email I knew these assertions were false. We all make mistakes–this I know! I should point out that unlike every release of Oracle from 5.1.17 to 11gR2, I was not invited to participate in the Beta for this feature. I think a lot of Oaktable Network members were in the program–perhaps even the author of the above email snippet–but I don’t know that for certain. Had I encountered this during a Beta test I would have raised it to the Beta manager as an issue and maybe, just maybe, the feature behavior might have changed before first customer ship. Why am I blabbering on about the Beta program? Because, evidently, even Oaktable Network members with pre-release experience with this feature do not know what I’m about to show in the remainder of this post.
What Is An Accident?
Better yet, what is an accident and how full of “BS” must one be to fall prey? Maybe the remainder of the post will answer that rhetorical question. Whether or not it does, in fact, answer the question, I’ll be done with this blog series and move on to the exciting work of performance characterization of this new, incredibly important feature.
Anatomy of a “Stupid Accident”
Consider a scenario. Let’s say a DBA likes to use the CREATE DATABASE statement to create a database. Imagine that! Let’s pretend for a moment that DBAs can be very busy and operate in chaotic conditions. In the fog of this chaos, a DBA could, conceivably, pull the wrong database instance initialization file (e.g., init.ora or SPFILE) and use it when creating a database. Let’s pretend for a moment I was that busy, overworked DBA, and I’ll show you what happened in the following steps:
- I executed sqlplus from the bash command prompt.
- I directed sqlplus to execute a SQL script called cr_db.sql. Many will recognize this as the simple little create script I supply with SLOB.
- The cr_db.sql script uses a local initialization parameter file called create.ora
- sqlplus finished creating the database. NOTE: this procedure does not create a single user table.
- After the database was created I connected to the instance and forced the feature usage tracking views to be updated (thanks to Morgan’s Library for that know-how as well…remember, I’m a database platform engineer not a DBA so I learn all the time in that space).
- I executed a SQL script to report feature usage of only those features that match a predicate such as 'In-%'.
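For anyone who wants to reproduce the check, the refresh-and-report steps above can be sketched like this (the features.sql script mentioned below is essentially the second statement):

```sql
-- Force a refresh of the feature usage tracking data
EXEC dbms_feature_usage_internal.exec_db_usage_sampling(SYSDATE);

-- Report any usage recorded for the In-Memory option
SELECT name, detected_usages, currently_used
FROM   dba_feature_usage_statistics
WHERE  name LIKE 'In-%';
```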
This screen shot shows that the list of three asserted must-happen steps (provided me by a fellow Oaktable Network member) were not, in fact, the required recipe of doom. The output of the features.sql script proves that I didn’t even need to create a user table to trigger the feature.
The following screen shot shows what the cr_db.sql script does:
The following screenshot shows the scripts I used to update the feature usage tracking views and to report against same:
Stepping on a landmine doesn’t just happen. You have to sort of be on your feet and walking around for that to happen. In the same vein, triggering usage of the separately licensed Oracle Database 12c Release 12.1.0.2 In-Memory Column Store feature/option required me to be “on my feet and walking around” the landmine–as it were. Did I have to jump through hoops and be a raging, bumbling idiot to accidentally trigger usage of this feature? No. Nor, indeed, did I issue a single CREATE TABLE or ALTER TABLE DDL statement. What was my transgression? I simply grabbed the wrong database initialization parameter file from my repository–in the age-old I’m-only-human sort of way these things often happen.
To err to such a degree would certainly not be human, would it?
The following screenshot shows the parameter file I used to prove:
- You do not need to alter parameters and bounce an instance to trigger this feature usage in spite of BS-asserting feedback from experts.
- You don’t even have to create a single application table to trigger this feature usage.
This blog thread has made me feel a little like David Litchfield must surely have felt for challenging the Oracle9i-era claims that Oracle Database was impenetrable to database security breaches. We all know how erroneous those claims were. Unbreakable, can’t break it, can’t break in?
Folks, I know we all have our different reasons to be fans of Oracle technology–and, indeed, I am a fan. However, I’m not convinced that unconditional love of a supposedly omnipotent and omniscient god-like idol is all that healthy for the IT ecosystem. So, for that reason alone I have presented these findings. I hope it makes at least a couple of DBAs aware of how this licensed feature differs from other high-dollar features like Real Application Clusters in exactly what it takes to “use” the feature–and, moreover, how to prevent stepping on a landmine as it were.
…and now, I’m done with this series.
Filed under: oracle
In a previous post I described Adaptive Plans. Even though I prefer to show plans with the SQL Monitor active HTML format, I had to stick with dbms_xplan for that because SQL Monitoring did not show all the information about adaptive plans.
This has been fixed in Patchset 1, and I've run the same query to show the new feature.
First, an adaptive plan can be in two states: 'resolving', where all alternatives are still possible, and 'resolved', when the final plan has been chosen. It is resolved once the first execution's statistics collector has made the decision about the inflection point. We can see the state in the SQL Monitor header:
Here my plan is resolved because the first execution is finished.
The plan with rowsource statistics shows only the current plan, but the 'Plan Note' shows that it is an adaptive plan:
Now we have to go to the 'Plan' tab, which shows the equivalent of dbms_xplan.display_cursor:
Here the format is equivalent to format=>'adaptive'. It's the 'Full' plan, where all branches are shown but the inactive parts are grayed out. We have here the Statistics Collector after reading DEPARTMENTS, and the inactive hash join with full table scan of EMPLOYEES.
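For comparison, the same 'Full' view can be produced directly with dbms_xplan, which is the format this tab is equivalent to:

```sql
-- Show the full adaptive plan (active and inactive branches)
-- for the last statement executed in this session
SELECT * FROM TABLE(
  dbms_xplan.display_cursor(format => '+adaptive')
);
```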
Just choose the 'Final' Plan (or 'Current' if it is not yet resolved) to get only the active part:
I often prefer the Tabular format rather than the Graphical:
We have all the information: the 7 rows from DEPARTMENTS have gone through the STATISTICS COLLECTOR, and the NESTED LOOP with index access has been chosen. Note that this is different from the previous post, where the HASH JOIN with full table scan was chosen because the 7 rows were higher than the inflection point.
In my current example the nested loop is chosen because I have system statistics that cost full table scans higher:
DP: Found point of inflection for NLJ vs. HJ: card = 8.35
That inflection point of 8.35 is higher than my 7 rows from DEPARTMENTS.
Here is the whole sqlmon report: sqlmon.zip, and how I got it:
alter session set current_schema=HR;
select /*+ monitor */ distinct DEPARTMENT_NAME from DEPARTMENTS
join EMPLOYEES using(DEPARTMENT_ID)
where DEPARTMENT_NAME like '%ing' and SALARY>20000;
alter session set events='emx_control compress_xml=none';
set pagesize 0 linesize 10000 trimspool on serveroutput off long 100000000 longc 100000000 echo off feedback off
select dbms_sqltune.report_sql_monitor(report_level=>'all',type=>'html') from dual;
Note that I used the script exposed here, and I used the emx_control event to get the uncompressed xml, a tip I got from Tyler Muth.
July 25, 2014
A Guest Post by Meg Bear, Oracle Vice President
Last week, Oracle Social proudly announced the launch of LinkedIn support features for the Oracle Social Relationship Management (SRM) platform, and joined LinkedIn’s Certified Company Page Partners Program. It’s undeniable how valuable LinkedIn is to companies, with its user base of more than 300 million from 200+ countries and territories. Investis, an international digital corporate communications company, says LinkedIn accounts for 64 percent of all visits to corporate websites from social media sites.
In response to the announcement, Alan Lepofsky, VP & Principal Analyst at Constellation Research said, “LinkedIn Company and Showcase pages are an important destination for brands, so at a time where LinkedIn is removing access to their API from almost all partners, this new feature is an excellent addition for Oracle customers.”
From a customer perspective, Oracle Social customers now have the ability to publish, engage, automate, and analyze LinkedIn activities within the SRM platform. Customers can leverage SRM’s proprietary features including Dynamic Link Tracking and Smart Publishing, as well as existing SRM integrations with Oracle Marketing Cloud (Eloqua) for LinkedIn.
This new capability is in direct response to the field requesting LinkedIn support. Customers have enthusiastically welcomed this new feature by turning it on the same day as the release.
Social is Becoming a B2B Priority
From a business perspective this integration fills an important business need for B2B customers and companies, allowing them to execute a multichannel social strategy on the network that has distinguished itself as the social network for business. The LinkedIn release, coupled with integration with Oracle Marketing Cloud (Eloqua), gives B2B marketers the ability to promote Eloqua landing pages, generate leads, and benefit from digital body language via LinkedIn. I discussed in my Oracle Social Spotlight blog how LinkedIn support solidifies Oracle’s SRM platform “as the clear choice for B2B marketers.” This is the latest chapter in our integration story and the evolution of Oracle Social.
For more details:
LinkedIn Sales Materials
Talent Management Excellence Essentials is a go-to e-publication that covers talent, performance, and compensation strategies.
Mark Bennett, Director of Product Strategy at Oracle, contributed an article entitled “Workforce Reputation Management: Is social media the vital skill you aren’t tracking?”
He starts with the premise that employees on social media present opportunities, as well as risks for organizations. When their presence is managed well, it can become a rich source of influence, collaboration, and brand recognition.
“Workforce reputation management technology helps organizations discover, mobilize and retain talent by providing insight into the social reputation and influence of the workforce while also helping organizations monitor employee social media policy compliance and mitigate social media risk.”
Bennett says it is a tool that can support HR by helping to harness social media. He cites using it to improve talent processes, such as hiring, training and development, and discovering hidden talent. It can aid in uncovering a more complete picture of an employee’s knowledge, skills and attributes. Workforce reputation technology provides a clearer picture of how a candidate or employee is viewed by peers and the communities he / she works across.
“Social media holds untapped insights about your talent, brand and business, and workforce reputation management can help unlock them,” says Bennett. He asks his readers to think about how much more productive and efficient their organizations could be with this valuable information.
Read the full article on page 15 of the publication. No need to register or log in.
Plenty of runway. “The company estimates there is a $74 billion addressable market for its cloud apps,” Hurd said.
Getting customers on board. It is anticipated that many of the thousands of Oracle on-premises customers will extend their installations by adding cloud modules. Others will migrate completely to the cloud.
Oracle’s cloud army. Hurd reported Oracle has thousands of SaaS sales reps in more than 60 countries and about 19,000 certified consultants.
Fusion taking hold? “The confidence our sales force has in our (Fusion) products now over the past couple of years has leaped exponentially.” That confidence is translating into sales for Fusion.
All ISVs are welcome. “Oracle has 19 data centers around the world. That coverage, combined with its unified technology stack makes a full-fledged move a no-brainer for ISVs.”
Ready for the world. Oracle supports its cloud apps in 33 languages with localizations for more than 100 countries. Hurd said Oracle will continue to expand its data center footprint.
That’s all great, but Hurd isn’t satisfied. “Make no mistake, we’re laser-focused on being No. 1 in the cloud.”
Read the full article, which includes Hurd’s presentation.
Jeb Dasteel, senior vice president and chief customer officer at Oracle, describes the value of user groups this way: “Over and over again, we see that user group members are by far the most active and satisfied customers in our entire customer base. One of the best things that any Oracle customer can do to maximize the value from the investment is to join a user group.”
Sixteen hundred members of OHUG gathered in Las Vegas June 9-13, 2014, to learn, network, and grow. This is a high-energy, relationship-rich conference.
We have two items of interest to share with you from OHUG:
1) Scan the digest of tweets (#ohugglobal) from the conference.
2) Read four mini interviews with Oracle HCM Cloud customers. The questions they were asked included: What solutions are you using? What benefits are you realizing? And what advice do you have for others looking to move their systems to the cloud?
- WAXIE Sanitary Supply―Q&A with Melissa Halverson, Benefits & HRIS Manager
- eVerge Group―Q&A with Bob Moser, Senior Director, Oracle HCM Applications
- Sonepar―Q&A with Jim Duran, Training Manager
- National Marrow Donor Program―Q&A with Tiffany Lyons, Senior HR Technology & Process Analyst
How far-flung are the people with whom you work? It’s pretty rare in our global workplace to find a team with all members sharing the same location. In this new environment, the talent pool can literally be the whole world, but of course that scenario comes with its own set of challenges, according to Oracle’s Zach Thomas, Vice President of HCM.
The challenges include: How to unify employees? How to attract the best new talent? How to manage the organization’s culture, values, and goals? How to stay compliant with laws and practices in various countries? And how to cultivate high performing teams?
“At the heart of the process is technology — specifically, consumer-designed, intuitive tools available in the cloud and accessible through a gamut of devices,” says Thomas in an article published by Talent Management Magazine entitled “The 7 Rules for Selecting Global HR Technology.”
Here are the seven tenets, minus the intelligent discourse for each that Thomas includes:
- Find, recruit and hire the best talent no matter the location.
- Know each employee like he or she works right next to you.
- Empower social collaboration.
- Prioritize great performance.
- Streamline global payroll.
- Deliver compensation and rewards equitably and competitively.
- Make security and compliance central to HR processes.
Organizations can follow these seven principles to effectively manage global workplaces and, as a result, says Thomas, give themselves “a competitive advantage by helping pull in the best talent, increase retention, streamline organizational processes, stimulate productive collaboration and avoid legal snafus.”
Read the full article.
A Guest Post by Oracle's Senior Director, Brian Dayton
Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that: a 724% return on investment (ROI) when it implemented these Oracle cloud solutions.
Congratulations Apex IT!
Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent.
- Return on Investment – 724%
- Payback – 2 months
- Average annual benefit – $91,534
- Cost : Benefit Ratio – 1:48
In addition to the ROI and cost metrics the award calls out improvements in Apex IT’s business operations—across both Sales and Marketing teams:
- Improved ability to identify new opportunities and focus sales resources on higher-probability deals
- Reduced administration and manual lead tracking—resulting in more time selling and a net new client increase of 46%
- Increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud’s automation of campaign tracking and nurture programs
- Improved margins with more structured and disciplined sales processes—resulting in more effective deal negotiations
Please join us in congratulating Apex IT on this award and the business improvements it made.
Want More Details?
Don’t take our word for it. Read the full Apex IT ROI Case Study. You also can learn more about Apex IT’s business, including the company’s work with Oracle Sales and Marketing Cloud on behalf of its clients.
Cloud. Big data. Mobile. Social media.
These mega trends in technology have had a profound impact on our lives.
And now according to SVP Ravi Puri from North America Oracle Consulting Services, these trends are starting to converge and will affect us even more. His article, “Cloud, Analytics, Mobile, And Social: Convergence Will Bring Even More Disruption” appeared in Forbes on June 6.
For example, mobile and social are causing huge changes in the business world. Big data and cloud are coming together to help us with deep analytical insights. And much more.
These convergences are causing another wave of disruption, which can drive all kinds of improvements in such things as customer satisfaction, competitive advantage, and growth.
But, according to Puri, companies need to be prepared.
In this article, Puri urges companies to get out in front of the new innovations. He gives good direction on how to do so to accelerate time to value and minimize risk.
The post is a good thought leadership piece to pass on to your customers.
A Guest Post by Kathy Miedema, Oracle Market Research Analyst
Vice President Jeremy Ashley and his Oracle Applications UX team are continually asking: “What’s the best way to deliver simplicity, mobility, and extensibility so users can be even more productive?”
But didn’t we just release the simplified UI for Release 8? Isn’t that UI all about productivity?
Of course it is, but Ashley’s team is making regular, incremental improvements to the user experience.
“Our mobile devices and a need for simplicity are behind this continued evolution of the Oracle user experience design philosophy,” he explains. Ashley describes the newest approach to design as “glance, scan, commit.” And at the heart of that philosophy is the infolet.
In this post, Kathy Miedema, Oracle market research analyst, pulls back the curtain to reveal what’s going on in the Oracle UX research lab. It’s new and exciting. It also reflects the deep knowledge and investment we’re making in our user experience, which will keep us ahead of the competition.
Strategic design philosophy pushes Oracle cloud user experience to lofty new heights.
The rhythm of blog posts regarding database technology has remained consistent throughout the week. A few of those posts have been plucked by this Log Buffer Edition for your pleasure.
Sayan has shared a standalone sqlplus script for comparing plans.
Gartner Analysis: PeopleSoft Update Manager Delivers Significant Improvements to the Upgrade Tools and Processes.
Timely blackouts, of course, are essential to keeping the numbers up and (more importantly) preventing Target Down notifications from being sent out.
Are you experiencing analytics pain points?
Bug with xmltable, xmlnamespaces and xquery_string specified using bind variable.
SQL Server 2012 introduced columnstore indexes, which can immensely improve the performance of OLAP queries.
Restoring the SQL Server Master Database Even Without a Backup.
There are times when you need to write T-SQL code that creates specific T-SQL code and executes it. When you do this, you are creating dynamic T-SQL code.
A lot of numbers that we use every day, such as bank card numbers, identification numbers, and ISBN codes, have check digits.
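As a quick illustration of the idea behind that post (this sketch is mine, not from the linked article), the check digit on bank cards is computed with the Luhn algorithm: double every second digit from the right, subtract 9 from any doubled result above 9, sum everything, and pick the digit that makes the total a multiple of 10:

```python
def luhn_check_digit(payload: str) -> int:
    """Compute the Luhn check digit for a string of digits (used by bank cards)."""
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:      # these positions are doubled once the check digit is appended
            d *= 2
            if d > 9:
                d -= 9      # equivalent to summing the two digits of the product
        total += d
    return (10 - total % 10) % 10

def luhn_valid(number: str) -> bool:
    """Validate a full number whose last digit is the Luhn check digit."""
    return luhn_check_digit(number[:-1]) == int(number[-1])

print(luhn_valid("79927398713"))  # True - the classic Luhn test number
```

The same pattern (a weighted digit sum reduced modulo some base) underlies ISBN-10 and ISBN-13 check digits as well, only with different weights.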
SQL-only ETL using a bulk insert into a temporary table (SQL Spackle).
How MariaDB makes Stored Procedures usable.
DBaaS, OpenStack and Trove 101: Introduction to the basics.
MySQL Fabric is a tool included in MySQL Utilities that helps you manage your MySQL instances.
Showing all available MySQL data types when creating a new table with MySQL for Excel.
Why TokuDB hates Transparent HugePages.
Today is our day. July 25, 2014 marks the 15th annual System Administrator Appreciation Day. On this day we pause and take a moment to forget the impossible tasks, nonexistent budgets, and often unrealistic timelines, and say thank you to the people who keep everything working: system administrators.
So much of what has become part of everyday life, from doing our jobs to playing games online, shopping, and connecting with friends and family around the world, is possible in large part thanks to the tireless efforts of the system administrators who are in the trenches every hour of every day of the year, keeping the tubes clear and the packets flowing. The fact that technology has become so commonplace in our lives, and more often than not “just works,” has afforded us the luxury of forgetting (or not even knowing) the immense infrastructure complexity that system administrators work with to deliver the services we have come to rely on.
SysAdmin Appreciation Day started 15 years ago thanks to Ted Kekatos. According to Wikipedia, “Kekatos was inspired to create the special day by a Hewlett-Packard magazine advertisement in which a system administrator is presented with flowers and fruit-baskets by grateful co-workers as thanks for installing new printers. Kekatos had just installed several of the same model printers at his workplace.” Ever since then, SysAdmin Appreciation Day has been celebrated on the last Friday in July.
At Pythian, I have the privilege of being part of the Enterprise Infrastructure Services group. We are a SysAdmin dream team of the best of the best, from around the globe. Day in and day out, our team is responsible for countless servers, networks, and services that millions of people use every day.
To all my colleagues and to anyone who considers themselves a SysAdmin, regardless of which flavour – thank you, and know that you are truly doing work that matters.
Sample application, tested on the iOS platform - MAFMobileLocalApp_v2.zip. The original version of this application provides a Locations screen, where both City Name and Street Address are displayed:
We are going to customize the original application with MDS, without changing the source code directly. The customized application is based on two MDS customization (tip) layers - gold and silver level partners. Gold level partners can see State Name in addition to the Street Address:
While silver level partners are allowed to see only City Name:
To achieve MDS customization behaviour in a MAF application, you must first define an MDS customization class. This class must extend the standard MDS CustomizationClass and implement several methods. The important method is getName(): it defines the MDS customization name, and the same name will be used for customization (JDeveloper automatically reads this name at MAF application design time):
The customization class must be registered in the MDS section of adf-config.xml in order to be used for customization:
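For reference, the registration typically looks something like the sketch below (the fully qualified class name com.blog.mds.PartnerLevelCC is hypothetical; the cust-config element sits inside the mds-config section of adf-config.xml):

```xml
<adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
  <mds-config xmlns="http://xmlns.oracle.com/mds/config">
    <cust-config>
      <match path="/">
        <!-- Hypothetical fully qualified name of the customization class -->
        <customization-class name="com.blog.mds.PartnerLevelCC"/>
      </match>
    </cust-config>
  </mds-config>
</adf-mds-config>
```

The match path of "/" applies the customization class to all metadata objects in the application; a narrower path can be used to scope it to specific pages.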
Once the customization class is registered, we can switch JDeveloper to Customization Developer mode and start customizing the application:
Make sure the MDS layers are properly configured in the CustomizationLayerValues.xml file. The customization layer must have the same name as the one set in the customization class above, and the layer values should specify the different layers supported for customization:
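A minimal sketch of what that file can contain is shown below (the layer name PartnerLevel and the id prefixes are hypothetical; the layer name must match the value returned by getName() in the customization class):

```xml
<cust-layers xmlns="http://xmlns.oracle.com/mds/dt">
  <!-- Layer name must match the customization class getName() value -->
  <cust-layer name="PartnerLevel" id-prefix="pl">
    <cust-layer-value value="gold"   display-name="Gold Partnership"   id-prefix="g"/>
    <cust-layer-value value="silver" display-name="Silver Partnership" id-prefix="s"/>
  </cust-layer>
</cust-layers>
```

The display-name values are what JDeveloper shows in the Customization Developer layer picker, while the value attributes are what the customization class returns at runtime to select the active layer.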
I have defined two layers - gold and silver partnership. An MDS layer can be selected for customization; we start with Gold:
The Locations page is updated to include State Name:
However, the actual change is stored not in the Locations page, but in a generated MDS file for the Locations page - it keeps the delta for the changes made in customization mode. This file is generated under the Gold Partnership profile folder:
The State Name addition also required an update to the Page Definition file for Locations, so an extra MDS file is created with the delta information for the Locations Page Definition:
Next we can customize for the Silver layer - change the layer value:
Here we should keep only City Name and remove Street Address:
This change is reflected in the MDS delta for the Locations page, stored under the Silver Partnership level:
It is a well-known thing, and you can even find it on MOS, but I have a slightly simpler script for it, so I want to show a little example.
First of all, we need to run the script on the local database:
SQL> @transactions/global.sql
Enter filters(empty for any)...
Sid           :
Globalid mask :
Remote_db mask:

 INST_ID  SID SERIAL# USERNAME REMOTE_DB REMOTE_DBID TRANS_ID  DIRECTION   GLOBALID                                           EVENT
-------- ---- ------- -------- --------- ----------- --------- ----------- -------------------------------------------------- ---------------------------
       1  275    4469 XTENDER  BAIKAL     1742630060 8.20.7119 FROM REMOTE 4241494B414C2E63616336656437362E382E32302E37313139 SQL*Net message from client
Then we need to copy the GLOBALID of the session of interest and run the script on the database shown in the REMOTE_DB column, this time specifying the GLOBALID:
SQL> conn sys/syspass@baikal as sysdba
Connected.
======================================================================
======= Connected to SYS@BAIKAL(baikal)(BAIKAL)
======= SID        203
======= SERIAL#    38399
======= SPID       6536
======= DB_VERSION 18.104.22.168.0
======================================================================
SQL> @transactions/global.sql
Enter filters(empty for any)...
Sid           :
Globalid mask : 4241494B414C2E63616336656437362E382E32302E37313139
Remote_db mask:

 INST_ID  SID SERIAL# USERNAME REMOTE_DB REMOTE_DBID TRANS_ID  DIRECTION GLOBALID                                           STATE
-------- ---- ------- -------- --------- ----------- --------- --------- -------------------------------------------------- --------------------------
       1    9   39637 XTENDER  BAIKAL     1742630060 8.20.7119 TO REMOTE 4241494B414C2E63616336656437362E382E32302E37313139 [ORACLE COORDINATED]ACTIVE
It’s quite simple and fast.
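As a side note, the GLOBALID shown in the output above is just hex-encoded text: decoding it reveals the originating database name and the transaction id, which is why the same value can be matched on both databases. A quick sketch (not part of the script) decoding it in Python:

```python
def decode_globalid(globalid_hex: str) -> str:
    """Decode a hex-encoded GLOBALID into readable text."""
    return bytes.fromhex(globalid_hex).decode("ascii")

gid = "4241494B414C2E63616336656437362E382E32302E37313139"
print(decode_globalid(gid))  # BAIKAL.cac6ed76.8.20.7119
```

Note how the decoded string contains both the database name (BAIKAL, matching REMOTE_DB) and the transaction id (8.20.7119, matching TRANS_ID) from the script output.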