My favourite language is hard to pinpoint: is it C or is it PL/SQL? My first language was C and I love the elegance and expressiveness of C. Our product PFCLScan has its main functionality written in C. The....[Read More]
Posted by Pete On 23/07/14 At 08:44 PM
We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]
Posted by Pete On 25/06/14 At 09:41 AM
Yesterday we released the new version 2.0 of our product PFCLObfuscate. This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation, and now in version 2.0 we....[Read More]
Posted by Pete On 17/04/14 At 03:56 PM
I will be co-chairing/hosting a Twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here. The chat is done over Twitter so it is a little like the Oracle security round table sessions....[Read More]
Posted by Pete On 05/03/14 At 10:17 AM
We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled "PFCLScan Reseller Program". If....[Read More]
Posted by Pete On 29/10/13 At 01:05 PM
We released version 1.3 of PFCLScan our enterprise database security scanner for Oracle a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]
Posted by Pete On 18/10/13 At 02:36 PM
We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, new content and more. We are working to release another service update in the next couple....[Read More]
Posted by Pete On 04/09/13 At 02:45 PM
It has been a few weeks since my last blog post but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]
Posted by Pete On 28/08/13 At 05:04 PM
Back in the early 90s I ventured into virtual reality and was sick for a whole day afterwards.
We have since learned that people become queasy when their visual and vestibular systems get out of sync. You have to get the visual response lag below a certain threshold. It’s a very challenging technical problem which Oculus now claims to have cracked. With ever more sophisticated algorithms and ever faster processors, I think we can soon put this issue behind us.
Anticipating this, there has recently been a resurgence of interest in VR. Google’s Cardboard project (and Unity SDK for developers) makes it easy for anyone to turn their smartphone into a VR headset just by placing it into a cheap cardboard viewer. VR apps are also popping up for iPhones and 3D side-by-side videos are all over YouTube.
Some of my AppsLab colleagues are starting to experiment with VR again, so I thought I’d join the party. I bought a cheap cardboard viewer at a bookstore. It was a pain to put together, and my iPhone 5S rattles around in it, but it worked well enough to give me a taste.
I downloaded an app called Roller Coaster VR and had a wild ride. I could look all around while riding and even turn 180 degrees to ride backwards! To start the ride I stared intently at a wooden lever until it released the carriage.
My first usability note: between rides it’s easy to get turned around so that the lever is directly behind you. The first time I handed it to my wife she looked right and left but couldn’t find the lever at all. So this is a whole new kind of discoverability issue to think about as a designer.
Despite appearances, my roller coaster ride (and subsequent zombie hunt through a very convincing sewer) is research. We care about VR because it is an emerging interaction that will sooner or later have significant applications in the enterprise. VR is already being used to interact with molecules, tumors, and future buildings, use cases that really need all three dimensions. We can think of other use cases as well; Jake suggested training for service technicians (e.g. windmills) and accident re-creation for insurance adjusters.
That said, both Jake and I remain skeptical. There are many problems to work through before new technology like this can be adopted at an enterprise scale. Consider the idea of immersive virtual meetings. Workers from around the world, in home offices or multiple physical meeting rooms, could instantly meet all together in a single virtual room, chat naturally with each other, pick up subtle facial expressions, and even make presentations appear in mid air at the snap of a finger. This has been a holy grail for decades, and with Oculus being acquired by Facebook you might think the time has finally come.
Not quite yet. There will be many problems to overcome first, not all of them technical. In fact VR headsets may be the easiest part.
A few of the other technical problems:
- Bandwidth. I still can’t even demo simple animations in a web conference because the U.S. internet system is too slow. I could do it in Korea or Sweden or China or Singapore, but not here anytime soon. Immersive VR will require even more bandwidth.
- Cameras. If you want to see every subtle facial expression in the room, you’ll need cameras pointing at every face from every angle (or at least one 360 camera spinning in the center of the table). For those not in the room you’ll need more than just a web cam pointing at someone’s forehead, especially if you want to recreate them as a 3D avatar. (You’ll need better microphones too, which might turn out to be even harder.) This is technically possible now (Hollywood can do it), but it will be a while before it’s cheap, effortless, and ubiquitous.
- Choreography. Movie directors make it look easy, and even as individuals we’re pretty good at scanning a crowded room and following a conversation. But in a 3-dimensional meeting room full of 3-dimensional people there are many angles to choose from every second. We will expect our virtual eyes to capture at least as much detail as our real eyes, which instinctively turn to catch words and expressions before they happen. Even if we accept that any given participant will see a limited subset of what the overall system can see, creating a satisfying immersive presence will require at least some artificial intelligence. There are probably a lot of subtle challenges like this.
And a non-technical problem:
- Privacy. Any virtual meeting which can be transmitted can also be recorded and viewed by others not in the meeting. This includes off-color remarks (now preserved for the ages or at least for future office parties), unflattering camera angles, surreptitious nose picking, etc. We’ve learned from our own research that people *love* the idea of watching other people but are often uncomfortable about being watched themselves. Many people are just plain camera shy – and even less fond of microphones. Some coworkers are uncomfortable with our team’s weekly video conferences. “Glasshole” is now a word in the dictionary – and glassholes sometimes get beaten up.
So for virtual meetings to happen on an enterprise scale, all of the above problems will have to be solved and some of our attitudes will have to change. We’ll have to find the right balance as a society – and the lawyers will have to sign off on it. This may take a while.
But that doesn’t mean our group won’t keep pushing the envelope (and riding a few virtual roller coasters). We just have to balance our nerdish enthusiasm with a healthy dose of skepticism about the speed of enterprise innovation.
What are your thoughts about the viability of virtual reality in the enterprise? Your comments are always appreciated!
An Interview with Michelle Lapierre (pictured left) from Marriott Rewards conducted by Angela Wells, Oracle Social Product Manager
Have you checked out the best and brightest in marketing? The recent Global Markie Awards honored marketing excellence across a whole range of categories. We are so happy that Marriott Rewards, an Oracle Social Cloud customer, won the Markie award for Best Social Campaign. The category was based on: 1) Effective use of social marketing as a strategy to build brand awareness or turn customers and prospects into advocates; 2) Social media used in new and interesting ways or as the centerpiece of a successful new program; and 3) Seized social media opportunities and generated proven results.
So I (Angela Wells, pictured right) connected with Michelle Lapierre, Senior Director, Customer Experience and Social Media at Marriott Rewards, to hear about this award-winning campaign and what Marriott Rewards is doing next on social.
Oracle Social: Congratulations on recently winning a Markie Award for Best Social Campaign! Before we dive into the campaign specifics, can you describe your organization’s overall social media strategy? How has that strategy evolved?
Michelle: Marriott Rewards joined Facebook in December 2011 and quickly grew to be the largest and most engaged hotel loyalty brand on Facebook (www.facebook.com/marriottrewards). The continuing mission of Marriott Rewards is to engage target audiences around the world through social media channels in a consistent, authentic and meaningful way. The Marriott Rewards social media philosophy is to keep life at the center of the story, not hotels or programs or deals. We believe that our Facebook fans are not only our fans, but they are a diverse community of travelers, dreamers and storytellers. We see our social media channels as an outlet for their stories, not just our own.
Through an emphasis on compassionate and authentic community management and content development, we seek to engage, inspire and keep our Facebook friends coming back to the page on a regular basis. As such, we engage in extended conversations with our Facebook friends and listen to what they have to say, not just when they’re angry with us, but for all the reasons friends speak with each other.
Oracle Social: Can you tell us more about your award-winning campaign?
Michelle: The “30 Beds In 30 Days” sweepstakes was the second Facebook promotion in which Marriott Rewards gave away 30 Marriott Beds to 30 Facebook fans in 30 days. It’s great how much people love these beds! The sweepstakes was hosted on a Facebook-enabled microsite, which was responsively designed for desktop, mobile and tablet users. It also lived on the Marriott Rewards Facebook page as a tab. As with the 2012 sweepstakes, it was co-sponsored by partners ShopMarriott.com and the Marriott Rewards Credit Card by Chase.
Oracle Social: What were your goals for this campaign?
Michelle: Our primary goal was to increase fan acquisition and engagement on Facebook and also to drive traffic to partner sites like Chase Marriott Rewards Credit Card and ShopMarriott.com. Secondarily, we hoped to drive enrollments into the Marriott Rewards program.
Oracle Social: Clearly, this campaign was successful – not just for the award you won, but for the connections you strengthened with your Fans and Rewards numbers. Can you tell us about the results of this campaign?
Michelle: The 2013 “30 Beds in 30 Days” sweepstakes surpassed the results of the 2012 campaign in virtually every way, including Facebook fan acquisition, Rewards program enrollments, and traffic to our partners’ websites (shopmarriott.com and the Marriott Rewards Credit Card by Chase).
The “30 Beds in 30 Days” sweepstakes increased our Share of Voice compared to our competitors during the campaign. It also generated more positive sentiment around the program in general outside of the contest. According to Oracle Social Cloud’s sentiment analysis, mentions of the Marriott Rewards program and the campaign outside of Marriott channels with a clearly defined sentiment ran 90% positive during the 30 Beds campaign.
Oracle Social: It was great to follow along with this campaign on your Marriott Rewards Facebook page. How did the campaign get started?
Michelle: The original idea actually came from a fan raving about our beds on the Marriott Rewards Facebook page. Our Marriott Rewards Facebook community is an extremely engaged group of fans. Since the idea for the “30 Beds in 30 Days” sweepstakes came from a Facebook fan, we decided to host the sweepstakes through Facebook. It was a perfect opportunity to thank the fans for their engagement, and give them the opportunity to engage with the brand every day on Facebook during the promotion, since fans could enter every day.
Oracle Social: That campaign was a great success. We’re happy for everyone who won a bed, and everyone who was more exposed to Marriott Rewards through this campaign. So what’s next for Marriott Rewards on social? We are followers of Marriott Rewards’ new Twitter handle: @MarriottRewards. What are your plans for that?
Michelle: It’s true—we are on Twitter now! We hope everyone reading this starts following us on Twitter, too. We know that Twitter is another great way for us to connect with our customers and hear their stories. We have also used it as a great way to get out news – like did you check out #SayHiToWifi? One of our first tweets from the new handle announced the news that Marriott Rewards members will receive free in-room WiFi. And, my best advice is to stay connected – something very fun is coming soon!
SLOB 2.2 Not Generating AWR reports? Testing Large User Counts With Think Time? Think Processes and SLOB_DEBUG.
I’ve gotten a lot of reports of folks branching out into SLOB 2.2 large user count testing with the SLOB 2.2 Think Time feature. I’m also getting reports that some of the same folks are not getting the resultant AWR reports one expects from a SLOB test.
If you are not getting your AWR reports, there is the old issue I blogged about here (click here). That old issue was related to a Red Hat bug. However, if you have addressed that problem and are still not getting your AWR reports from large user count testing, it might be something as simple as the processes initialization parameter. After all, most folks have been accustomed to generating massive amounts of physical I/O with SLOB at low session counts.
I’ve made a few changes to runit.sh that will help future folks should they fall prey to the simple processes initialization parameter folly. The fixes will go into an upcoming SLOB release. The following is a screen shot of these fixes and what one should expect to see in such a situation in the future. In the meantime, do take note of SLOB_DEBUG as mentioned in the screenshot:
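As a back-of-the-envelope illustration of the folly (the helper and numbers below are mine, not part of SLOB): every SLOB session consumes a slot under the instance's processes initialization parameter, and Oracle's background and recursive sessions consume slots too, so a large user count can exhaust processes before any AWR snapshot is ever taken.

```python
# Illustrative sketch, not SLOB code: check whether a requested SLOB user
# count can plausibly fit under the instance's "processes" parameter.

def sessions_fit(requested_users, processes, background_headroom=100):
    """Rough feasibility check. background_headroom is an assumed allowance
    for Oracle background and recursive sessions, which also consume slots."""
    return requested_users + background_headroom <= processes

print(sessions_fit(512, 300))  # False: a 512-user run against processes=300 dies early
print(sessions_fit(128, 300))  # True
```

If the check fails, raise processes (and sessions) before blaming SLOB or AWR.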
Filed under: oracle
Let’s say you’ve narrowed your online search to two hotels near Times Square to celebrate New Year’s Eve. One of the websites gives you all the dimensions, distances, and details of the property. The other includes images of people having fun in the lobby, favorable quotes from customers, and a useful 'things to do and see' column. Which hotel has you, the customer, at the center of its marketing?
In this post published in Oracle Voice/Forbes, Jeb Dasteel, Oracle’s senior vice president and chief customer officer (pictured left), has a bit of wisdom to share with hotels and all other organizations selling products and services: “The difficult truth is that your customers don’t care about your innovation or your products; they care only about the result you can help them achieve.”
So in the scenario above, you’re not just booking a hotel room for December 31. You’re looking for a memorable experience in downtown New York City to ring in the new year. So you want the best place to achieve that result.
Dasteel teamed up with Amir Hartman of Mainstay on this article. Even though consumers want to focus on results rather than products, Mainstay's research shows that the majority of marketing dollars are spent on "developing assets and content describing product features." And Forrester Research reports that “close to 70% of business leaders find the materials companies provide them useless.” 70%? Wow!
So what to do? According to Dasteel, organizations need to communicate in a language that is meaningful to customers with a focus on business outcomes. Your customer (not your product) should be “the hero and centerpiece of the story you’re telling.”
We really hope you will study Dasteel’s insight and recommendations. It could prevent you from wasting money on meaningless marketing assets and contribute mightily to your success.
A Guest Post by Justin King, B2B E-Commerce Strategist (pictured left)

E-commerce has a unique value proposition in B2B organizations. Of course, it is about customer acquisition, conversion, and average order value. But it also serves a bigger, long-term purpose.
Customers are in more control than ever before. With 43 percent of Americans retiring in the next eight years, the next generation of buyers is emerging. And B2B purchasers of all types want more online services and tools. As a result, we are witnessing the convergence of customer portals, marketing, social, service and shopping cart sites into e-commerce. E-commerce has become the digital conduit to your customers. B2B companies that deliver an exceptional e-commerce customer experience offer more control and access to their back office.
In fact, most everything we have done to date in B2B e-commerce has unknowingly been to move functions from the back office to the customer. E-commerce is no longer just commerce. It is not just shopping carts and transactions. It is the primary customer-facing channel between customers and your back office.
As I explained in my December 9, 2014 post, the role of the ERP is certainly increasing. However, there is more to the back office than just ERP.
It Takes an Ecosystem
If it takes a village to raise a child, it takes an ecosystem to support a new customer-facing channel. A traditional ecosystem is a community of organisms linked together by nutrient and energy flows. The B2B e-commerce ecosystem is a community of systems connected together to deliver a user experience that adds value for your customers and makes their jobs easier. That includes:
- Enterprise Resource Planning (ERP)
- Configure, Price, and Quote (CPQ)
- Customer Relationship Management (CRM)
- Order Management System (OMS)
- Product information management (PIM)
- Content Management System (CMS)
- Experience management
- Marketing automation
Why is the Ecosystem So Big?
Everything you know about your customers and products sits in your back office, including order history, spending patterns, customer segmentation, product information and contracts, to name a few. You need all of that data to build an excellent customer experience. Great customer experiences increase conversion and revenue. Most importantly, great customer experiences make B2B users’ jobs easier, which yields loyalty. Loyal customers return to your site and will spend more.
Ecosystems are dynamic entities. They change. You will introduce new systems, upgrade some, and deprecate others.
So How Do We Manage This Changing Ecosystem?
First, recognize the controlling factors. Ecosystems are controlled both by external and internal factors.
- External factors include conditioned expectations that customers bring to your site from at-home purchasing. With more customer control comes a proliferation of devices and types of experiences they choose to use.
- Internal factors include the complexity and readiness level of your ecosystem. I have a customer with more than 200 ERP systems as a result of multiple acquisitions. They are extremely sophisticated with various levels of readiness. Internal factors affect how fast an organization can move.
Next, start now and move quickly:
- Begin with the basics: If the goal is to add value and make your customers’ jobs easier, you must deliver on the basics. Help them find information on your website, focus on building great product information and supporting content, and make transacting intuitive. Give your customers a few tools outside of the purchase path like viewing invoices, purchase orders, or punchout.
- Make continuous improvements in your back office: The data you have in your back office may not be customer ready. Make it better bit by bit. Start planning the types of innovative services you might offer your customer in the future and put plans in place to ready your ecosystem for those future tools.
- Separate form from function: Integration will become a dirty word at some point in your e-commerce project. If you rely on hardcore integration techniques whenever you introduce a new system or platform, your time to market will grind to a halt. By separating the experience from the content, data, rules and workflows, you can acquire new systems and data effortlessly. And your internal staff can quickly create new experiences for all kinds of devices and form factors.
Finally, remember an engaging customer experience is about adding value to B2B buyers. Visit them, interview them, watch them work and do prototype testing in a usability lab. Innovate on behalf of your customer and then write me (firstname.lastname@example.org) and tell me about it.
Gary Lang, Blackboard’s senior vice president in charge of product development and cloud operations, has announced his resignation and plans to join Amazon. Gary took the job with Blackboard in June 2013 and, along with CEO Jay Bhatt and SVP of Product Management Mark Strassman, formed the core management team that had worked together previously at AutoDesk. Gary led the reorganization effort to bring all product development under one organization, a core component of Blackboard’s recent strategy.
Michael described Blackboard’s new product moves toward cloud computing and an entirely new user experience (UX) for the Learn LMS, and Gary was the executive in charge of these efforts. These significant changes have yet to fully roll out to customers (public cloud in pilot, new UX about to enter pilot). Gary was also added to the IMS Global board of directors in July 2014 – I would expect this role to change as well given the move to Amazon.
At the same time, VP Product Management / VP Market Development Brad Koch has also resigned from Blackboard. Brad came to Blackboard from the ANGEL acquisition. Given his long-term central role leading product definition and being part of Ray Henderson’s team, Brad’s departure will also have a big impact. Brad’s LinkedIn page shows that he has left Blackboard, but it does not yet show his new company. I’m holding off reporting until I can get public confirmation.
Blackboard provided the following statement from CEO Jay Bhatt.
The decision to leave Blackboard for an opportunity with Amazon was a personal one for Gary that allows him to return home to the West Coast. During his time here, Gary has made significant contributions to the strategic direction of Blackboard and the technology we deliver to customers. The foundation he has laid, along with other leaders on our product development team, will allow us to continue to drive technical excellence for years to come. We thank him for his leadership and wish him luck as he embarks on this new endeavor.
- The two resignations are unrelated as far as I can tell.
- Starting at Pearson, then at ANGEL, finally at Blackboard
The post Blackboard’s SVP of Product Development Gary Lang Resigns appeared first on e-Literate.
At Rittman Mead R&D, we have the privilege of solving some of our clients’ most challenging data problems. We recently built a set of customized data products that leverage the power of Oracle and Cloudera platforms and wanted to share some of the fun we’ve had in creating unique user experiences. We’ve been thinking about how we can lean on our efforts to help make the holidays even more special for the extended Rittman Mead family. With that inspiration, we had several questions on our minds:
- How can we throw an amazing holiday party?
- What gifts can we give that we can be sure our coworkers, friends, and family will enjoy?
- What gifts would we want for ourselves?
After a discussion over drinks, the answers became clear. We decided to create a tool that uses data analytics to help you create exceptional cocktails for the holidays.
Here is how we did it. First, we analyzed the cocktail recipes of three world-renowned cocktail bars: PDT, Employees Only, and Death & Co. We then turned their drink recipes into data and got to work on the Bar Optimizer, which uses analytics on top of that data to help you make the holiday season tastier than ever before.
To use the Bar Optimizer, enter the liquors and other ingredients that you have on hand to see what drinks you can make. It then recommends additional ingredients that let you create the largest variety of new drinks. You can also use this feature to give great gifts based on others’ liquor cabinets. Finally, try using one of our optimized starter kits to stock your bar for a big holiday party. We’ve crunched the numbers to find the fewest bottles that can make the largest variety of cocktails.
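The "fewest bottles, largest variety of cocktails" problem is essentially greedy set cover. Here is a toy sketch of the idea (the recipes below are invented for illustration; the real Bar Optimizer works from the digitized PDT, Employees Only, and Death & Co recipe data):

```python
# Toy greedy set-cover sketch of a "starter kit" recommendation: pick the
# ingredients that unlock the most complete cocktail recipes.

def starter_kit(recipes, max_bottles):
    """Greedily pick up to max_bottles ingredients, preferring the one that
    completes the most recipes (ties: most recipe slots covered, then name)."""
    chosen = set()
    all_ingredients = {i for need in recipes.values() for i in need}
    for _ in range(max_bottles):
        remaining = sorted(all_ingredients - chosen)  # sorted for stable ties
        if not remaining:
            break
        def gain(ing):
            trial = chosen | {ing}
            full = sum(set(need) <= trial for need in recipes.values())
            partial = sum(len(set(need) & trial) for need in recipes.values())
            return (full, partial)
        chosen.add(max(remaining, key=gain))
    makeable = [name for name, need in recipes.items() if set(need) <= chosen]
    return chosen, makeable

recipes = {
    "Negroni":      ["gin", "campari", "sweet vermouth"],
    "Martini":      ["gin", "dry vermouth"],
    "Manhattan":    ["rye", "sweet vermouth", "bitters"],
    "Boulevardier": ["bourbon", "campari", "sweet vermouth"],
}
kit, drinks = starter_kit(recipes, 4)
print(sorted(kit))  # the four bottles chosen for maximum coverage
print(drinks)       # the cocktails those bottles can make
```

The same scoring function, run against someone else's liquor cabinet, is what turns the tool into a gift recommender.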
Click the annotated screenshot above for details, and contact us if you would like more information about how we build products that take your data beyond dashboards.
A conversation I have too often with vendors goes something like:
- “That confidential thing you told me is interesting, and wouldn’t harm you if revealed; probably quite the contrary.”
- “Well, I guess we could let you mention a small subset of it.”
- “I’m sorry, that’s not enough to make for an interesting post.”
That was the genesis of some tidbits I recently dropped about WibiData and predictive modeling, especially but not only in the area of experimentation. However, Wibi just reversed course and said it would be OK for me to tell more or less the full story, as long as I note that we’re talking about something that’s still in beta test, with all the limitations (to the product and my information alike) that beta implies.
As you may recall:
- WibiData started out with a rich technology stack …
- … but decided to cast itself as an application company …
- … whose first vertical market is retailing.
With that as background, WibiData’s approach to predictive modeling as of its next release will go something like this:
- There is still a strong element of classical modeling by data scientists/statisticians, with the models re-scored in batch, perhaps nightly.
- But of course at least some scoring should be done as real-time as possible, to accommodate fresh data such as:
- User interactions earlier in today’s session.
- Technology for today’s session (device, connection speed, etc.)
- Today’s weather.
- WibiData Express is/incorporates a Scala-based language for modeling and query.
- WibiData believes Express plus a small algorithm library gives better results than more mature modeling libraries.
- There is some confirming evidence of this …
- … but WibiData’s customers have by no means switched over yet to doing the bulk of their modeling in Wibi.
- WibiData will allow line-of-business folks to experiment with augmentations to the base models.
- Supporting technology for predictive experimentation in WibiData will include:
- Automated multi-armed bandit testing (in previous versions even A/B testing was manual).
- A facility for allowing fairly arbitrary code to be included into otherwise conventional model-scoring algorithms, where conventional scoring models can come:
- Straight from WibiData Express.
- Via PMML (Predictive Modeling Markup Language) generated by other modeling tools.
- An appropriate user interface for the line-of-business folks to do certain kinds of injecting.
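The automated multi-armed bandit testing mentioned above can be sketched with Thompson sampling, one standard bandit algorithm. This is my illustrative choice, not a description of WibiData's actual mechanism:

```python
import random

# Thompson sampling sketch: each variant keeps success/failure counts, and
# traffic shifts automatically toward the variant that performs better,
# unlike a fixed 50/50 A/B split.

def thompson_pick(arms):
    """arms maps name -> [successes, failures]. Sample a Beta(s+1, f+1) draw
    per arm and play the arm with the highest draw."""
    return max(arms, key=lambda a: random.betavariate(arms[a][0] + 1, arms[a][1] + 1))

def simulate(true_rates, rounds=5000, seed=7):
    random.seed(seed)
    arms = {name: [0, 0] for name in true_rates}
    for _ in range(rounds):
        arm = thompson_pick(arms)
        if random.random() < true_rates[arm]:
            arms[arm][0] += 1   # success, e.g. a conversion
        else:
            arms[arm][1] += 1   # failure
    return arms

# Two page variants with conversion rates unknown to the bandit.
counts = simulate({"control": 0.05, "variant": 0.15})
plays = {name: s + f for name, (s, f) in counts.items()}
```

Most of the 5,000 trials end up allocated to the better variant while the system is still learning, which is the practical appeal over manual A/B testing.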
Let’s talk more about predictive experimentation. WibiData’s paradigm for that is:
- Models are worked out in the usual way.
- Businesspeople have reasons for tweaking the choices the models would otherwise dictate.
- They enter those tweaks as rules.
- The resulting combination — models plus rules — is executed and hence tested.
If those reasons for tweaking are in the form of hypotheses, then the experiment is a test of those hypotheses. However, WibiData has no provision at this time to automagically incorporate successful tweaks back into the base model.
What might those hypotheses be like? It’s a little tough to say, because I don’t know in fine detail what is already captured in the usual modeling process. WibiData gave me only one real-life example, in which somebody hypothesized that shoppers would be in more of a hurry at some times of day than others, and hence would want more streamlined experiences when they could spare less time. Tests confirmed that was correct.
That said, I did grow up around retailing, and so I’ll add:
- Way back in the 1970s, Wal-Mart figured out that in large college towns, clothing in the football team’s colors was wildly popular. I’d hypothesize such a rule at any vendor selling clothing suitable for being worn in stadiums.
- A news event, blockbuster movie or whatever might trigger a sudden change in/addition to fashion. An alert merchant might guess that before the models pick it up. Even better, she might guess which psychographic groups among her customers were most likely to be paying attention.
- Similarly, if a news event caused a sudden shift in buyers’ optimism/pessimism/fear of disaster, I’d test a response to that immediately.
Finally, data scientists seem to still be a few years away from neatly solving the problem of multiple shopping personas — are you shopping in your business capacity, or for yourself, or for a gift for somebody else (and what can we infer about that person)? Experimentation could help fill the gap.
The default value of METHOD_OPT from 10g onwards is ‘FOR ALL COLUMNS SIZE AUTO’.
The definition of AUTO as per the Oracle documentation is:
AUTO: Oracle determines the columns to collect histograms based on data distribution and the workload of the columns.
This basically implies that Oracle will automatically create histograms on those columns which have skewed data distribution and there are SQL statements referencing those columns.
However, this gives rise to a problem: Oracle generates too many unnecessary histograms.
– Create a table with skewed data distribution in two columns
SQL> drop table hr.skewed purge;

SQL> create table hr.skewed
     ( empno  number,
       job_id varchar2(10),
       salary number
     );

SQL> insert into hr.skewed
     select employee_id, job_id, salary
       from hr.employees;
– On gathering statistics for the table using default options, it can be seen that a histogram is not gathered on any column, although the data distribution in columns JOB_ID and SALARY is skewed
SQL> exec dbms_stats.gather_table_stats('HR','SKEWED');

SQL> col table_name  for a10
SQL> col column_name for a10
SQL> select table_name, column_name, histogram
       from dba_tab_columns
      where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     NONE
SKEWED     EMPNO      NONE
– Let’s now issue some queries against the table based on the three columns, gathering statistics after each, to verify that histograms get automatically created only on columns with skewed data distribution.
– No histogram gets created if column EMPNO, whose data is distributed uniformly, is queried
SQL> select * from hr.skewed where empno = 100;

SQL> exec dbms_stats.gather_table_stats('HR','SKEWED');

SQL> select table_name, column_name, histogram
       from dba_tab_columns
      where table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     NONE
SKEWED     EMPNO      NONE
– A histogram gets created on the JOB_ID column as soon as we search for records by JOB_ID, since the data distribution in the JOB_ID column is non-uniform
SQL> select * from hr.skewed where job_id = 'CLERK';

SQL> exec dbms_stats.gather_table_stats('HR','SKEWED');

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
– A histogram gets created on the SALARY column when a search is made for employees drawing a salary of less than 10000, since the data distribution in the SALARY column is non-uniform.
SQL> select * from hr.skewed where salary < 10000;

SQL> exec dbms_stats.gather_table_stats('HR','SKEWED');

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     FREQUENCY
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
Thus, gathering statistics using the default options, whether manually or as part of the automatic maintenance task, can create histograms on every column that has a skewed data distribution and has appeared in a search clause even once. That is, Oracle builds even the histograms you didn’t ask for. Some of these histograms may not be needed by the application and are therefore undesirable: computing histograms is a resource-intensive operation, and they may even degrade performance through their interaction with bind peeking.
The solution is to employ the FOR ALL COLUMNS SIZE REPEAT option of the METHOD_OPT parameter, which prevents deletion of existing histograms and collects histograms only on the columns that already have them.
The first step is to eliminate the unwanted histograms so that only the desired columns have one.
Well, there are two options:
OPTION-I: Delete histograms from the unwanted columns and use the REPEAT option henceforth, which collects histograms only on the columns that already have them.
– Delete unwanted histogram for SALARY column
SQL> exec dbms_stats.gather_table_stats('HR','SKEWED', -
     METHOD_OPT => 'for columns salary size 1');

– Verify that the histogram on the SALARY column has been deleted

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
– Issue a SQL statement with the SALARY column in the WHERE clause and verify that gathering stats using the REPEAT option retains the histogram on the JOB_ID column and does not create one on the SALARY column.
SQL> select * from hr.skewed where salary < 10000;

SQL> exec dbms_stats.gather_table_stats('HR','SKEWED', -
     METHOD_OPT => 'for columns salary size REPEAT');

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
OPTION-II: Wipe out all histograms and manually add only the desired ones. Use the REPEAT option henceforth, which collects histograms only on the columns that already have one.
– Delete histograms on all columns
SQL> exec dbms_stats.gather_table_stats('HR','SKEWED', -
     METHOD_OPT => 'for all columns size 1');
– Verify that histograms on all columns have been dropped
SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     NONE
SKEWED     EMPNO      NONE
– Create histogram only on the desired JOB_ID column
SQL> exec dbms_stats.gather_table_stats('HR','SKEWED', -
     METHOD_OPT => 'for columns JOB_ID size AUTO');
– Verify that histogram has been created on JOB_ID
SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
– Verify that gathering stats using the REPEAT option maintains the histogram only on the JOB_ID column, on which one already exists
SQL> exec dbms_stats.gather_table_stats('HR','SKEWED', -
     METHOD_OPT => 'for columns salary size REPEAT');

SQL> select table_name, column_name, histogram
     from   dba_tab_columns
     where  table_name = 'SKEWED';

TABLE_NAME COLUMN_NAM HISTOGRAM
---------- ---------- ---------------
SKEWED     SALARY     NONE
SKEWED     JOB_ID     FREQUENCY
SKEWED     EMPNO      NONE
That is, now Oracle will no longer make histograms you didn’t ask for.
– Finally, change the preference for the METHOD_OPT parameter of the automatic stats gathering job from the default value of AUTO to REPEAT, so that it gathers histograms only for the columns that already have one.
– Get Current value –
SQL> select dbms_stats.get_prefs('METHOD_OPT') from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
-----------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO
– Set the preference to REPEAT –
SQL> exec dbms_stats.set_global_prefs ('METHOD_OPT','FOR ALL COLUMNS SIZE REPEAT');
– Verify –
SQL> select dbms_stats.get_prefs('METHOD_OPT') from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
-----------------------------------------------------------------------
FOR ALL COLUMNS SIZE REPEAT
From now on, gathering statistics, whether manually or automatically, will not create any new histograms, while all the existing ones are retained.
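If changing the global preference is too sweeping, the same REPEAT behaviour can be scoped to individual tables instead. A minimal sketch using DBMS_STATS.SET_TABLE_PREFS (available from 11g onwards), applied here to the demo table from above:

```sql
-- Sketch: set REPEAT only for HR.SKEWED; other tables keep the global preference
SQL> exec dbms_stats.set_table_prefs('HR','SKEWED', -
     'METHOD_OPT','FOR ALL COLUMNS SIZE REPEAT');

-- Verify the table-level preference
SQL> select dbms_stats.get_prefs('METHOD_OPT','HR','SKEWED') from dual;
```

Table-level preferences take precedence over the global preference for both manual and automatic statistics gathering on that table.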
I hope this post is useful.
The post Create Histograms On Columns That Already Have One appeared first on ORACLE IN ACTION.
This is Part II in a series. Part I can be found here. Part I covered a very simple case of SLOB data loading. This installment is aimed at how one can use SLOB as a platform test for a unique blend of concurrent, high-bandwidth data loading, index creation and CBO statistics gathering.

Put SLOB On The Box – Not In a Box
As a reminder, the latest SLOB kit is always available here: kevinclosson.net/slob .
Often I hear folks speak of what SLOB is useful for, and the list is really short. The list is so short that a single acronym seems to cover it – IOPS, just IOPS and nothing else. SLOB is useful for so much more than just testing a platform for IOPS capability. I aim to make a few blog installments to make this point.

SLOB for More Than Physical IOPS
I routinely speak about how to use SLOB to study host characteristics such as NUMA and processor threading (e.g., Simultaneous Multithreading on modern Intel Xeons). This sort of testing is possible when the sum of all SLOB schemas fit into the SGA buffer pool. When testing in this fashion, the key performance indicators (KPI) are LIOPS (Logical I/O per second) and SQL Executions per second.
This blog post is aimed at suggesting yet another manner of platform testing with SLOB–specifically concurrent bulk data loading.
The SLOB data loader (~SLOB/setup.sh) offers the ability to test non-parallel, concurrent table loading, index creation and CBO statistics collection.
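For reference, the loader is driven from the shell. A typical invocation, assuming the standard SLOB kit layout where setup.sh takes the tablespace name and the number of schemas to load as its arguments, would look something like:

```
$ cd ~/SLOB
$ sh ./setup.sh IOPS 512    # load 512 SLOB schemas into the IOPS tablespace
```

The degree of loading concurrency within that run is then governed by slob.conf, not by the command line.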
In this blog post I’d like to share a “SLOB data loading recipe kit” for those who wish to test high performance SLOB data loading. The contents of the recipe will be listed below. First, I’d like to share a platform measurement I took using the data loading recipe. The host was a 2s20c40t E5-2600v2 server with 4 active 8GFC paths to an XtremIO array.
The tar archive kit I’ll refer to below has the full slob.conf in it, but for now I’ll just use a screen shot. Using this slob.conf and loading 512 SLOB schema users generates 1TB of data in the IOPS tablespace. Please note the attention I’ve drawn to the slob.conf parameters SCALE and LOAD_PARALLEL_DEGREE. The size of the aggregate of SLOB data is a product of SCALE and the number of schemas being loaded. I drew attention to LOAD_PARALLEL_DEGREE because that is the key setting in increasing the concurrency level during data loading. Most SLOB users are quite likely not accustomed to pushing concurrency up to that level. I hope this blog post makes doing so seem more worthwhile in certain cases.
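For readers without the kit to hand, the relevant slob.conf settings would look something like the sketch below. The values are illustrative, not the exact kit contents: with an 8KB block size, a SCALE of 262144 blocks works out to roughly 2GB per schema, and 512 schemas at 2GB each is approximately 1TB; the LOAD_PARALLEL_DEGREE value is simply an example of pushing concurrency well above what most users are accustomed to.

```
# Illustrative slob.conf fragment -- consult the recipe kit for the real file
SCALE=262144              # blocks per schema (~2GB at 8KB block size)
LOAD_PARALLEL_DEGREE=32   # tables loaded concurrently per batch
```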
The following is a screenshot of the output from the SLOB 2.2 data loader. The screenshot shows that the concurrent data loading portion of the procedure took 1,474 seconds. On the surface that would appear to be a data loading rate of approximately 2.5 TB/h. One thing to remember, however, is that SLOB data is loaded in batches controlled by LOAD_PARALLEL_DEGREE. Each batch loads LOAD_PARALLEL_DEGREE tables, then creates a unique index on each and performs CBO statistics gathering. So the overall “data loading” time is really data loading plus these ancillary tasks. To put it another way, it’s true this is a 2.5 TB/h data loading use case, but there is more going on than simple data loading. If this were a pure and simple data loading stream, the rate would be much higher than 2.5 TB/h. I’ll likely blog about that soon.
As the screenshot shows, the latest SLOB 2.2 data loader isolates the concurrent loading portion of setup.sh. In this case, the seed table (user1) was loaded in 20 seconds and then the concurrent loading portion completed in 1,474 seconds.

That Sounds Like A Good Amount Of Physical I/O But What’s That Look Like?
To help you visualize the physical I/O load this manner of testing places on a host, please consider the following screenshot. The screenshot shows peaks in the 30-second interval vmstat reporting of approximately 2.8 GB/s of physical read I/O combined with about 435 MB/s of write I/O, for an average of about 3.2 GB/s. This host has but 4 active 8GFC fibre channel paths to storage, so that particular bottleneck is simple to solve by adding another 4-port HBA! Note also how very little host CPU is utilized to generate the 4x8GFC-saturating workload. User-mode cycles are but 15% and kernel-mode utilization was 9%. It’s true that 24% sounds like a lot; however, this is a 2s20c40t host, so 24% accounts for only 9.6 processor threads – or about 5 cores’ worth of bandwidth. There may be some readers who were not aware that 5 “paltry” Ivy Bridge Xeon cores are capable of driving this much data loading!
NOTE: The SLOB method is centered on the sparse blocks. Naturally, fewer CPU cycles are required for loading data into sparse blocks.
Please note, the following vmstat shows peaks and valleys. I need to remind you that SLOB data loading consists of concurrent processing of not only data loading (Insert as Select) but also unique index creation and CBO statistics gathering. As one would expect, I/O will wane as the loading process shifts from the bulk data load to the index creation phase and then back again.
Finally, the following screenshot shows the very minimalist init.ora settings I used during this testing.
The recipe kit can be found in the following downloadable tar archive. The kit contains the necessary files one would need to reproduce this SLOB data loading time so long as the platform has sufficient performance attributes. The tar archive also has all output generated by setup.sh as the following screenshot shows:
The SLOB 2.2 data loading recipe kit can be downloaded here. Please note, the screenshot immediately above shows the md5 checksum for the tar archive.

Summary
This post shows how one can tune the SLOB 2.2 data loading tool (setup.sh) to load 1 terabyte of SLOB data in well under 25 minutes. I hope this is helpful information and that, perhaps, it will encourage SLOB users to consider using SLOB for more than just physical IOPS testing.
Filed under: oracle