
Feed aggregator

Oracle OpenWorld and JavaOne 2014 Cometh

Oracle AppsLab - Mon, 2014-09-22 11:28

This time next week, we’ll be in the thick of the Oracle super-conference, the combination of Oracle OpenWorld and JavaOne.

This year, our team and our larger organization, Oracle Applications User Experience, will have precisely a metric ton of activities during the week.

For the first time, our team will be doing stuff at JavaOne too. Anthony (@anthonyslai) will be talking on Monday about the IFTTPi workshop we built for the Java team for Maker Faire back in May, and Tony will be showing those workshop demos in the JavaOne OTN Lounge at the Hilton all week.

If you’re attending either show or both, stop by, say hello and ask about our custom wearable.

Speaking of wearables, Ultan (@ultan) will be hosting a Wearables Meetup a.k.a. Dress Code 2.0 in the OTN Lounge at OpenWorld on Tuesday, September 30 from 4-6 PM. We’ll be there, and here’s what to expect:

  • Live demos of wearables proof-of-concepts integrated with the Oracle Java Cloud.
  • A wide selection of wearable gadgets available to try on for size.
  • OAUX team chatting about use cases, APIs, integrations, UX design, fashion and how you can use OTN resources to build your own solutions.

Update: Here are Bob (@OTNArchBeat) and Ultan talking about the meetup.

Here’s the list of all the OAUX sessions:

Oracle Applications Cloud User Experiences: Trends, Tailoring, and Strategy

Presenters: Jeremy Ashley, Vice President, Applications User Experience; Jatin Thaker, Senior Director, User Experience; and Jake Kuramoto, Director, User Experience

The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. See what we mean by simplicity as we demo our latest cloud user experiences and show you only the essential information you need for your work. Learn how we are addressing mobility, by delivering the best user experience for each device as you access your enterprise data in the cloud. We’ll also talk about the future of enterprise experiences and the latest trends we see emerging in the consumer market. And finally, understand what we mean by extensibility after hearing a high-level overview of the tools designed for tailoring the cloud user experience. With this team, you will always get a glimpse into the future, so we know you will be inspired about the future of the cloud.

Session ID: CON7198
Date: Monday, Sept. 29, 2014
Time: 2:45 p.m. – 3:30 p.m.
Location: Moscone West – 3007

Learn How to Create Your Own Java and Internet of Things Workshop

Presenter: Anthony Lai, User Experience Architect, Oracle

This session shows how the Applications User Experience team created an interactive workshop for the Oracle Java Zone at Maker Faire 2014. Come learn how the combination of the Raspberry Pi and Embedded Java creates a perfect platform for the Internet of Things. Then see how Java SE, Raspi, and a sprinkling of user experience expertise engaged Maker Faire visitors of all ages, enabling them to interact with the physical world by using Java SE and the Internet of Things. Expect to play with robots, lights, and other Internet-connected devices, and come prepared to have some fun.

Session ID: JavaOne 2014, CON7056
Date: Monday, Sept. 29, 2014
Time: 4 p.m. – 5 p.m.
Location: Parc 55 – Powell I/II

Oracle HCM Cloud User Experiences: Trends, Tailoring, and Strategy

Presenters: Jeremy Ashley, Vice President, Applications User Experience, Oracle; Aylin Uysal, Director, Human Capital Management User Experience, Oracle

The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. See what we mean by simplicity as we demo our latest cloud user experiences and show you only the essential information you need for your work. Learn how we are addressing mobility, by delivering the best user experience for each device as you access your enterprise data in the cloud. We’ll also talk about the future of enterprise experiences and the latest trends we see emerging in the consumer market. And finally, understand how you can extend with the Oracle tools designed for tailoring the cloud user experience. With this team, you will always get a glimpse into the future. Come and get inspired about the future of the Oracle HCM Cloud.

Session ID: CON8156
Date: Tuesday, Sept. 30, 2014
Time: 12:00 p.m. – 12:45 p.m.
Location: Palace – Presidio

Oracle Sales Cloud: How to Tailor a Simple and Efficient Mobile User Experience

Presenters: Jeremy Ashley, Vice President, Applications User Experience, Oracle; Killian Evers, Senior Director, Applications User Experience, Oracle

The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. In this session, learn how Oracle is addressing mobility by delivering the best user experience for each device as you access your enterprise data in the cloud. Hear about the future of enterprise experiences and the latest trends Oracle sees emerging in the consumer market. You’ll understand what Oracle means by extensibility after getting a high-level overview of the tools designed for tailoring the cloud user experience, and you’ll also get a glimpse into the future of Oracle Sales Cloud.

Session ID: CON7172
Date: Wednesday, Oct. 1, 2014
Time: 4:30 p.m. – 5:15 p.m.
Location: Moscone West – 2003

Oracle Applications Cloud: First-Time User Experience

Presenters: Laurie Pattison, Senior Director, User Experience; and Mindi Cummins, Principal Product Manager, both of Oracle

So you’ve bought and implemented Oracle Applications Cloud software. Now you want to get your users excited about using it. Studies show that one of the biggest obstacles to meeting ROI objectives is user acceptance. Based on working directly with thousands of real users, this presentation discusses how Oracle Applications Cloud is designed to get your users excited to try out new software and be productive on a new release ASAP. Users say they want to be productive on a new application without spending hours and hours of training, experiencing death by PowerPoint, or reading lengthy manuals. The session demos the onboarding experience and even shows you how a business user, not a developer, can customize it.

Session ID: CON7972
Date: Thursday, Oct. 2, 2014
Time: 12 p.m. – 12:45 p.m.
Location: Moscone West – 3002

Using Apple iBeacons to Deliver Context-Aware Social Data

Presenters: Anthony Lai, User Experience Architect, Oracle; and Chris Bales, Director, Oracle Social Network Client Development

Apple’s iBeacon technology enables companies to deliver tailored content to customers, based on their location, via mobile applications. It will enable social applications such as Oracle Social Network to provide more relevant information, no matter where you are. Attend this session to see a demonstration of how the Oracle Social Network team has augmented the mobile application with iBeacons to deliver more-context-aware data. You’ll get firsthand insights into the design and development process in this iBeacon demonstration, as well as information about how developers can extend the Oracle Social Network mobile applications.

Session ID: Oracle OpenWorld 2014, CON8918
Date: Thursday, Oct. 2, 2014
Time: 3:15 p.m. – 4 p.m.
Location: Moscone West – 2005

Hope to see you next week.

2014 Annual Bloggers Meetup

OTN TechBlog - Mon, 2014-09-22 11:22

The Annual Oracle Bloggers Meetup, one of our favorite events of OpenWorld, is happening at the usual place and time, thanks to Oracle Technology Network and Pythian.

What: Oracle Bloggers Meetup 2014

When: Wed, 1-Oct-2014, 5:30pm

Where: Main Dining Room, Jillian’s Billiards @ Metreon, 101 Fourth Street, San Francisco, CA 94103 (street view). Please comment with “COUNT ME IN” if coming — we need to know the attendance numbers.

Read more at Alex Gorbachev's latest blog post.


PeopleSoft and Web Browsers – The Guide

Duncan Davies - Mon, 2014-09-22 08:00

The topic of PeopleSoft/PeopleTools versions and web browsers is often a complicated one, yet it’s an issue that every client will face when they either upgrade PeopleTools or move to a new Application version that contains a Tools increase.

Cedar have recently been asked by a client for assistance in getting definitive answers to the important questions, and we thought it would be useful to share this information. We’ve put together a white paper that shows you the relevant browser versions for PeopleTools 8.54 and PeopleTools 8.53 (i.e. the versions that customers are likely to be upgrading to over the next year or so):

Cedar Consulting White Paper – PeopleSoft and Web Browsers

We hope that it saves you some time during your next upgrade.

 


OOW - Focus On Support and Services for Siebel CRM

Chris Warticki - Mon, 2014-09-22 08:00
Focus On Support and Services for Siebel CRM

Conference Sessions - Thursday, Oct 02, 2014

Customer Success Story: State of Michigan, MAGI Case Study
Beth Long, Project Manager, CGI Technologies and Solutions Inc.
Sue Doby, Senior Director, Public Sector Consulting, Oracle
9:30 AM - 10:15 AM, Marriott Marquis - Salon 1/2/3, Session CON2747

Best Practices for Maintaining Siebel CRM
Prem Lakshmanan, Senior Director, Customer Support, Oracle
Iain McGonigle, Senior Director, Customer Support, Oracle
12:45 PM - 1:30 PM, Moscone West - 3009, Session CON8314

My Oracle Support Monday Mix

Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year’s gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3-minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix
6:00 PM - 8:00 PM, ThirstyBear Brewing Company

Oracle Support Stars Bar & Mini Briefing Center

Monday, Sep 29 - Wednesday, Oct 01

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings, where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
Monday and Tuesday: 9:45 AM - 6:00 PM; Wednesday: 9:45 AM - 3:45 PM
Moscone West Exhibition Hall, 3461 and 3908

To secure a seat in a session, please use Schedule Builder to add to your Schedule.

What I felt sad about last night

FeuerThoughts - Mon, 2014-09-22 06:16
A few weeks ago, I moved my office into the basement. That was a big change. That room upstairs, with big windows looking out onto Pratt Ave, was where I'd spent almost all of my professional career (we moved to the house in 1992, three months before leaving Oracle for a consulting gig), wrote my books (including the first, Oracle PL/SQL Programming, that changed the course of my life), built the software (Xray Vision for SQL Forms 3, QNXO, Qute, PL/Vision, Code Tester for Oracle, Quest CodeGen Utility, etc.), did the webinars, and wrote 1000+ quizzes for the PL/SQL Challenge.

But you know what? Bye, bye, no big deal. Change is good (like this change: Veva and I are taking ballroom dancing classes. I will learn what to do with my feet when I dance!).

I like my cave, I mean, office. It's spacious, and I can make as much noise as I want. Which is very important, since I will be churning out lots of really noisy videos about PL/SQL and my latest dance moves. 
I'm getting my artwork up on the walls:

My father did the painting on the bottom left. It has a lot of power and feeling. My dry cleaner created the beautiful painting on top.
I re-established my sand table with beautiful pieces by Terry Hogan, and many other shells and coral from the sea:

And I put some of my awards and other mementos up on shelves that used to hold a small library of science fiction/fantasy books:


So, yes, settling in to my new office. And last night I started nailing up corkboard tiles to the thick wood paneling, so I could pin up photos of my granddaughter, Loey. Oh, I suppose other people, too. But Loey mainly, because she is the light of my life, and oh my she is a bright light.

In any case, as I hammered the tiny nails needed to hold up the corkboard, I became aware that I felt kind of down, as if the day had not gone well. Why would I be feeling that way? It had been a good day. And then I (the conscious part of me) realized that the non-conscious part of me was feeling bad about having broken a branch in the woods earlier in the day.
That sounds kind of weird, right? I mean, seriously, how bad are humans supposed to feel about breaking the branch of a tree? It's not like they'd notice, right?
But it made perfect sense to me, so I decided to share with you why a broken branch would set my brain to brooding, thereby giving you a sense of how I see the world these days.
As to why anyone should care what I think of the world, well, I leave that entirely up to the reader. No readers, then no one cares. :-) 
As soon as the thought (brooding about broken branch) broke into my consciousness, I immediately knew it was true (that happens to you, too, right? You can instantly sense that a thought is correct. Now try thinking about what is going on in your brain for this to happen and how much of your brain is the "I" that is you). 
You see, I had earlier been thinking back over to when I was in the woods this morning cutting down buckthorn. At one point a rather large tree came down hard against a nearby native tree I was working to rescue. 
To my great dismay, one of its branches was caught by the twisty, grabby buckthorn. It snapped and hung loosely. I did that. That was probably two years' new growth, hard work against buckthorn. And I killed it. 
That bummed me out (and still does), but I reminded myself that I have to accept that even when I move carefully and always safely, I cannot always control where a large tree will fall. I will make mistakes and there will be setbacks. But I just have to keep going.
"Going where?" you might ask. I have developed a new, very strong compulsion: to rescue trees. To do what I can with my own hands, with my own time, with, in other words, a solid chunk of my life, to heal some of the damage we humans inflict on our co-inhabitants and the planet itself.
I think about it as direct and positive action, a principle I attempt to follow in all aspects of my life these days.
Here in Chicago, buckthorn - an invasive import from northern Europe - grows aggressively, crowding out the native trees. In particular, it doesn't allow young trees, the saplings, the next generation of the natives, to survive. And as the buckthorn grows taller, it also kills off the lower branches of the mature trees.
Buckthorn is really an impressive, powerful, successful species. I admire it greatly - and I cut down on the order of 200 buckthorn trees a week (many of them quite small, but not all). Contradiction? Not at all. A necessary corrective action to human abuse of our world. We travel about, carrying with us the seeds (and ballast and larvae) of destruction for many ecosystems.
I do not want to lose our native trees (and even the non-invasive imports). I want my children and grandchildren to enjoy forests. I want to respect trees, since we could never have evolved to what we are today without trees. And even today the forests of the world are absolutely critical to the functioning of the global ecosystem(s).
I want to treat trees with respect and do penance for our cutting down 95% of the trees in the continental US. So I go out and rescue trees. It is now my only form of exercise and it keeps me in great shape - especially for picking up, carrying and playing with Loey. She loves for me to hang her upside down by her ankles and swing her like a pendulum. She trusts me implicitly. I love that.
Sorry, you must be wondering: what is the point of all this? 
To give me an opportunity to marvel at the current state of my life, in which I have quite an intimate relationship with trees. I study them, I read them. Really, it's quite amazing. I can go into the woods now, look at how a native tree's branch has withered, identify the buckthorn that is doing the damage, and actually play it out in my mind's eye: years of slow growth, of slow-motion battle, and of losing it to the buckthorn. Everywhere I look, I find the trees telling their stories.
My greatest joy is to uncover a small sapling that was so completely surrounded and covered by buckthorn I didn't even see it there when I started cutting. Then I open it to the sun and the wind. I did this with a lovely 15 foot tall maple sapling last week. I will be visiting it (and hundreds of other trees) each year now, making sure the buckthorn (and grapevine) leaves it alone, allowing it to grow to a big, thick, incredibly strong and life-giving tree.
There, right there, that's what I marvel at: I know that the 10+ hours I spend each week in the woods rescuing trees will mean that 20 years from now there will be trees with a diameter of a foot or more that simply would not be there if it hadn't been for my effort and my attention paid to something other than human stuff.
That makes me feel happy and less guilty about my consumption (and indirect killing of many, many trees). It gives me a purpose in life, besides family and work.
I plan to rescue trees for as long as my body is able to do the work.
Anyone care to join me?




Categories: Development

Where is my space on Linux filesystem?

Surachart Opun - Mon, 2014-09-22 05:06
I don't often check my available space right after making a filesystem on Linux. Today, I made an Ext4 filesystem of around 460GB, but found only 437GB available. A path that should have been 50GB showed only 47GB available.

Thank you @OracleAlchemist and @gokhanatil for the good information about it.
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U01LV   50G   52M   47G   1% /u01
Reference (from the tune2fs man page): the reserved-blocks percentage specifies the percentage of the filesystem blocks reserved for the super-user. This avoids fragmentation, and allows root-owned daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. The default percentage is 5%.

After digging for more information, it looks like we can set it to zero, but we should not do so for /, /var, /tmp, or any path with lots of file creates and deletes.

Reference on Red Hat (from the ext3-users mailing list):

If you set the reserved block count to zero, it won't affect
performance much except if you run for long periods of time (with lots
of file creates and deletes) while the filesystem is almost full
(i.e., say above 95%), at which point you'll be subject to
fragmentation problems.  Ext4's multi-block allocator is much more
fragmentation resistant, because it tries much harder to find
contiguous blocks, so even if you don't enable the other ext4
features, you'll see better results simply mounting an ext3 filesystem
using ext4 before the filesystem gets completely full.
If you are just using the filesystem for long-term archive, where
files aren't changing very often (i.e., a huge mp3 or video store), it
obviously won't matter.
- Ted

Example: changing the reserved-blocks percentage.

[root@mytest01 u01]# df -h /u01
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U01LV   50G   52M   47G   1% /u01
[root@mytest01 u01]# tune2fs -m 1 /dev/mapper/VolGroup0-U01LV
tune2fs 1.43-WIP (20-Jun-2013)
Setting reserved blocks percentage to 1% (131072 blocks)
[root@mytest01 u01]# df -h /u01
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U01LV   50G   52M   49G   1% /u01
[root@mytest01 u01]# tune2fs -m 5 /dev/mapper/VolGroup0-U01LV
tune2fs 1.43-WIP (20-Jun-2013)
Setting reserved blocks percentage to 5% (655360 blocks)
[root@mytest01 u01]# df -h /u01
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U01LV   50G   52M   47G   1% /u01

Finally, I understood: the space was reserved for the super-user. Let's check the calculation in more detail.
[root@ottuatdb01 ~]# df -m /u01
Filesystem                  1M-blocks  Used Available Use% Mounted on
/dev/mapper/VolGroup0-U01LV     50269    52     47657   1% /u01
[root@ottuatdb01 ~]#  tune2fs -l /dev/mapper/VolGroup0-U01LV |egrep  'Block size|Reserved block count'
Reserved block count:     655360
Block size:               4096

Available = 47657MB
Used = 52M
Reserved Space = (655360 x 4096) / 1024 /1024 = 2560MB 
Total = 47657 + 2560 + 52 = 50269 
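The same numbers can be pulled out with a quick script. A rough sketch, assuming the tune2fs output format shown above:

DEV=/dev/mapper/VolGroup0-U01LV
BLOCKS=$(tune2fs -l "$DEV" | awk '/Reserved block count/ {print $4}')
BLKSZ=$(tune2fs -l "$DEV" | awk '/^Block size/ {print $3}')
echo "Reserved: $(( BLOCKS * BLKSZ / 1024 / 1024 )) MB"   # should print 2560 MB here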

OK, that cleared things up for me. Still, on huge filesystems I believe 5% of the blocks reserved is too much; we can reduce it.
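If you know up front that a filesystem does not need the full 5% reservation, the percentage can also be set at creation time; a minimal sketch (the device name is just an example):

mkfs.ext4 -m 1 /dev/mapper/VolGroup0-U01LV   # reserve only 1% for root, equivalent to running tune2fs -m 1 afterwards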

Other Links:
https://www.redhat.com/archives/ext3-users/2009-January/msg00026.html
http://unix.stackexchange.com/questions/7950/reserved-space-for-root-on-a-filesystem-why
http://linux.die.net/man/8/tune2fs
https://wiki.archlinux.org/index.php/ext4#Remove_reserved_blocks

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Introduction to Oracle BI Cloud Service : Product Overview

Rittman Mead Consulting - Mon, 2014-09-22 05:02

Long-term readers of this blog will probably know that I’m enthusiastic about the possibilities around running OBIEE in the cloud, and over the past few weeks Rittman Mead have been participating in the beta program for release one of Oracle’s Business Intelligence Cloud Service (BICS). BICS went GA over the weekend and is now live on Oracle’s public cloud site, so all of this week we’ll be running a special five-part series on what BI Cloud Service is, how it works and how you go about building a simple application. I’m also presenting on BICS and our beta program experiences at Oracle Openworld this week (Oracle BI in the Cloud: Getting Started, Deployment Scenarios, and Best Practices [CON2659], Monday Sep 29 10:15 AM – 11:00 AM, Moscone West 3014), so if you’re at the event and want to hear our thoughts, come along.

Over the next five days I’ll be covering the following topics, and I’ll update the list with hyperlinks once the articles are published:

So what is Oracle BI Cloud Service, and how does it relate to regular, on-premise OBIEE11g?

On the Oracle BI Cloud Service homepage, Oracle position the product as “Agile Business Intelligence in the Cloud for Everyone”, and there are a couple of key points in this positioning that describe the product well.


The “agile” part refers to the fact that, being cloud-based, there’s no on-premise infrastructure to stand up, so you can get started a lot quicker than if you needed to procure servers, get the infrastructure installed, configure the software and get it accepted by the IT department. Agile also refers to the fact that you don’t need to purchase perpetual or one/two-year term licenses for the software, so you can use OBIEE for more tactical projects without having to worry about expensive long-term license deals. The final way that BICS is “agile” is in the simplified, user-focused tools that you use to build your cloud-based dashboards, with BICS adopting a more consumer-like user interface that in theory should mean you don’t have to attend a course to use it.

BICS is built around standard OBIEE 11g, with an updated user interface that’ll roll out across on-premise OBIEE in the next release, and the standard Analysis Editor, Dashboard Editor and repository (RPD) under the covers. Your initial OBIEE homepage is a modified version of the standard OBIEE homepage that lists standard developer functions down the left-hand side as a series of menu items, and the BI Administration tool is replaced with an online, thin-client repository editor that provides a subset of the full BI Administration tool functionality.


Customers who license BICS in this initial release get two environments (or instances) to work with: a pre-prod or development environment to create their applications in initially, and a production environment into which they deploy each release of their work. BICS is also bundled with Oracle Database Schema Service, a single-schema Oracle Database service with an ApEx front-end into which you store the data that BICS reports on, with ApEx and BICS itself having tools to upload data into it; this is, however, the only data source that BICS in version 1 supports, so any data that your cloud-based dashboards report on has to be loaded into Database Schema Service before you can use it, and you have to use Oracle’s provided tools to do this as regular ETL tools won’t connect. We’ll get onto the data provisioning process in the next article in this five-part series.

BICS dashboards and reports currently support a subset of what’s available in the on-premise version. The Analysis Editor (“Answers”) is the same as on-premise OBIEE with the catalog view on the left-hand side, tabs for Results and so on, and the same set of view types (and in fact a new one, for heat maps). There’s currently no access to Agents, Scorecards, BI Publisher or any other Presentation Services features that require a database back-end though, or any Essbase database in the background as you get with on-premise OBIEE 11.1.1.7+.


What does become easier to deploy though is Oracle BI Mobile HD, as every BICS instance is, by definition, accessible over the internet. Last time I checked, the current version of BI Mobile HD on Apple’s App Store couldn’t yet connect, but I’m presuming an update will be out shortly to deal with BICS’s login process, which gets you to enter a BICS username and password along with an “identity domain” that specifies the particular company tenant ID that you use.


I’ll cover the thin-client data modeller later in this series in more detail, but at a high-level what this does is remove the need for you to download and install Oracle BI Administration to set up your BI Repository, something that would have been untenable for Oracle if they were serious about selling a cloud-based BI tool. The thin-client data modeller takes the most important (to casual users) features of BI Administration and makes them available in a browser-based environment, so that you can create simple repository models against a single data source and add features like dimension hierarchies, calculations, row-based and subject-area security using a point-and-click environment.


Features that are excluded in this initial release include the ability to define multiple logical table sources for a logical table, creating multiple business areas, creating calculations using physical (vs. logical) tables and so on, and there’s no way to upload on-premise RPDs to BICS, or download BICS ones to use on-premise, at this stage. What you do get with BICS is a new import and export format called a “BI Archive” which bundles up the RPD, the catalog and the security settings into a single archive file, and which you use to move applications between your two instances and to store backups of what you’ve created.

So what market is BICS aimed at in this initial release, and what can it be used for? I think it’s fair to say that in this initial release, it’s not a drop-in replacement for on-premise OBIEE 11g, with only a subset of the on-premise features initially supported and some fairly major limitations such as only being able to report against a single database source, no access to Agents, BI Publisher, Essbase and so on. But like the first iteration of the iPhone or any consumer version of a previously enterprise-only tool, it’s trying to do a few things well and aiming at a particular market – in this case, departmental users who want to stand up an OBIEE environment quickly, maybe only for a limited amount of time, and who are familiar with OBIEE and would like to carry on using it. In some ways its target market is those OBIEE customers who might otherwise have used Qlikview, Tableau or one of the new SaaS BI services such as Good Data, who most probably have some data exports in the form of Excel spreadsheets or CSV documents, want to upload them to a BI service without getting all of IT involved and then share the results in the form of dashboards and reports with their team. Pricing-wise this appears to be who Oracle are aiming the service at (minimum 10 users, $3500/month including 50GB of database storage) and with the product being so close to standard OBIEE functionality in terms of how you use it, it’s most likely to appeal to customers who already use OBIEE 11g in their organisation.

That said, I can see partners and ISVs adopting BICS to deliver cloud-based SaaS BI applications to their customers, either as stand-alone analysis apps or as add-ons to other SaaS apps that need reporting functionality. Oracle BI Cloud Service is part of the wider Oracle Platform-as-a-Service (PaaS) that includes Java (WebLogic), Database, Documents, Compute and Storage, so I can see companies such as ourselves developing reporting applications for the likes of Salesforce, Oracle Sales Cloud and other SaaS apps and then selling them, hosting included, through Oracle’s cloud platform; I’ll cover our initial work in this area, developing a reporting application for Salesforce.com data, later in this series.


Of course it’s been possible to deploy OBIEE in the cloud for some while, with this presentation of mine from BIWA 2014 covering the main options; indeed, Rittman Mead host OBIEE instances for customers in Amazon AWS and do most of our development and training in the cloud including our exclusive “ExtremeBI in the Cloud” agile BI service; but BICS has two major advantages for customers looking to cloud-deploy OBIEE:

  • It’s entirely thin-client, with no need for local installs of BI Administration and so forth. There’s also no need to get involved with Enterprise Manager Fusion Middleware Control for adding users to application roles, defining application role mappings and so on
  • You can license it monthly, including data storage. No other on-premise license option lets you do this, with the shortest term license being one year

such that we’ll be offering it as an alternative to AWS hosting for our ExtremeBI product, for customers who in-particular want the monthly license option.

So, an interesting start. As I said, I’ll be covering the detail of how BICS works over the next five days, starting with the data upload and provisioning process in tomorrow’s post – check back tomorrow for the next instalment.

Categories: BI & Warehousing

Extend linux partition on vmware

Surachart Opun - Mon, 2014-09-22 02:24
It was a quiet day. Working as a system administrator, I installed Oracle Linux on a virtual machine guest. After installing the operating system, I wanted more disk space on the guest, so I extended the virtual disk. Then I had to think back over what to do on the Linux side. One option: create a new partition (and physical volume) and add it to the volume group, as in http://surachartopun.com/2012/01/just-add-disk-to-volume-group-linux.html

First, I checked my partitions:

[root@mytest01 ~]# fdisk -l /dev/sda
Disk /dev/sda: 697.9 GB, 697932185600 bytes
255 heads, 63 sectors/track, 84852 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061d87
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             131       78326   628096000   8e  Linux LVM
[root@mytest01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup0
  PV Size               599.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              153343
  Free PE               0
  Allocated PE          153343
  PV UUID               AcujnG-5XVc-TWWl-O4Oe-Nv03-rJtc-b5jUlW

I thought I should be able to extend (resize) /dev/sda2 in place instead. I found an example on the Internet:
http://unix.stackexchange.com/questions/42857/how-to-extend-centos-5-partition-on-vmware

I chose this idea: extend the physical volume. The plan was to delete the partition, recreate it larger, and then run "pvresize".

[root@mytest01 ~]# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').
Command (m for help): p
Disk /dev/sda: 697.9 GB, 697932185600 bytes
255 heads, 63 sectors/track, 84852 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061d87
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             131       78326   628096000   8e  Linux LVM
Command (m for help): d
Partition number (1-4): 2
Command (m for help): p
Disk /dev/sda: 697.9 GB, 697932185600 bytes
255 heads, 63 sectors/track, 84852 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061d87
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (131-84852, default 131):
Using default value 131
Last cylinder, +cylinders or +size{K,M,G} (131-84852, default 84852):
Using default value 84852
Command (m for help): p
Disk /dev/sda: 697.9 GB, 697932185600 bytes
255 heads, 63 sectors/track, 84852 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061d87
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             131       84852   680524090   83  Linux
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): L
 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
 1  FAT12           39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      41  PPC PReP Boot   85  Linux extended  c7  Syrinx
 5  Extended        42  SFS             86  NTFS volume set da  Non-FS data
 6  FAT16           4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS       4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
 8  AIX             4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
 9  AIX bootable    50  OnTrack DM      93  Amoeba          e1  DOS access
 a  OS/2 Boot Manag 51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
 b  W95 FAT32       52  CP/M            9f  BSD/OS          e4  SpeedStor
 c  W95 FAT32 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
 e  W95 FAT16 (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  GPT
 f  W95 Ext'd (LBA) 55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
12  Compaq diagnost 61  SpeedStor       a9  NetBSD          f4  SpeedStor
14  Hidden FAT16 <3 63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
16  Hidden FAT16    64  Novell Netware  af  HFS / HFS+      fb  VMware VMFS
17  Hidden HPFS/NTF 65  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
18  AST SmartSleep  70  DiskSecure Mult b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 75  PC/IX           bb  Boot Wizard hid fe  LANstep
1c  Hidden W95 FAT3 80  Old Minix       be  Solaris boot    ff  BBT
1e  Hidden W95 FAT1
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/sda: 697.9 GB, 697932185600 bytes
255 heads, 63 sectors/track, 84852 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061d87
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             131       84852   680524090   8e  Linux LVM
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

-- I chose to reboot rather than run partprobe :-) --

[root@mytest01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup0
  PV Size               599.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              153343
  Free PE               0
  Allocated PE          153343
  PV UUID               AcujnG-5XVc-TWWl-O4Oe-Nv03-rJtc-b5jUlW
[root@mytest01 ~]# pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@mytest01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup0
  PV Size               599.00 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              153343
  Free PE               0
  Allocated PE          153343
  PV UUID               AcujnG-5XVc-TWWl-O4Oe-Nv03-rJtc-b5jUlW
[root@mytest01 ~]#
[root@mytest01 ~]# reboot
.
.
.
[root@mytest01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup0
  PV Size               599.00 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              153343
  Free PE               0
  Allocated PE          153343
  PV UUID               AcujnG-5XVc-TWWl-O4Oe-Nv03-rJtc-b5jUlW
[root@mytest01 ~]# pvresize  /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@mytest01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup0
  PV Size               649.00 GiB / not usable 1.31 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              166143
  Free PE               12800
  Allocated PE          153343
  PV UUID               AcujnG-5XVc-TWWl-O4Oe-Nv03-rJtc-b5jUlW

Note: in this case I had two partitions (/dev/sda1, /dev/sda2), so extending the existing physical volume was a reasonable approach. However, creating a new physical volume and adding it to the volume group might be safer.
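For reference, that safer alternative might look like this rough sketch (device names are hypothetical; it assumes a new partition /dev/sda3 of type 8e has been created with fdisk):

partprobe /dev/sda              # re-read the partition table without a reboot
pvcreate /dev/sda3              # initialize the new partition as a physical volume
vgextend VolGroup0 /dev/sda3    # add it to the existing volume group
vgdisplay VolGroup0             # verify the new free extents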
Finally, VolGroup0 had the new size, and I could extend the logical volume.

[root@mytest01 ~]# df -h /u02
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U02LV  460G   70M  437G   1% /u02
[root@mytest01 ~]# lvdisplay /dev/mapper/VolGroup0-U02LV
  --- Logical volume ---
  LV Path                /dev/VolGroup0/U02LV
  LV Name                U02LV
  VG Name                VolGroup0
  LV UUID                8Gdt6C-ZXQe-dPYi-21yj-Fs0i-6uvE-vzrCbc
  LV Write Access        read/write
  LV Creation host, time mytest01.pythian.com, 2014-09-21 16:43:50 -0400
  LV Status              available
  # open                 1
  LV Size                467.00 GiB
  Current LE             119551
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

[root@mytest01 ~]#
[root@mytest01 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               649.00 GiB
  PE Size               4.00 MiB
  Total PE              166143
  Alloc PE / Size       153343 / 599.00 GiB
  Free  PE / Size       12800 / 50.00 GiB
  VG UUID               thGxdJ-pCi2-18S0-mrZc-cCJM-2SH2-JRpfQ5
[root@mytest01 ~]#
[root@mytest01 ~]# # e2fsck would be required for a shrink; not needed here, but run as a sanity check
[root@mytest01 ~]# e2fsck -f  /dev/mapper/VolGroup0-U02LV 
e2fsck 1.43-WIP (20-Jun-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/VolGroup0-U02LV: 11/30605312 files (0.0% non-contiguous), 1971528/122420224 blocks
[root@mytest01 ~]#
[root@mytest01 ~]# pvscan
  PV /dev/sda2   VG VolGroup0   lvm2 [649.00 GiB / 50.00 GiB free]
  Total: 1 [649.00 GiB] / in use: 1 [649.00 GiB] / in no VG: 0 [0   ]
[root@mytest01 ~]#
[root@mytest01 ~]#
[root@mytest01 ~]# lvextend -L +50G /dev/mapper/VolGroup0-U02LV
  Extending logical volume U02LV to 517.00 GiB
  Logical volume U02LV successfully resized
[root@mytest01 ~]#
[root@mytest01 ~]#  resize2fs /dev/mapper/VolGroup0-U02LV
resize2fs 1.43-WIP (20-Jun-2013)
Resizing the filesystem on /dev/mapper/VolGroup0-U02LV to 135527424 (4k) blocks.
The filesystem on /dev/mapper/VolGroup0-U02LV is now 135527424 blocks long.
[root@mytest01 ~]#
[root@mytest01 ~]#
[root@mytest01 ~]# lvdisplay /dev/mapper/VolGroup0-U02LV
  --- Logical volume ---
  LV Path                /dev/VolGroup0/U02LV
  LV Name                U02LV
  VG Name                VolGroup0
  LV UUID                8Gdt6C-ZXQe-dPYi-21yj-Fs0i-6uvE-vzrCbc
  LV Write Access        read/write
  LV Creation host, time mytest01.pythian.com, 2014-09-21 16:43:50 -0400
  LV Status              available
  # open                 0
  LV Size                517.00 GiB
  Current LE             132351
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2
[root@mytest01 ~]#

[root@mytest01 ~]# df -h /u02
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup0-U02LV  509G   70M  483G   1% /u02

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ext4grow.html
Note: resize2fs can run online. If the filesystem is mounted, it can be used to expand the size of the mounted filesystem, assuming the kernel supports on-line resizing. (As of this writing, the Linux 2.6 kernel supports on-line resize for filesystems mounted using ext3 and ext4.)

Looks like I learned a lot about Linux partitioning today.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

The SQL Server DBA's essential toolkit list

Yann Neuhaus - Mon, 2014-09-22 02:01

This week, I attended SQLSaturday 2014 in Paris. During the pre-conference on Thursday, I followed Isabelle Van Campenhoudt's SQL Server Performance Audit session. The conference took the form of experience sharing between attendees: together, we tried to list the most important software, tools, features, and scripts that help a SQL Server DBA in his daily work. In this blog, I want to share our final list with you.

 

Windows Server Level: Hardware & Applications


CrystalDiskMark

CrystalDiskMark is a free disk benchmarking tool. It can be downloaded here.

 

SQLIO

SQLIO is another free disk benchmarking tool. It can be downloaded here.
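A rough sketch of a typical invocation, running a 60-second random-write test with 8 KB blocks, 8 outstanding I/Os and latency statistics (all parameter values are arbitrary examples; param.txt is a file listing the test files):

sqlio -kW -s60 -frandom -o8 -b8 -LS -Fparam.txt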

 

Windows Performance Monitor (PerfMon)

PerfMon is a native Windows tool which collects log data in real time so you can examine how programs running on the computer affect performance.

PerfMon provides a lot of counters which measure the system state or the activity.

You can learn more on TechNet.

You can find the most important counters for SQL Server here.
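As a quick console sketch, sampling two key SQL Server counters every 5 seconds for 12 samples (the counter paths are examples for a default instance; adjust them for named instances):

typeperf "\SQLServer:Buffer Manager\Page life expectancy" "\SQLServer:SQL Statistics\Batch Requests/sec" -si 5 -sc 12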

 

Performance Analysis of Logs (PAL)

PAL is an Open Source tool built on top of PerfMon. It reads and analyzes the main counters, looking for known thresholds.

PAL generates an HTML report which alerts when thresholds are reached.

PAL tool can be downloaded on CodePlex.

 

Microsoft Assessment and Planning (MAP)

MAP is a Microsoft toolkit which provides hardware and software information and recommendations for deployment or migration process for several Microsoft technologies (such as SQL Server or Windows Server).

MAP toolkit can be downloaded on TechNet.

 

SQL Server Level: Configuration & Tuning

 

Dynamic Management Views and Functions (DMV)

DMVs are native SQL Server views and functions which return server state information about a SQL Server instance.

You can learn more on TechNet.
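As a small illustrative sketch, here is a classic wait-statistics query run through sqlcmd (the server name and authentication switches are just examples):

sqlcmd -S localhost -E -Q "SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC;"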

 

sp_Blitz (from Brent Ozar)

It is a free script which checks SQL Server configuration and highlights common issues.

sp_Blitz can be found on Brent Ozar's website.
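Once installed, it can be run with no parameters; a minimal sketch (the instance name is an example):

sqlcmd -S localhost -E -Q "EXEC dbo.sp_Blitz;"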

 

Glenn Berry's SQL Server Performance

It provides diagnostic scripts for every SQL Server version since SQL Server 2005.

These scripts can be downloaded here.

 

Enterprise Policy Management (EPM) Framework

EPM Framework is based on Policy-Based Management. It is a reporting solution that tracks SQL Server states which do not meet the specified requirements. It works on all instances of SQL Server since SQL Server 2000.

You can learn more on CodePlex.

 

SQL Server Level: Monitoring & Troubleshooting

 

SQL Profiler

SQL Profiler is a rich interface integrated into SQL Server which allows you to create and manage traces to monitor and troubleshoot a SQL Server instance.

You can learn more on TechNet.

 

Data Collector

Data Collector is a SQL Server feature introduced in SQL Server 2008, and available in all versions.

It gathers performance information from multiple instances for performance monitoring and tuning.

You can learn more on TechNet.

 

Extended Events

Extended Events is a monitoring system integrated into SQL Server. It helps with troubleshooting and identifying performance problems.

You can learn more on TechNet.

 

SQL Nexus

SQL Nexus is an Open Source tool that helps you identify the root cause of SQL Server performance issues.

It can be downloaded on CodePlex.

 

SQL Server Level: Maintenance

 

SQL Server Maintenance Solution

It is a set of scripts for running backups, integrity checks, and index and statistics maintenance on all editions of Microsoft SQL Server since SQL Server 2005.

This solution can be downloaded on Ola Hallengren's website.
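As a hedged example of running the solution's backup procedure (the instance name and backup directory are placeholders):

sqlcmd -S localhost -E -Q "EXECUTE dbo.DatabaseBackup @Databases = 'USER_DATABASES', @Directory = 'C:\Backup', @BackupType = 'FULL';"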

 

 

Conclusion

This blog does not pretend to be a complete list of everything a DBA needs, but it tries to cover the most important areas. You will notice that all of these tools are free and recognized by the DBA community as reliable and powerful.

I hope this will help you.

For information, you can learn how to use these tools in our SQL Server DBA Essentials workshop.

Data as an asset

DBMS2 - Sun, 2014-09-21 21:49

We all tend to assume that data is a great and glorious asset. How solid is this assumption?

  • Yes, data is one of the most proprietary assets an enterprise can have. Any of the Goldman Sachs big three* — people, capital, and reputation — are easier to lose or imitate than data.
  • In many cases, however, data’s value diminishes quickly.
  • Determining the value derived from owning, analyzing and using data is often tricky — but not always. Examples where data’s value is pretty clear start with:
    • Industries which long have had large data-gathering research budgets, in areas such as clinical trials or seismology.
    • Industries that can calculate the return on mass marketing programs, such as internet advertising or its snail-mail predecessors.

*”Our assets are our people, capital and reputation. If any of these is ever diminished, the last is the most difficult to restore.” I love that motto, even if Goldman Sachs itself eventually stopped living up to it. If nothing else, my own business depends primarily on my reputation and information.

This all raises the idea – if you think data is so valuable, maybe you should get more of it. Areas in which enterprises have made significant and/or successful investments in data acquisition include: 

  • Actual scientific, clinical, seismic, or engineering research.
  • Actual selling of (usually proprietary) data, with the straightforward economic proposition of “Get once, sell to multiple customers more cheaply than they could get it themselves.” Examples start:
    • This is the essence of the stock quote business. And Michael Bloomberg started building his vast fortune by adding additional data to what the then-incumbents could offer, for example by getting fixed-income prices from Cantor Fitzgerald.*
    • Multiple marketing-data businesses operate on this model.
    • Back when there was a small but healthy independent paper newsletter and directory business, its essence was data.
    • And now there are many online data selling efforts, in niches large and small.
  • Internet ad-targeting businesses. Making money from your great ad-targeting technology usually involves access to lots of user-impression and de-anonymization data as well.
  • Aggressive testing by internet businesses, of substantive offers and marketing-display choices alike. At the largest, such as eBay, you’ll rarely see a page that doesn’t have at least one experiment on it. Paper-based direct marketers take a similar approach. Call centers perhaps should follow suit more than they do.
  • Surveys, focus groups, etc. These are commonly expensive and unreliable (and the cheap internet ones commonly irritate people who do business with you). But sometimes they are, or seem to be, the only kind of information available.
  • Free-text data. On the whole I’ve been disappointed by the progress in text analytics. Still — and this overlaps with some previous points — there’s a lot of information in text or narrative form out there for the taking.
    • Internally you might have customer emails, call center notes, warranty reports and a lot more.
    • Externally there’s a lot of social media to mine.

*Sadly, Cantor Fitzgerald later became famous for being hit especially hard on 9/11/2001.

And then there’s my favorite example of all. Several decades ago, especially in the 1990s, supermarkets and mass merchants implemented point-of-sale (POS) systems to track every item sold, and then added loyalty cards through which they bribed their customers to associate their names with their purchases. Casinos followed suit. Airlines of course had loyalty/frequent-flyer programs too, which were heavily related to their marketing, although in that case I think loyalty/rewards were truly the core element, with targeted marketing just being an important secondary benefit. Overall, that’s an awesome example of aggressive data gathering. But here’s the thing, and it’s an example of why I’m confused about the value of data — I wouldn’t exactly say that grocers, mass merchants or airlines have been bastions of economic success. Good data will rarely save a bad business.

Related links

Categories: Other

Documentum upgrade project: D2-Client, facets and xPlore

Yann Neuhaus - Sun, 2014-09-21 19:57

To enhance the search capability, we had to configure xPlore to use the new custom attributes as facets and configure D2 to use the default and new facets.

Configuring xPlore to use facets with the custom attributes
  • Stop the Index Agent and Server
  • Update indexserverconfig.xml by adding the following line (e.g.):

 

 xml-code

 

  • Keep only the indexserverconfig.xml file in $DSSEARCH_HOME/config
  • Remove $DSSEARCH_HOME/data/*
  • Start the Index Server and Index Agent
  • Start a full reindexing
  • Once everything is indexed, set the index back to normal mode

 

Necessary tests

You should do two tests before configuring the D2-Client.

 

1. On the content server:

 

java com.emc.d2.api.config.search.D2FacetTest -docbase_name test67 -user_name admin -password xxxx -full_text -facet_names dbi_events

 

2. On the xPlore server:

  • Check if the new lines have been validated by executing $DSEARCH_HOME/dsearch/xhive/admin/XHAdmin
  • Navigate to xhivedb/root-library/dsearch/data/default
  • Under the Indexes tab, click the "Add Subpaths" button to open the "Add sub-paths to index" window, where you can see the added custom attributes in the Path column

 

Configure the search in D2-Config
  • Launch D2-Config
  • Select Interface and then the Search sub-menu
  • Tick "Enable Facets" and enter a value for "Maximum of results by Facet"

 


 

Once this is done, you are able to use the facets with the D2-Client.

Improving your SharePoint performance using SQL Server settings (part 2)

Yann Neuhaus - Sun, 2014-09-21 17:36

Last week, I attended SQLSaturday 2014 in Paris and participated in a session on SQL Server optimization for SharePoint by Serge Luca. The session listed best practices and recommendations for database administrators to increase SharePoint performance. This blog post is based on that session and is meant as a sequel to my previous post on Improving your SharePoint performance using SQL Server settings (part 1).

 

SQL Server instance

It is highly recommended to use a dedicated SQL Server instance for a SharePoint farm and to set LATIN1_GENERAL_CI_AS_KS_WS as the instance collation.

 

Setup Account permissions

You should give the Setup Account the following permissions in your SQL Server instance:

  • securityadmin server role

  • dbcreator server role

  • db_owner for databases used by the Setup Account
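A minimal sketch of granting the two server-level roles (the instance and account names are hypothetical):

sqlcmd -S sqlsp01 -E -Q "ALTER SERVER ROLE securityadmin ADD MEMBER [CONTOSO\sp_setup]; ALTER SERVER ROLE dbcreator ADD MEMBER [CONTOSO\sp_setup];"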

 

Alias DNS

It is recommended to use a DNS alias to connect to the SQL Server instance from your SharePoint server. It simplifies maintenance and makes it easier to move SharePoint databases to another server.

 

Disk Priority

When you plan to allocate your SharePoint databases across different disks, you might wonder how to maximize the performance of your system.

This is a possible disk organization (from fastest to slowest):

  • Tempdb data and transaction log files

  • Content database transaction log files

  • Search database data files (except Admin database)

  • Content database data files

 

Datafiles policy

You should use several datafiles for Content and Search databases, as follows:

  • distribute equally-sized data files across separate disks

  • the number of data files should be lower than the number of processors

Multiple data files are not supported for other SharePoint databases.

 

Content databases size

You should avoid databases bigger than 200 GB. Databases bigger than 4 TB are not supported by Microsoft.

 

Conclusion

SharePoint is quite abstract for SQL Server DBAs because it requires specific configurations.

As a result, you cannot guess the answers: you have to learn about the subject.

★ Oracle to Unveil Database Cloud Service 2.0 at OpenWorld

Eddie Awad - Sun, 2014-09-21 17:22


Michael Hickins:

At Oracle OpenWorld 2014, the company will roll out its new Database Cloud Service — a new multi-tenant database-as-a-service offering that will let customers migrate their existing apps and databases to the cloud “with the push of a button,” said Ellison. Data will be compressed ten to one and encrypted for secure and efficient transfer to the cloud, with no reprogramming. “Every single Oracle feature — even our latest high-speed in-memory processing — is included in the Oracle Cloud Database Service,” Ellison said. “Hundreds of thousands of customers and ISVs have been waiting for exactly this. Database is our largest software business and database will be our largest cloud service business.”

If you are attending Oracle OpenWorld this year and you are interested in the cloud (who isn’t nowadays?!) here are a few sessions focused on Database as a Service. I will be attending a few of them too.


3 film non-meme

Greg Pavlik - Sun, 2014-09-21 15:09
Riffing off the previous post - my wife and I were discussing last evening what we thought were the three best "recent" films we had seen. Here's my list:

1) Jia Zhangke's A Touch of Sin.

Reason: this is a powerful, powerful film that explores the effects of radical individualism, economic inequality, and the overturning of normal, local, rooted communities. Banned by the Chinese government, it is as much a critique of the values of neoliberalism globally as it is of the current Chinese economic experiment.

2) Alejandro González Iñárritu's Biutiful.

Reason: a moving exploration of responsibility and ethics in the face of poverty, hopelessness and impending death. What do we make of the human spirit and our obligations to each other - and our obligations in the face of The Other?  Javier Bardem was birthed for this role - fantastic acting.

3) Pavel Lungin's The Island.

Reason: who is guilty before whom and for what? Take a director of Jewish background, give him a story loosely inspired by a hagiography of the fool-for-Christ Feofil of the Kievan Caves, and cast a retired-rock-star-turned-recluse (Pyotr Mamonov) as an Orthodox monastic in the far north of Russia, and I would have quite low expectations for the outcome. What Lungin produced is instead not only his best film but, I think, one of the best films of the last 20 years.

This is not my kingdom

FeuerThoughts - Sun, 2014-09-21 14:27
I don't know most people in Chicago on an individual basis, but of all the people I don't know, my favorite Chicagoans are scavengers. They roam the alleys in beat up pickup trucks, with various kinds of makeshift walls extended above the bed.
They grab anything made of metal and anything with the possibility of value. They reduce the amount of garbage going to landfills and I thank them very much for doing this.
Driving the other day, I passed one such truck with a hand-lettered sign nailed to the wooden side wall. It said:
This is not my kingdom.
Just passing through.

Categories: Development

Partitioned Clusters

Jonathan Lewis - Sun, 2014-09-21 12:28

In case you hadn’t noticed it, partitioning has finally reached clusters in 12c – specifically 12.1.0.2. They’re limited to hash clusters with range partitioning, but it may be enough to encourage more people to use the technology. Here’s a simple example of the syntax:


create cluster pt_hash_cluster (
        id              number(8,0),
        d_date          date,
        small_vc        varchar2(8),
        padding         varchar2(100)
)
-- single table
hashkeys 10000
hash is id
size 700
partition by range (d_date) (
        partition p2011Jan values less than (to_date('01-Feb-2011','dd-mon-yyyy')),
        partition p2011Feb values less than (to_date('01-Mar-2011','dd-mon-yyyy')),
        partition p2011Mar values less than (to_date('01-Apr-2011','dd-mon-yyyy'))
)
;

I’ve been waiting for them to appear ever since 11.2.0.1 and the TPC-C benchmark that Oracle did with them – they’ve been a long time coming (check the partition dates – that gives you some idea of when I wrote this example).

Just to add choice (a.k.a. confusion), 12.1.0.2 has also introduced attribute clustering, so you can cluster data in single tables without creating clusters – but only while doing direct path loads or table moves. The performance intent is similar, though the technology and circumstances of use are different.
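
For reference, a minimal sketch of the attribute clustering syntax (the table and columns here are hypothetical, not taken from the partitioned cluster example above):

create table orders (
        order_id        number(10,0),
        order_date      date,
        customer_id     number(8,0),
        padding         varchar2(100)
)
clustering
by linear order (customer_id, order_date)
yes on load yes on data movement;

Rows are physically ordered by the clustering columns only during direct path loads and data movement operations (such as alter table ... move); conventional inserts are left as they arrive.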


On False Binaries, Walled Gardens, and Moneyball

Michael Feldstein - Sat, 2014-09-20 10:08

D’Arcy Norman started a lively inter-blog conversation like we haven’t seen in the edublogosphere in quite a while with his post on the false binary between LMS and open. His main point is that, even if you think that the open web provides a better learning environment, an LMS provides a better-than-nothing learning environment for faculty who can’t or won’t go through the work of using open web tools, and in some cases may be perfectly adequate for the educational need at hand. The institution has an obligation to provide the least-common-denominator tool set in order to help raise the baseline, and the LMS is it. This provoked a number of responses, but I want to focus on Phil’s two responses, which talk at a conceptual level about building a bridge between the “walled garden” of the LMS and the open web (or, to draw on his analogy, keeping the garden but removing the walls that demarcate its border). There are some interesting implications from this line of reasoning that could be explored. What would be the most likely path for this interoperability to develop? What role would the LMS play when the change is complete? For that matter, what would the whole ecosystem look like?

Seemingly separately from this discussion, we have the new Unizin coalition. Every time that Phil or I write a post on the topic, the most common response we get is, “Uh…yeah, I still don’t get it. Tell me again what the point of Unizin is, please?” The truth is that the Unizin coalition is still holding its cards close to its vest. I suspect there are details of the deals being discussed in back rooms that are crucial to understanding why universities are potentially interested. That said, we do know a couple of broad, high-level ambitions that the Unizin leadership has discussed publicly. One of those is to advance the state of learning analytics. Colorado State University’s VP of Information Technology Pat Burns has frequently talked about “educational Moneyball” in the context of Unizin’s value proposition. And having spoken with a number of stakeholders at Unizin-curious schools, it is fair to say that there is a high level of frustration with the current state of play in commercial learning analytics offerings that is driving some of the interest. But the dots have not been connected for us. What is the most feasible path for advancing the state of learning analytics? And how could Unizin help in this regard?

It turns out that the walled garden questions and the learning analytics questions are related.

The Current State of Interoperability

Right now, our LMS gardens still have walls and very few doors, but they do have windows, thanks to the IMS LTI standard. You can do a few things with LTI, including the following:

  • Send a student from the LMS to someplace elsewhere on the web with single sign-on
  • Bring that “elsewhere” place inside the LMS experience by putting it in an iframe (again, with single sign-on)
  • Send assessment results (if there are any) back from that “elsewhere” to the LMS gradebook.

The first use case for LTI was to bring a third-party tool (like a web conferencing app or a subject-specific test engine) into the LMS, making it feel like a native tool. The second use case was to send students out to a tool that needed full control of the screen real estate (like an eBook reader or an immersive learning environment) but to make that process easier for students (through single sign-on) and teachers (through grade return). This is nice, as far as it goes, but it has some significant limitations. From a user experience perspective, it still privileges the LMS as “home base.” As D’Arcy points out, that’s fine for some uses and less fine for others. Further, when you go from the LMS to an LTI tool and back, there’s very little information shared between the tools. For example, you can use LTI to send a student from the LMS to a WordPress multiuser installation, have WordPress register that student and sign that student in, and even provision a new WordPress site for that student. But you can’t have it feed back information on all the student’s posts and comments into a dashboard that combines it with the student’s activity in the LMS and in other LTI tools. Nor can you use LTI to aggregate student posts from their respective WordPress blogs that are related to a specific topic. All of that would have to be coded separately (or, more likely, not done at all). This is less than ideal from both user experience and analytics perspectives.

Enter Uniz…Er…Caliper

There is an IMS standard in development called Caliper that is intended to address this problem (among many others). I have described some of the details of it elsewhere, but for our current purposes the main thing you need to know is that it is based on the same concepts (although not the same technical standards) as the semantic web. What is that? Here’s a high-level explanation from the Man Himself, Mr. Tim Berners-Lee:

(Embedded video: Tim Berners-Lee explains the semantic web.)

The basic idea is that web sites “understand” each other. The LMS would “understand” that a blog provides posts and comments, both of which have authors and tags and categories, and some of which have parent/child relationships with others. Imagine if, during the LTI initial connection, the blog told the LMS about what it is and what it can provide. The LMS could then reply, “Great! I will send you some people who can be ‘authors’, and I will send you some assignments that can be ‘tags.’ Tell me about everything that goes on with my authors and tags.” This would allow instructors to combine blog data with LMS data in their LMS dashboard, start LMS discussion threads off of blog posts, and probably a bunch of other nifty things I haven’t thought of.

But that’s not the only way you could use Caliper. The thing about the semantic web is that it is not hub-and-spoke in design and does not have to have a “center.” It is truly federated. Perhaps the best analogy is to think of your mobile phone. Imagine if students had their own private learning data wallets, the same way that your phone has your contact information, location, and so on. Whenever a learning application—an LMS, a blog, a homework product, whatever—wanted to know something about you, you would get a warning telling you which information the app was asking to access and asking you to approve that access. (Goodbye, FERPA freakouts.) You could then work in those individual apps. You could authorize apps to share information with each other. And you would have your own personal notification center that would aggregate activity alerts from those apps. That notification center could become the primary interface for your learning activities across all the many apps you use. The PLE prototypes that I have seen basically tried to do a basic subset of this capability set using mostly RSS and a lot of duct tape. Caliper would enable a richer, more flexible version of this with a lot less point-to-point hand coding required. You could, for example, use any Caliper-enabled eBook reader that you choose on any device that you choose to do your course-related reading. You could choose to share your annotations with other people in the class and have their annotations appear in your reader. You could share information about what you’ve read and when you’ve read it (or not) with the instructor or with a FitBit-style analytics system that helps recommend better study habits. The LMS could remain primary, fade into the background, or go away entirely, based on the individual needs of the class and the students.

Caliper is being marketed as a learning analytics standard, but because it is based on the concepts underlying the semantic web, it is much more than that.

Can Unizin Help?

One of the claims that Unizin stakeholders make is that the coalition can accelerate the arrival of useful learning analytics. We have very few specifics to back up this claim so far, but there are occasionally revealing tidbits. For example, University of Wisconsin CIO Bruce Mass wrote, “…IMS Global is already working with some Unizin institutions on new standards.” I assume he is primarily referring to Caliper, since it is the only new learning analytics standard that I know of at the IMS. His characterization is misleading, since it suggests a peer-to-peer relationship between the Unizin institutions and IMS. That is not what is happening. Some Unizin institutions are working in IMS on Caliper, by which I mean that they are participating in the working group. I do not mean to slight or denigrate their contributions. I know some of these folks. They are good smart people, and I have no doubt that they are good contributors. But the IMS is leading the standards development process, and the Unizin institutions are participating side-by-side with other institutions and with vendors in that process.

Can Unizin help accelerate the process? Yes they can, in the same ways that other participants in the working group can. They can contribute representatives to the working groups, and those representatives can suggest use cases. They can review documents. They can write documents. They can implement working prototypes or push their vendors to do so. The latter is probably the biggest thing that anyone can do to move a standard forward. Sitting around a table and thinking about the standard is good and useful, but it’s not a real standard until multiple parties implement it. It’s pretty common for vendors to tell their customers, “Oh yes, of course we will implement Caliper, just as soon as the specification is finalized,” while failing to mention that the specification cannot be finalized until there are implementers. What you end up with is a bunch of kids standing around the pool, each waiting for somebody else to jump in first. In other words, what you end up with is paralysis. If Unizin can accelerate the rate of implementation and testing of the proposed specification by either implementing themselves or pushing their vendor(s) to implement, then they can accelerate the development of real market solutions for learning analytics. And once those solutions exist, then Unizin institutions (along with everyone else) can use them and try to discover how to use all that data to actually improve learning. These are not unique and earth-shaking contributions that only Unizin could make, but they are real and important ones. I hope that they make them.

The post On False Binaries, Walled Gardens, and Moneyball appeared first on e-Literate.

JDeveloper 12c ADF View Token Performance Improvement

Andrejus Baranovski - Sat, 2014-09-20 05:37
There is a known limitation in ADF 11g related to accessing an application from multiple browser tabs in the same session. When working with multiple browser tabs, the user eventually consumes all view tokens and gets a timeout error upon returning to a previous browser tab. The unused browser tab times out because ADF 11g shares the same cache of view tokens across all browser tabs in the same session: the most recently used browser tab consumes all the view tokens, the other browser tab loses its token, and the screen state is reset. This behaviour is greatly improved in ADF 12c, which maintains a separate view token cache per browser tab. If your application is designed to let users work in multiple browser tabs in the same session, you should upgrade to ADF 12c for better reliability.

I'm going to post results of a test with 11g and 12c. Firstly I'm going to present ADF 11g case and then ADF 12c.

ADF 11g view token usage:

The sample application contains one regular button with PartialSubmit=false, so that a new view token is generated on every submit:


The Max Tokens parameter in web.xml is set to 2, to make token exhaustion easy to reproduce:
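
For reference, a minimal sketch of what this setting looks like in web.xml, assuming the standard Trinidad context parameter (the value 2 is only for this test; production values are normally much higher):

<context-param>
  <param-name>org.apache.myfaces.trinidad.CLIENT_STATE_MAX_TOKENS</param-name>
  <param-value>2</param-value>
</context-param>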



To see the debug output for view token usage at runtime, you should set a special parameter: org.apache.myfaces.trinidadinternal.DEBUG_TOKEN_CACHE=true



At runtime, open two browser tabs. You will see two view tokens consumed and reported in the log:



Press the Test View Token button in the second tab; this consumes another view token. Remember, we have set a maximum of two tokens, and now the log reports removing/adding a token. This means we have consumed both available tokens (one per tab) and the token from the first tab is replaced:



Go back to the first browser tab and press the same Test View Token button: you will get a timeout error, because the view token is lost and you need to reload the tab:


ADF 12c view token usage:

The sample application is built in the same way as in 11g, with a simple button set to PartialSubmit=false. This forces a new view token on each submit:


The Max Tokens parameter in web.xml is again set to 2:


Two browser tabs are opened and two view tokens are consumed:


Press Test View Token in the second browser tab: unlike in 11g, the log shows no information about removing/adding a token. This means the view token from the first browser tab remains in the cache; each browser tab maintains its own view token cache:


Go back to the first browser tab and press the Test View Token button: the application works fine, with no timeout error as there was in 11g:


Download the sample application with both ADF 11g and 12c examples - ViewTokensTest.zip.

Focus on Oracle Social Network at OpenWorld

David Haimes - Fri, 2014-09-19 22:49

This is the first of a series of posts I am planning leading up to Oracle OpenWorld which starts in less than a week.  I have a few different focus areas this year, so I’ll write a little about each of them.

I’ve been talking about collaboration in ERP for quite some time and was also very flattered to have TheAppsLab and Ultan from the UX team cover what we have done in their blogs too.  I call it Socializing the Finance Department; it isn’t about more Pot Luck Lunches and after-work drinks, it is about using social tools in a secure and efficient manner, embedded in your ERP system, tied to your transactions and business flows to make you more productive.

The Oracle Social Network (OSN) is part of the infrastructure we build our cloud applications on, so it is pervasive in our cloud apps.  There are a lot of good sessions; see here for the complete OSN list.  I will be on a panel discussing the best use cases for social in enterprise applications on Tuesday, September 30th, 5 p.m. – 5:45 p.m., Moscone West – 3022; full details here.

We won’t be doing a demo, but here is one video to give you a taste of what we will discuss, or check out my post Can chatting make us more productive? for another video.  To be honest, if you catch me during the #oow week, I’m usually happy to show this off, so feel free to ask me.


Categories: APPS Blogs