
Feed aggregator

LPAR and Oracle Database

Pakistan's First Oracle Blog - Tue, 2015-04-07 19:30
What is LPAR?

LPAR stands for Logical Partitioning, a virtualization feature of IBM Power systems used with AIX (and also available with Linux). By abstracting all the physical devices in a system, LPAR creates a virtualized computing environment.

Within a server, the processors, memory, and storage are divided into multiple sets. Each set contains its own share of processor, memory, and storage resources, and each such set is called an LPAR.

One server can run many LPARs at the same time, and these LPARs communicate with each other as if they were separate machines.

What is DLPAR?

DLPAR stands for Dynamic Logical Partitioning. With DLPAR, LPARs can be reconfigured dynamically, without a restart: memory, CPU, and storage can be moved between LPARs on the fly.

What is HMC?

HMC stands for Hardware Management Console. The HMC is the interface used to manage LPARs; it is Java-based and can be used to manage many systems.

If an LPAR is in shared processor mode, it may see excessive CPU usage without the following fixes:


APARs for WAITPROC IDLE LOOPING CONSUMES CPU:
IV01111 AIX 6.1 TL05 if before SP08 (fixed in SP08)
IV06197 AIX 6.1 TL06 if before SP07 (fixed in SP07)
IV10172 AIX 6.1 TL07 if before SP02 (fixed in SP02)
IV09133 AIX 7.1 TL00 if before SP05 (fixed in SP05)
IV10484 AIX 7.1 TL01 if before SP02 (fixed in SP02)

This problem can affect POWER7 systems running any level of Ax720 firmware prior to Ax720_101, but it is recommended to update to the latest available firmware. If required, AIX and firmware fixes can be obtained from IBM Support Fix Central:
http://www-933.ibm.com/support/fixcentral/main/System+p/AIX
Categories: DBA Blogs

APEX 5.0: Want to learn all about Universal Theme?

Patrick Wolf - Tue, 2015-04-07 15:29
And do you want to learn it straight from Shakeeb Rahman the Designer of Universal Theme? Then you have to sign up for the free ODTUG Webinar on Thursday, April 9 at 12:00PM EDT. That’s 17:00 London, 18:00 Vienna or … Continue reading →
Categories: Development

Finance and HR: A Marriage Made in Cloud Heaven

Linda Fishman Hoyle - Tue, 2015-04-07 14:09

A Guest Post by Dee Houchen (pictured left), senior principal product marketing director for Oracle ERP Cloud

What’s keeping CFOs up at night?

In conversations with chief financial officers, there are a number of business issues that are common across industries and geographies. One such issue—finding and retaining the best finance talent—is a topic that can dramatically impact the CFO’s effectiveness within his or her own organization.

Over the past several years, the role of the CFO has evolved from someone who keeps the books, to a more visionary and advisory role. Modern CFOs provide the essential reports, insight, and information needed to drive strategy around the boardroom table. This requires a new skill set—much different from the traditional accounting role. The best CFOs have learned to ask themselves the following questions:

  • How is the role of finance officer changing?
  • What sort of skills do finance professionals need today?
  • How do I attract and retain the best talent?
  • Do I have the right technology in place to keep my best and brightest engaged and intellectually challenged?

Wall Street Journal Custom Studios recently issued a report that addresses some of the above questions. Winning the War for Finance Talent: Game Plan for the Digital Age provides six recommendations on how finance leaders can improve bench strength within their own organizations. This infographic summarizes the report’s six recommendations:

  1. Recruit—and pay for—talent armed with a greater variety of skills
  2. Fill talent gaps by grooming from within
  3. Ask the right questions when analyzing data—you want answers that propel your business
  4. Share data with your team or it has no value
  5. Use technology and data insights to read your customers’ needs more accurately—making you a better business partner
  6. Upgrade your technology to attract and retain the best and the brightest. If you don’t, you may lose gifted people to more modern rivals.

On the last recommendation, many finance offices are looking to the cloud to not only update their technology, but improve efficiency between finance and HR. A modern ERP and HCM cloud—with built-in social capabilities, data analysis and dashboards, designed for today’s mobile workforce—can provide the technology edge that CFOs need to attract, and retain, the best talent. Read more in the recent Forbes.com articles, How to Win the Finance Talent You Need and Turning Bean Counters into Difference-Makers: How Corporate Finance is Changing with the Times.


Get Recognition for your Knowledge of My Oracle Support by Becoming a My Oracle Support Accredited User

Joshua Solomin - Tue, 2015-04-07 08:55
Join us as part of our live Collaborate15 event.

If you are attending the Oracle Applications Users Group (OAUG) Collaborate15 conference, use the event Scheduler to locate Accreditation sessions.

Sessions are available for Oracle Database, Oracle E-Business Suite, JD Edwards, PeopleSoft, and Primavera. Take the opportunity to attend the expert Q&A sessions to learn more about the accreditation and ask questions; you can also attend the exam session to take your accreditation exam.

(Expert sessions are not replacements for the pre-training refresher videos. Oracle Support staff will be on hand to assist in the exam session—but they will not answer the questions in the exam.)

Add sessions to your planner by entering the term Accreditation and clicking Search to locate them. You can refine your search further by selecting a date.

Session Scheduler

What happens if I am not going to Collaborate15?

Don't miss out—join us virtually by doing the pre-work and watching the Accreditation Series for My Oracle Support and the Level 2 products you use. If anything is new to you, take time to deep dive by reviewing the online help or taking the additional how-to training modules listed in Document 603505.1.

Share your experience—via Twitter and include @myoraclesupport to share your success story.

Join Us in the Accreditation Community—At times we will deconstruct questions similar to those in the exam, breaking down the purpose of the question and the answers the exam is looking for. Please note that posting exam questions in a public forum is a violation of the Accreditation Program terms of use.

The community is the place to post your questions about the exam itself—share what you thought about the exam experience:

  • Did you learn new Best Practices?
  • Any issues printing your certificate?
  • Did you vote in the Poll to have an Accreditation Community Badge?

Previewing Three Sessions at the Brighton Rittman Mead BI Forum 2015

Rittman Mead Consulting - Tue, 2015-04-07 03:00

As well as a one-day masterclass by myself and Jordan Meyer, a data visualisation challenge, keynotes and product update sessions from Oracle and our guest speaker from the Oracle Data Warehouse Global Leaders Program, the Brighton Rittman Mead BI Forum 2015 has of course a fantastic set of speakers and sessions on a wide range of topics around Oracle BI, data warehousing and big data. In this blog post I’m going to highlight three sessions at the Brighton BI Forum, and later in the week I’ll be doing the same with three sessions from the Atlanta event – so let’s start with a speaker who’s new to the BI Forum but very well-known to the UK OBIEE community – Steve Devine.

Steve is one of the most experienced OBIEE practitioners in Europe, recently with Edenbrook / Hitachi Consulting and Claremont, and now working with Altius in the UK. In his session at the Brighton BI Forum 2015, Steve’s going to talk to us about what’s probably the hottest topic around OBIEE at the moment: “The Art and Science of Creating Effective Data Visualisations”. Over to Steve:


“These days, news publications and the internet are packed with eye-catching data visualisations and infographics – the New York Times, the Guardian or Information Is Beautiful to name but a few. Yet the scientists and statisticians tell us that everything could be a bar chart, and that nothing should ever be a pie chart! How do we make sense of these seemingly disparate, contrasting views?
My presentation provides an introduction on how graphic design principles complement the more science orientated aspects of data viz design. It will focus on a simple-to-apply design framework that brings all of these principles together, enabling you to create visualisations that have the right balance of aesthetics and function. By example, I’ll apply this framework to traditional BI scenarios such as operational and exploratory dashboards, as well as new areas that BI tools are just beginning to support such as commentary and storytelling. I’ll also look at how well Oracle’s BI tools address today’s data visualisation needs, and how they compare to the competition.”

On the topic of data visualisation, I’m also very pleased to have Daniel Adams from Rittman Mead’s US office coming over to the Brighton BI Forum to talk about effective dashboard design. Daniel’s been working with Rittman Mead clients in the US and Europe for the past year helping them apply data visualisation and dashboard design best practices to their dashboards and reports, and he’ll be sharing some of his methods and approaches in his session “User Experience First: Guided information and attractive dashboard design”:


“Most front end OBI developers can give users exactly what they ask for, but will that lead to insightful dashboards that improve data culture and elevate the user experience? One of the biggest mistakes I see as a designer is dashboards that are a cluttered collection of tables and graphs. Poorly designed dashboards can prevent users from adopting a BI implementation, diminishing the ROI.
In this session, attendees will learn to design dashboards that inform, instruct, and lead to smart discussion and decisions.  This includes learning to visualize data to convey meaning, implementing attractive visual design, and creating a layout that leads users through a target rich environment. We will walk through a series of “before” and “after” dashboards that demonstrate the difference between meeting a requirement, and using proven UX and UI design concepts to make OBIEE dashboards insightful and enjoyable to use.”

Finally, someone I’m very pleased to have over to the Brighton BI Forum for the first time is Gerd Aiglstorfer. I first met Gerd at an Oracle event in Germany several years ago, and since then I’ve noticed several of his blogs and the launch of his Oracle University Expert Sessions on OBIEE development, administration and RPD modelling. Gerd is one of Europe’s premier experts in OBIEE and Oracle BI, and for his inaugural BI Forum presentation he’ll be deep-diving into one of the most complex topics around repository modeling in his session “Driving OBIEE Join Semantics on Multi Star Queries as User”:


“Multi star queries are a very useful and powerful piece of OBIEE functionality, but when I examine reports developed by business users or report developers I often find misunderstandings about how it works and how the queries are built by OBIEE. Additionally, as the execution strategy for generating SQL for multi star queries has changed in OBIEE 11.1.1.7, I had the idea of introducing the topic at the BI Forum. It’s quite an interesting topic for going into the technical details of the OBIEE SQL generator engine.
I’ll introduce how users can drive join semantics on common fields in multi star queries. You will get a full picture of the functionality for a better understanding of how report creation affects SQL generation. I recognized some inconsistencies during my tests of the new OBIEE 11.1.1.7 logic in January 2014. I will demonstrate the issues and would like to discuss if you would say: “It’s a defect within the SQL generator engine” – as I do.”

Full agenda details on the Brighton Rittman Mead BI Forum 2015 can be found on the event homepage, along with details of the optional one-day masterclass on Delivering the Oracle Information Management and Big Data Reference Architecture, and our first-ever Data Visualisation Bake-Off, using the DonorsChoose.org dataset.

Categories: BI & Warehousing

QlikSense – The easy way to create a QlikView report

Yann Neuhaus - Tue, 2015-04-07 00:57

Today, one of the most widely used Business Intelligence products is undoubtedly QlikView. This Swedish product has been among the best positioned in the Gartner Magic Quadrant for Business Intelligence and Analytics Platforms for a number of years.

BI-Gartner-201502.png

To make Business Intelligence even more accessible to business users, Qlik decided to add an additional product to its range, one that is more accessible and easier to use for creating analytics reports. That’s why QlikSense was created.

1. Build a report in a few mouse clicks

One of the great strengths of QlikView is that using an analytical report is very simple. The user of a QlikView report only needs to select the criteria they wish to apply, and the tables and graphs adapt automatically.

report_qlikview.png 

 

In this case, the user has just clicked on the criterion "Argentina", and all of the report data has been filtered to show only the data for this country.

However, building a QlikView report is not so simple. The multitude of options available for creating each artifact can quickly become quite complex. This leaves the end user dependent on the report developer, both for the report's construction and for any changes they would like to make.

To make the creation of these reports accessible to more people, Qlik decided to launch QlikSense. This product brings simplicity to building reports because it uses drag-and-drop technology. The user can now create tables and charts using only the mouse, making the creation of a report as easy as in Excel.

To create a new report in QlikSense, click "Create a new App"

1.png

Give the new application a name and click « Create »

2_20150331-072751_1.png

Click on « Open»

3.png

As in any QlikView report, the first thing to do is to retrieve data. There are two options for doing that: "Quick Data Load" or "Data Load Editor".

  • “Quick Data Load" allows you to select data coming from a flat file or located in an XLS sheet.
  • "Data Load Editor" allows you to create more complex data sources based on SQL-type queries.

4_20150331-073800_1.png

For our example, we are using « Data Load Editor ». After opening the application, you simply select the data source connector you want to use and set up the connection to your data source.

5.png

I will later write a separate blog post focused on creating and setting up data connections in QlikView and QlikSense.

In this case, we will use a .qvx file. Such files are created through the QlikView ETL application called "QlikView Expressor".

The load script is then displayed on the screen in SQL format.

6.png

You just have to launch the data load.

7.png

When it’s done, you should activate the « App Overview » mode.

8.png

Then, the user can create a new sheet:

9.png

Give it a name and click on it.

10.png

To display the « Edit » mode, you just have to click on this icon:

11.png

At this point, the user can create a chart using drag and drop.

drag_drop.png

After choosing the chart type, select your dimensions and measures.

select_dim.png

And that’s the result.

graph.png

Despite its simplicity, QlikSense retains QlikView's power for creating and configuring charts. But the bulk of the operations is now done by selecting options, which makes it much easier to create the different artifacts.

2.Type "Map" chart

Qlik has not only added drag and drop. One of the great innovations in QlikSense is the ability to very easily create a "Map" chart, that is to say a chart containing geographical information. This was quite difficult to do in QlikView because it required fairly complex calculated variables. With QlikSense, you just have to use the "Map" object created specially for this purpose and fill in the requested information.

To create a "Map" chart, select the chart type "Map".

geo_graph_1.png

The only prerequisite for this kind of chart is a GeoMakePoint object. It is created by combining two geolocation fields (latitude and longitude).

In the example below, the GeoMakePoint object "Location" holds the location of the cities.

makepoint.png

Click on the "Layers" menu. In the "Add Layers" box, insert the GeoMakePoint object you have just created, and in the "Expression" box, add an aggregate expression.

geo_graph_7.png

Then go to the "Background" menu, select the Slippy map server you want to use, and copy the URL and the attribution of the selected server.
geo_graph_8.png

A small comment: all Slippy Map servers can be used by QlikSense. They are listed on the QlikView website; to access the list, simply click on the hyperlink "URL and Allocation".

geo_graph_4.png

To activate the different colors for your dimensions on the map, go to the "Appearance" menu and choose the following options:

geo_graph_9.png

The final result is displayed on a map. The points are the "city" dimension, and the size of each point represents the measure (Sum(Profit)).

map.png

This chart is as interactive as the others; it also applies the filters selected by the user.

3. The "Story" mode

QlikSense also offers the possibility to create "Stories", a kind of Microsoft PowerPoint presentation. A "Story" is simply a collection of report views that have been frozen with the values the user wishes to highlight. These presentations can also contain animation options similar to those used in Microsoft PowerPoint. Stories are built from existing QlikSense reports, of which the user takes a snapshot after applying the desired filter.

To create a "Story" , simply activate the "App Overview" mode.

8.png

Then select the « story » mode and click on « new story ».

12_20150331-081429_1.png

Give it a name, press « Enter », and click on the story.

13.png

Before creating the story, you should have taken snapshots from other sheets.

Warning: you can’t insert snapshots taken in other QlikSense applications.

To create a snapshot, open a QlikSense sheet in your application.

14.png

Open the story you have made and insert the snapshots you need.

15.png

You can add text boxes, Microsoft PowerPoint-style animations, shapes, and media objects.

16.png

And that’s the final result.

17.png

In conclusion, we can say that QlikView and QlikSense are complementary products that are not aimed at the same type of user.

  • QlikSense is easier to handle and is designed especially for use as a dashboarding tool
  • Using HTML 5 technology, it allows you to create reports for any kind of device
  • Map type graphs are easier to create
  • You can create Stories

So we can say that QlikSense will be adopted more readily by the business than QlikView because of its ease of use.

APEX 5.0: Create Plug-ins as Subscription from Master Application

Patrick Wolf - Mon, 2015-04-06 15:00
In APEX 4.x it was a little bit cumbersome to use new Plug-ins in an application when you followed the best practice to subscribe Plug-ins from a Master Application which contained all Plug-ins of your workspace. Having such a Master Application has … Continue reading →
Categories: Development

Oracle Extends Integrations Between Marketing, Web, and Commerce Solutions To Help Marketers Enhance the Customer Experience

WebCenter Team - Mon, 2015-04-06 11:35

Latest integrations help marketers increase efficiencies, drive revenue and orchestrate personalized customer experiences

 

MODERN MARKETING EXPERIENCE – LAS VEGAS – April 1, 2015 – To help marketers deliver more personalized and engaging customer experiences across digital channels, Oracle today announced the integration of Oracle Marketing Cloud’s cross-channel marketing solution with Oracle Commerce and Oracle WebCenter Sites. The integrations help marketers increase conversions and drive revenue through cross-channel marketing efforts by enabling personalized customer experiences to be seamlessly orchestrated across digital channels.

Read the press release and CMSWire article

Collaborate 2015: WebCenter Discussions and Networking in Sunny Las Vegas

In less than a week, Fishbowl’s WebCenter experts will be heading to sunny Las Vegas for Collaborate 2015! We have a wide range of activities planned, and are looking forward to meeting and learning from other WebCenter users. If you’d like to view a full list of what Fishbowl will be participating in at Collaborate, download our Focus on Fishbowl guide. IOUG also has detailed information about Collaborate on their website.


Exhibit Information | Booth #948

Stop by Fishbowl’s booth for demos and discussions of Google Search for WebCenter, next-generation portals and intranets, image-enabled accounts payable solutions, and our newest product, ControlCenter, which provides an intuitive user interface along with workflow and review automation for controlled document management. We’ll be holding a special giveaway related to ControlCenter; stop by the booth for more details and to also register for an iPad drawing!

Presentation Information | Room Banyan F

Fishbowl will be holding three presentations at Collaborate, all in room Banyan F at Mandalay Bay. Be sure to attend to hear firsthand how our WebCenter team is working with customers to solve business problems.

Tuesday, April 14, 3:15-4:15 PM: Engaging Employees Through an Enterprise Portal: HealthPartners Case Study
Presented by Neamul Haque of HealthPartners and Tim Gruidl and Jerry Aber of Fishbowl Solutions

  • Issues HealthPartners had with previous portal sites
  • Benefits of deploying a content-centric, portal-focused framework
  • Improvement in end-user experience HealthPartners has seen with the new portal

Wednesday, April 15, 2:45-3:45 PM: The Doctors Company Creates Mobile-Ready Website Using Responsive and Adaptive Design
Presented by Paul Manno of The Doctors Company and Jimmy Haugen of Fishbowl Solutions

  • Importance of The Doctors’ website for educating customers and prospects
  • How responsive and adaptive design transformed user experience
  • Technologies leveraged to create a mobile-optimized site

Thursday, April 16, 8:30-9:30 AM: Using Oracle WebCenter Content for Document Compliancy in Food and Manufacturing
Presented by Kim Negaard and George Sokol of Fishbowl Solutions

  • Techniques for using revision control and automatic conversion
  • How to provide additional security and auditability around document approvals
  • How to increase efficiency and control over changes in documents

If you’d like to schedule a meeting with anyone on the Fishbowl team during Collaborate, feel free to contact us at cmsales@fishbowlsolutions.com. See you in Las Vegas!

The post Collaborate 2015: WebCenter Discussions and Networking in Sunny Las Vegas appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

The Fitbit Surge: Watching Where the Super Watch Puck Goes

Oracle AppsLab - Mon, 2015-04-06 10:26

Editor’s note: Here’s a review of the Fitbit Surge from Ultan (@ultan, @usableapps); if anyone can road-test a fitness tracker, it’s him. As luck would have it, the Surge is on my list of wearables to test as well. So, watch this space for a comparison review from a much less active person. Enjoy.

I’ve upgraded my Fitbit experience to the Fitbit Surge, the “Fitness Super Watch.”

Why?

I’ve been a Fitbit Flex user for about 18 months. I’ve loved its simplicity, unobtrusiveness, colourful band options, and general reliability. I’ve sported it constantly, worldwide. I’ve worn out bands and exhausted the sensor until it was replaced by the help of some awesome Fitbit global support. I’ve integrated it with the Aria Wi-Fi scales, synching diligently. I’ve loved the Fitbit analytics, visualization, the badges, and comparing experiences with others.

The human body makes more sense for me as a dashboard than a billboard, as Chris Dancy (@servicesphere) would say.

But I wanted more.

The Flex didn’t tell me very much on its own—or in the moment—other than when a daily goal was reached or the battery needed attention. I had to carry a smartphone to see any real information.

I am also a user of other fitness (mostly running) apps: Strava, MapMyRun, Runcoach, Runkeeper, and more. All have merits, but again, I still need to carry a smartphone with me to actually record or see any results. This also means that I need to run through a tiresome checklist daily to ensure the whole setup is functioning correctly. And given the increasing size of smartphones, I am constantly in need of new carrying accessories. I’m a mugger’s dream with twinkling phablets strapped to my arms at night, not to mention asking for technical grief running around in European rain.

The Surge seemed like a good move to make.

Spinning up the Fitbit Surge in the gym

Onboarding the Superwatch Experience

I tested my new Fitbit Surge right out of the box in Finland on long snowy runs around Helsinki and have hammered it for weeks now with activities out in the Irish mist and in gyms, too. My impressions:

  • I love the fact that the Surge is standalone. I can record and glance at activity performance quickly, without the whole smartphone connectivity checklist thing.
  • The UI is intuitive with just three buttons (Home, Activity, and Select), and it incorporates swipe gestures and click interactions to get through the little cards that make up the UI paradigm. Familiar. Easy.
  • The Surge records and shows my heart rate, something that I realize should always be part of my fitness plan (duh). I discovered a resting heart rate of around 50 BPM. Read. Weep.
  • The Surge has enhanced notifications capability, and I can see SMS messages or cell phone calls coming in. Nice.
  • The Surge has options for choosing between predefined activities. Fast.
  • The battery life (charging is via USB) is a major bonus over other smartwatches. The limited battery life of the Moto 360, for example, drives me crazy. The Surge battery gives me about three days (although that is less than advertised).
  • Having GPS is awesome, as I like to record and see where I have been, worldwide.
  • I am happy with the recorded data, and it seems comparable to the data quality I demand for my runs. I’ve had concerns about the Flex and other devices in this regard.

On the downside:

  • I don’t like the fact that the Surge is available only in black (as of now), that the display is monochrome, and that there are no interchangeable band options. I’m a #fashtech kinda guy.
  • You can only use one Fitbit device at a time. (I’m like that; I might like to wear a different device on different occasion.)
  • The predetermined activities are slightly limiting. Who knows, maybe ironing in the nude burns lots of calories? (I don’t, by the way.)
  • The call notifications and text notifications are great, but to do anything in the moment with those alerts I need to turn to my phone, unlike, say, my Android Wear Motorola Moto 360, which lets me respond using voice.
  • Having to actually tell the watch what you’re doing first is pure Age of Context denial. Google Fit, for example, does a decent job of automatically sensing what activity I am up to, and where and when I am. Plus, it lets me enter data manually and plays nice with my Moto 360 for a glanceable UI.
  • And then there’s the “unlearning” of the Flex invisibility. I’ve walked off quite a few times forgetting Surge is still in action, and only hours later realized I needed to stop the thing.
Relative glance: Fitbit Surge versus Motorola Moto 360

Thoughts on the Surge and Super Watch Approach

An emerging wearable technology analyst position is that upped smartwatches such as the Fitbit Surge or “super watches” will subsume the market for dedicated fitness bands. I think that position is broadly reasonable, but requires nuance.

Fitness bands (Flex, Jawbone Up, and so on), as they stand, are fine for the casual fitness type, or for someone who wants a general picture of how they’re doing, wellness-wise. They’ll become cheaper, giveaways even. More serious fitness types, such as hardcore runners and swimmers, will keep buying the upper-end Garmin-type devices and yes, will still export data and play with it in Microsoft Excel. In the middle of the market, there’s that large, broad set of serious amateurs, Quantified Self fanbois, tech heads, and the more competitive or jealous wannabe types who will take to the “super watches.”

And yet, even then, I think we will still see people carrying smartphones when they run or work out in the gym. These devices still have richer functionality. They carry music. They have a camera. They have apps to use during your workout or run (be they for Starbucks or Twitter). And you can connect to other people with them by voice, text, and so on.

I like the Fitbit Surge. Sure, it’s got flaws. But overall, the “super watch” approach is a sound one. The Surge eliminates a lot of complexity from my overall wearable experience, offers more confidence about data reliability, and lets me enjoy the results of my activity efforts faster, at a glance. It’s a more “in the moment” experience. It’s not there yet on context and fashion, but it will be, I think.

Anyone wanna buy some colored Fitbit Flex bands?

Consumer Reports guide on which fruits and veggies to always buy organic

FeuerThoughts - Mon, 2015-04-06 09:32
First, I encourage everyone reading this (and beyond) to subscribe to Consumer Reports and make use of their unbiased, science-based reviews of products and services.

It is the best antidote to advertising you will ever find.

In their May 2015 issue, they analyze the "perils of pesticides" and offer a guide to fruits and vegetables. When should you buy organic? When might conventional be OK for you?

[Or, as CR puts it, "Though we believe that organic is always the best choice because it promotes sustainable agriculture, getting plenty of fruits and vegetables - even if you can't obtain organic - takes precedence when it comes to your health."]

Here are the most important findings:

ALWAYS BUY ORGANIC

CR found that for these fruits and vegetables, you should always buy organic - the pesticide risk in conventional is too high.

Fruit

Peaches
Tangerines
Nectarines
Strawberries
Cranberries

Vegetables

Green Beans
Sweet Bell Peppers
Hot Peppers
Sweet Potatoes
Carrots
Categories: Development

Realtime BI Show with Kevin and Stewart – BI Forum 2015 Special!

Rittman Mead Consulting - Mon, 2015-04-06 08:00

Jordan Meyer and I were very pleased to be invited onto the Realtime BI Show podcast last week, run by Kevin McGinley and Stewart Bryson, to talk about the upcoming Rittman Mead BI Forum running in Brighton and Atlanta in May 2015. Stewart and Kevin are of course speaking at the Atlanta BI Forum event on May 13th-15th 2015 at the Renaissance Atlanta Midtown Hotel, Atlanta, and in the podcast we talk about the one-day masterclass that Jordan and I are running, some of the sessions at the event, and the rise of big data and data discovery within the Oracle BI+DW industry.

Full details on the two BI Forum 2015 events can be found on the event homepage, along with details of the optional one-day masterclass on Delivering the Oracle Information Management and Big Data Reference Architecture, the guest speakers and the inaugural Data Visualization Challenge. Registration is now open and can be done online using the two links below.

  • Rittman Mead BI Forum 2015, Brighton –  May 6th – 8th 2015 

We’ve also set up a special discount code for listeners to the Realtime BI Show, with 10%-off both registration and the masterclass fee for both the Brighton and Atlanta events – use code RTBI10 on the Eventbrite registration forms to qualify.

Categories: BI & Warehousing

Oracle Forms Migration to Formspider

Gerger Consulting - Mon, 2015-04-06 06:59
We’d like to invite you to our free webinar about migrating Oracle Forms applications to Formspider. Join our webinar and find out how the Formspider customer TEAM GmbH from Germany migrated their large (500+ forms) Oracle Forms based product to Formspider at a fraction of the cost of the alternatives. The webinar will be presented by Frank Zscherlich (Division Manager, TEAM GmbH), Michael Wibberg (Product Manager, TEAM GmbH) and Yalim K. Gerger (Founder, Formspider). During the webinar, the following topics will be covered:
  • What other products did TEAM look at?
  • Why did TEAM choose Formspider?
  • What is it like to work with the company behind Formspider?
  • What was TEAM’s approach to Forms Migration?
Watch a short demo of TEAM’s new application at this link. The application was built by five Oracle Forms developers.

The webinar is free but space is limited. Sign up now.

The Formspider Team
Categories: Development

APEX 5.0: Plug-in Enhancements – Overview

Patrick Wolf - Mon, 2015-04-06 05:54
In the next few days I plan to write a series of blog postings about the Plug-In Enhancements which will be available in Oracle APEX 5.0* The plan is to cover the following areas: Create Plug-ins as Subscription from Master … Continue reading →
Categories: Development

Nephophobia

Floyd Teter - Sun, 2015-04-05 14:48
They say that these are not the best of times
But they're the only times I've ever known
And I believe there is a time for meditation
In cathedrals of our own

Now I have seen that sad surrender in my mother's eyes
I can only stand apart and sympathize
For we are always what our situations hand us
It's either sadness or euphoria

                          -- From Billy Joel's "Summer, Highland Falls"

In working with Oracle customers every day, I'm seeing a common thread running through many internal IT departments:  Nephophobia.  That's right, fear of clouds.  In this case, I'm talking fear of clouds from a technology perspective (I'm admittedly having a bit of fun here and mean no offense to anyone with a true fear of clouds).

The fear shows up through either resistance or an avalanche of "what if" or "what about" questioning.  I suspect that the cause of that fear is rooted in the fear of change, as in "what happens to my job"?  So this post is for all those folks in all those internal IT departments faced with moving to the cloud, whether it be SaaS, PaaS, Hybrid, or whatever.

You are spot on in recognizing that your world is changing.  All the things you've spent your time doing - patches, upgrades, general maintenance - they're all going away.  The cloud vendor will be taking over that work as part of the service to which your institution will subscribe.  But, as those tasks disappear, new opportunities arise.  Some examples:

Network administration:  because your users are interacting with off-location servers, the performance of your own internal network becomes even more critical in a move to the cloud.

Integration:  as much as the major enterprise application vendors would like you to stick with one platform, odds are you won't.  You'll probably mix two or more vendors plus some in-house applications.  Getting all these apps to talk to each other is critical.

Development:  one of the keys for enterprise application cloud vendors is that, in order to scale (and thus make money, because cloud services are a volume business), the business processes have to be pretty basic so they can be easily shared across multiple industries.  If you work with an institution that has unique transactional and/or reporting needs (I see this frequently with public sector organizations), there will be some custom development involved.  Extensions, bolt-on applications, unique reporting...all will live on to some extent, although probably not as much as you've seen in the past.

Mobile:  everyone wants mobile and the cloud provides a great platform for delivering mobile applications.  So all those things about network administration, integration and development?  They apply here as well...maybe even more so.

All this discussion notwithstanding, let's get to the root of it:  this type of change can threaten your job.  It's scary.  So what do you do?  Update your skills to stay relevant.  The key to making a living in IT over the long-term is to be continually learning new things.  If you don't make the investment on your own, you'll find yourself on the outside looking in.  So do it.  Dig into this cloud thing.  Learn the technical underpinnings.  Figure out where you and your IT department fit...how can you add value?  And feel the fear go away.

Not In CTAS

Jonathan Lewis - Sun, 2015-04-05 11:49

Everyone gets caught out some of the time with NOT IN.

NOT IN is not the opposite of IN.

This came up in a (fairly typical) question on OTN recently where someone had the task of “deleting 6M rows from a table of 18M”. A common, and perfectly reasonable, suggestion for dealing with a delete on this scale is to consider creating a replacement table holding the data you do want rather than deleting the data you don’t want.  In this case, however, the query for deleting the data looked like this:


DELETE FROM EI.CASESTATUS
     WHERE CASEID NOT IN (SELECT CASEID FROM DO.STG_CASEHEADER);

The suggested code for creating the kept data was this:


CREATE TABLE newTable as
  SELECT * FROM EI.CASESTATUS
     WHERE CASEID IN (SELECT CASEID FROM DO.STG_CASEHEADER);

You might get the same result sets in the two tables after this process – but it depends on the CASEID in both tables never being NULL. (You might think that a column with ID in the name probably ought to be a primary key, or a foreign key with a NOT NULL declaration, but then again there’s that STG_ in the subquery table that might be indicative of a “staging” table, so who knows what might happen if the data’s allowed to start life as dirty data.) Here’s a quick demo to prove the point. First some simple data creation – with an optional insert so that you’ve got two tests in one script – followed by the two strategies for identifying data:


drop table t3;
drop table t2;
drop table t1;

create table t1 (n1 number);
create table t2 (n1 number);

insert into t1 values(null);
insert into t1 values(1);
insert into t1 values(2);

/* two tests available, with or without a null in t2 */

-- insert into t2 values(null);
insert into t2 values(1);

commit;

-- gather stats here

set null n/a
delete from t1
where  t1.n1 not in (select t2.n1 from t2);

prompt  Remainder after delete

select  *  from t1;

rollback;

prompt  Selected on create

create table t3 as
select  *  from t1
where   t1.n1 in (select t2.n1 from t2);

select * from t3;

Then the two sets of output from running the test, first with the NULL insert into t2:


Remainder after delete

        N1
----------
n/a
         1
         2
Selected on create

        N1
----------
         1

We haven’t deleted ANY data from t1 when we were probably hoping that the 2 would disappear – after all, it’s not in t2; however since the equality comparison between a t1 row and every t2 row must evaluate to FALSE before a t1 row is deleted and the comparison of 2 and NULL evaluates to NULL the 2 row won’t be deleted (similarly the comparison for “t1.NULL = t2.anything” evaluates to NULL rather than FALSE, so the NULL isn’t deleted).
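
To see the three-valued logic in isolation, here are a couple of quick checks against the same t1/t2 data (with the NULL row present in t2). These queries are my addition rather than part of the original demo; the NOT EXISTS version is included purely to illustrate a null-aware alternative.

-- returns no rows: for n1 = 2 the comparison "2 != NULL" evaluates
-- to NULL rather than TRUE, so the NOT IN predicate is never TRUE
select  *
from    t1
where   t1.n1 not in (select t2.n1 from t2);

-- a null-aware alternative: NOT EXISTS ignores the NULL in t2,
-- so the 2 (and the NULL row from t1) are returned
select  *
from    t1
where   not exists (select null from t2 where t2.n1 = t1.n1);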

Still, perhaps the rewrite would have been okay for the data set where we don’t have a NULL in t2:


Remainder after delete

        N1
----------
n/a
         1
Selected on create

        N1
----------
         1

Oops – it still doesn’t produce matching results. This time the row with the 2 has disappeared from t1 in both cases – which might have been closer to what the original OTN poster had hoped – but we still have the difference in the survival of the NULLs from t1, for the reason given for the previous data set.

Footnote:

In passing, the execution plan subsequently supplied by the OP showed a “MERGE JOIN ANTI NA” with stg_caseheader (the subquery table) as the second table. The significance of the NA (Null-aware) is that it tells us that the join column in stg_caseheader definitely doesn’t have a NOT NULL constraint on it. (We can’t draw any conclusion about the join column in casestatus.)


EBS 12.2: do not ignore the database patches on top of AD Delta 5 and TXK Delta 5

Senthil Rajendran - Sun, 2015-04-05 08:13

Make sure all the recommended patches are in place as part of the bundle patch. Your EBS 12.2 ADOP cycle could become unstable without the database patches.

A world of confusion

Gary Myers - Sat, 2015-04-04 02:00
It has got to the stage where I often don't even know what day it is. No, not premature senility (although some may disagree). But time zones.

Mostly I've had it fairly easy in my career. When I worked in the UK, I just had the one time zone to work with. The only time things got complicated was when I was working at one of the power generation companies, and we had to make provision for the 23-hour and 25-hour days that go with Daylight Savings.

And in Australia we only have a handful of timezones, and when I start and finish work, it is the same day for any part of Australia. I did work on one system where the database clock was set to UTC, but dates weren't important on that application.

Now it is different. I'm dealing with events that happen all over the world. Again the database clock is UTC, with the odd effect that TRUNC(SYSDATE) 'flips over' around lunchtime. Now when I want to look at 'recent' entries (eg a log table) I've got into the habit of asking WHERE LOG_DATE > SYSDATE - INTERVAL '9' HOUR

And we also have columns that are TIMESTAMP WITH TIME ZONE, so I'm getting into the habit of selecting COL_TS AT TIME ZONE DBTIMEZONE. I could use sessiontimezone, but then the time component of DATE columns would be inconsistent. This becomes just a little more confusing at this time of year as various places slip in and out of Daylight Savings.
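
Putting those two habits together, a minimal sketch might look like the following. The table and column names (app_log, log_id) are made up for illustration; log_date and col_ts stand for the DATE and TIMESTAMP WITH TIME ZONE columns described above.

-- DATE columns are stored against the database clock (UTC here),
-- so "recent" is expressed as an interval back from SYSDATE
select *
from   app_log
where  log_date > sysdate - interval '9' hour;

-- TIMESTAMP WITH TIME ZONE columns are rendered consistently with
-- the DATE columns by converting them to the database time zone
select log_id,
       col_ts at time zone dbtimezone as col_ts_db
from   app_log
order by log_id;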

Now things are getting even more complicated for me.

Again, during my career, I've been lucky enough to be pretty oblivious to character set issues. Most things have squeezed in to my databases without any significant trouble. Occasionally I've had to look for some accented characters in people's names, but that's been it.

In the past few months, I've been working with some European data where the issues have been more pronounced. Aside from a few issues in emails, I've been coping quite well (with a lot of help from Google Translate). 

Now I get to work with some Japanese data. And things get complicated.

"The modern Japanese writing system is a combination of two character types: logographic kanji, which are adopted Chinese characters, and syllabic kana. Kana itself consists of a pair of syllabarieshiragana, used for native or naturalised Japanese words and grammatical elements, and katakana, used for foreign words and names, loanwordsonomatopoeia, scientific names, and sometimes for emphasis. Almost all Japanese sentences contain a mixture of kanji and kana. Because of this mixture of scripts, in addition to a large inventory of kanji characters, the Japanese writing system is often considered to be the most complicated in use anywhere in the world.[1][2]"Japanese writing system

Firstly I hit katakana. With some tables, I can get syllables corresponding to the characters and work out something that I can eyeball and match up to some English data. As an extra complication, there are also half-width characters which are semantically equivalent but occupy different codepoints in Unicode. That has parallels to upper/lower case in English, but is a modern development that came about from trying to fit the previously squarish forms into print, typewriters and computer screens.
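
A quick way to see that split from SQL (this example is my own, assuming an AL32UTF8 database): the katakana syllable "ka" exists at two different codepoints, and a plain binary comparison treats them as different characters.

-- U+FF76 is the half-width katakana KA, U+30AB the full-width form
select unistr('\FF76') as halfwidth_ka,
       unistr('\30AB') as fullwidth_ka,
       case
         when unistr('\FF76') = unistr('\30AB') then 'equal'
         else 'different'
       end as plain_comparison
from   dual;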

Kanji is a different order of shock. Primary school children in Japan learn the first 1000 or so characters. Another thousand plus get taught in high school. The character set is significantly larger in total.

I will have to see if the next few months cause my head to explode. In the meantime, I can recommend reading this article about the politics involved in getting characters (glyphs? letters?) into Unicode: I Can Text You A Pile of Poo, But I Can’t Write My Name.

Oh, and I'm still trying to find the most useful character/font set I can have on my PC and  use practically in SQL Developer. My current choice shows the Japanese characters when I click in the field in the dataset, but only little rectangles when I'm not in the field. The only one I've found that does show up all the time is really UGLY. 

adoafmctl.sh hangs

Vikram Das - Fri, 2015-04-03 19:26
Rajesh and Shahed called me about this error where after a reboot of the servers, adoafmctl.sh wouldn't start.  It gave errors like these:

You are running adoafmctl.sh version 120.6.12000000.3 
Starting OPMN managed OAFM OC4J instance ... 
adoafmctl.sh: exiting with status 152 
adoafmctl.sh: check the logfile 
$INST_TOP/logs/appl/admin/log/adoafmctl.txt for more information

adoafmctl.txt showing:

ias-component/process-type/process-set:
default_group/oafm/default_group/
Error
--> Process (index=1,uid=349189076,pid=15039)
time out while waiting for a managed process to start
Log:
$INST_TOP/logs/ora/10.1.3/opmn/default_group~oafm~default_group~1
07/31/09-09:14:28 :: adoafmctl.sh: exiting with status 152
================================================================================
07/31/09-09:14:40 :: adoafmctl.sh version 120.6.12000000.3
07/31/09-09:14:40 :: adoafmctl.sh: Checking the status of OPMN managed OAFM OC4J instance
Processes in Instance: SID_machine.machine.domain
-------------------+--------------------+---------+---------
ias-component | process-type | pid | status
-------------------+--------------------+---------+---------
default_group | oafm | N/A | Down
Solution:

1. Shut down all middle tier services and ensure no defunct processes exist by running the following from the operating system:
# ps -ef | grep
If you find any, kill those processes.
2. Navigate to the $INST_TOP/ora/10.1.3/opmn/logs/states directory. It contains the hidden file .opmndat:
# ls -lrt .opmndat
3. Delete the .opmndat file after making a backup of it:
# rm .opmndat
4. Restart the services.

5. Re-test the issue.
This resolved the issue.
Categories: APPS Blogs

Market Segmentation and Data Mining

Dylan's BI Notes - Fri, 2015-04-03 18:30
1. Market Segmentation in the academic world. Market segmentation is part of the marketing process. It is described in Philip Kotler’s book as part of the step of defining the market strategy. The idea is to segment the consumer market by some variables and to divide the market into different segments. Selecting the segments for your products is the result of the […]
Categories: BI & Warehousing