Feed aggregator

Look What We Made

Oracle AppsLab - Thu, 2014-11-20 17:53

As a team-building activity for our newly merged team of research, design and development, someone, who probably wishes to remain nameless, organized a glass mosaic and welding extravaganza at The Crucible in Oakland.

We split into two teams, one MIG welding, the other glass breaking, and here’s the result.

Original image, glass before firing.

Finished product, including frame.

All-in-all an interesting and entertaining activity. Good times were had by all, and no one was cut or burned, so bonus points for safety.

From Concept to Code

Oracle AppsLab - Thu, 2014-11-20 17:42

Editor’s note: Here’s a repost of a wonderful write-up of an event we did a couple weeks ago, courtesy of Friend of the ‘Lab Karen Scipi (@KarenScipi).

What Karen doesn’t mention is that she organized, managed and ran the event herself. Additional props to Ultan (@ultan) on the idea side, including the naming, Sandra Lee (@SandraLee0415) on the execution side and to Misha (@mishavaughan) for seeing the value. Without the hard work of all these people, I’m still just talking about a great idea in my head that I’m too lazy to execute. You guys all rock. 

Enjoy the read.

Concept to Code: Shaping and Shipping Innovative User Experience Solutions for the Enterprise

By Karen Scipi

It was an exciting event here at Oracle Headquarters as our User Experience AppsLab (@theappslab) Director Jake Kuramoto (@jkuramot) recently hosted an internal design jam called Shape and ShipIt. Fifteen top-notch members of the newly expanded team got together for two days with a packed schedule to research and innovate cutting-edge enterprise solutions, write use cases, create wireframes, and build and code solutions. They didn’t let us down.

The goal: Collaborate and rapidly design practical, contextual, mobile Oracle Applications Cloud solutions that address real-world user needs and deliver enterprise solutions that are streamlined, natural, and intuitive user experiences.

The result: Success! Four new stellar user experience solutions were delivered to take forward to product development teams working on future Oracle Applications Cloud simplified user interface releases.

Shape and ShipIt event banner

While I cannot share the concepts or solutions with you as they are under strict lock and key, I can share our markers of the event’s success with you.

The event was split into two days:

Day 1: A “shape” day during which participants received invaluable guidance from Bill Kraus on the role of context and user experience, then researched and shaped their ideas through use cases and wireframes.

Day 2: A “ship” day during which participants coded, reviewed, tested, and presented their solutions to a panel of judges that included Jeremy Ashley (@jrwashley), Vice President of the Oracle Applications User Experience team.

It was a packed two days full of ideas, teamwork, and impressive presentations.

Participants Anthony Lai, Bill Kraus, and Luis Galeana [photo: Sandra Lee (@SandraLee0415)]

The participants formed four small teams that comprised managers, architects, researchers, developers, and interaction designers whose specific perspectives proved to be invaluable to the tasks at hand. Their blend of complementary skills enabled the much needed collaboration and innovation.

Diversity drives more innovation at Oracle. Participants Mark Vilrokx, Osvaldo Villagrana, Raymond Xie, Julia Blyumen, and Joyce Ohgi hard at work. [photo: Karen Scipi (@KarenScipi)]

Although participants were charged with a short timeframe for such an assignment, they were quick to adapt and refine their concepts and produce solutions that could be delivered and presented in two days. Individual team agility was imperative for designing and delivering solutions within a two-day timeframe.

Participants were encouraged to brainstorm and design in ways that suited them. Whether it was sitting at tables with crayons, paper, notebooks and laptops, or hosting walking meetings outside, the participants were able to discuss concepts and ideate in their own, flexible ways.

Brainstorming with notebooks and pens: Cindy Fong and Tony Orciuoli [photo: Sandra Lee]

Brainstorming with laptops: Noel Portugal and Ben Bendig
[photo: Karen Scipi]

As with all of our simplified user interface design efforts, participants kept a “context produces magic” perspective front and center throughout their activities. In the end, team results yielded responsive, streamlined, context-driven user experience solutions that were simple yet powerful.

Healthy “brain food” and activity breaks were encouraged, and both kept participants engaged and focused on the important tasks at hand. Salads, veggies, dips, pastas, wraps, and sometimes a chocolate chip cookie (for the much needed sugar high) were on the menu. The activity break of choice was an occasional competitive game of table tennis at the Oracle Fitness Center, just a stone’s throw from the event location. The balance of think-mode and break-mode worked out just right for participants.

Healthful sustenance: Lunch salads [photo: Karen Scipi]

Our biggest marker of success, though, was how wrong we were. Yes. Wrong. While we expected one team’s enterprise solution to clearly stand out from among all of the others, we were pleasantly surprised as all four were equally impressive, viable, and well-received by the design jam judges. Four submissions, four winners. Nice job!

Participants (standing) Cindy Fong, Sarahi Mireles, and Tony Orciuoli present their enterprise solution to the panel of judges (seated): Jake Kuramoto, Jatin Thaker, Tim Dubois, Jeremy Ashley, and Bill Kraus [photo: Karen Scipi]

Stay tuned to the Usable Apps Blog to learn more about such events and what happens to the innovative user experiences that emerge!

Upcoming Webinar: Innovation in Managing the Chaos of Everyday Project Management

On Thursday, December 4th from 1 PM-2 PM CST, Fishbowl Solutions will hold a webinar in conjunction with Oracle about our new solution for enterprise project management. This solution transforms how project-based tools, like Oracle Primavera, and project assets, such as documents and diagrams, are accessed and shared.

With this solution:

  • Project teams will have access to the most accurate and up-to-date project assets based on their role within a specific project
  • Through a single dashboard, project managers will gain new real-time insight into the overall status of even the most complex projects
  • The new mobile workforce will now have direct access to the same insight and project assets through an intuitive mobile application

With real-time insight and enhanced information sharing and access, this solution can help project teams increase their ability to deliver on time and on budget. To learn more about our Enterprise Information Portal for Project Management, visit Fishbowl’s website.

Fishbowl’s Cole Orndorff, who has 10+ years in the engineering and construction industry, will keynote and share how a mobile-ready portal can integrate project information from Oracle Primavera and other sources to serve information up to users in a personalized, intuitive user experience.

Register here

The post Upcoming Webinar: Innovation in Managing the Chaos of Everyday Project Management appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Fresh and Frozen Fruit consumption – U.S. Bureau of Labor Statistics

Nilesh Jethwa - Thu, 2014-11-20 14:47

The South and the West consume the highest amounts of fruit.

Here is a more detailed breakdown of quarterly expenditure on fruit (figures in hundreds of millions):



NetApp Plug-in for Oracle RMAN

Bas Klaassen - Thu, 2014-11-20 08:03
Nice feature for Oracle DBAs to handle backups using NetApp technology. Check out this demo: http://community.netapp.com/t5/FAS-Data-ONTAP-and-Related-Plug-ins-Articles-and-Resources/Video-NetApp-Plug-in-2-0-for-Oracle-RMAN-MML-Demo/ta-p/87711
Categories: APPS Blogs

Quantum Data

Jonathan Lewis - Thu, 2014-11-20 05:30

That’s data that isn’t there until you look for it, sort of, from the optimizer’s perspective.

Here’s some code to create a sample data set:


create table t1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	rownum					id,
	mod(rownum-1,200)			mod_200,
	mod(rownum-1,10000)			mod_10000,
	lpad(rownum,50)				padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt 	 => 'for all columns size 1'
	);
end;
/

Now derive the execution plans for a couple of queries noting, particularly, that we are using queries that are NOT CONSISTENT with the current state of the data (or more importantly the statistics about the data) – we’re querying outside the known range.


select * from t1 where mod_200  = 300;
select * from t1 where mod_200 >= 300;

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  2462 |   151K|  1246   (5)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| T1   |  2462 |   151K|  1246   (5)| 00:00:07 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("MOD_200"=300)

SQL> select * from t1 where mod_200 >=300;

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  2462 |   151K|  1246   (5)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| T1   |  2462 |   151K|  1246   (5)| 00:00:07 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("MOD_200">=300)

The predicted cardinality for mod_200 = 300 is the same as that for mod_200 >= 300. So, to be self-consistent, the optimizer really ought to report no rows (or a token 1 row) for any value of mod_200 greater than 300 – but it doesn’t.


SQL> select * from t1 where mod_200 = 350;

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  1206 | 75978 |  1246   (5)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| T1   |  1206 | 75978 |  1246   (5)| 00:00:07 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("MOD_200"=350)

SQL> select * from t1 where mod_200 =360;

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   955 | 60165 |  1246   (5)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| T1   |   955 | 60165 |  1246   (5)| 00:00:07 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("MOD_200"=360)

SQL> select * from t1 where mod_200 =370;

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   704 | 44352 |  1246   (5)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| T1   |   704 | 44352 |  1246   (5)| 00:00:07 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("MOD_200"=370)

SQL> select * from t1 where mod_200 =380;

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   452 | 28476 |  1246   (5)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| T1   |   452 | 28476 |  1246   (5)| 00:00:07 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("MOD_200"=380)

SQL> select * from t1 where mod_200 in (350, 360, 370, 380);

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  3317 |   204K|  1275   (7)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| T1   |  3317 |   204K|  1275   (7)| 00:00:07 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("MOD_200"=350 OR "MOD_200"=360 OR "MOD_200"=370 OR "MOD_200"=380)

The IN-list results are consistent with the results for the individual values – but the result for the IN-list is NOT consistent with the result for the original mod_200 >= 300. The optimizer uses a “linear decay” strategy for handling predicates that go out of range, but not in a consistent way. It seems that, as far as out-of-range, range-based predicates are concerned, the data doesn’t exist until the wave front collapses.
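
For what it's worth, the individual estimates above are consistent with the usual description of the linear decay arithmetic (a worked check of my own, assuming the standard formula: the equality estimate starts at num_rows / num_distinct and scales down linearly to zero as the supplied value moves from the known high value to a distance of (high - low) beyond it):

rows per value        = 1,000,000 / 200 = 5,000
low / high of mod_200 = 0 / 199, so (high - low) = 199

mod_200 = 300:  5,000 * (1 - (300 - 199)/199) = 5,000 *  98/199  ~= 2,462
mod_200 = 350:  5,000 * (1 - (350 - 199)/199) = 5,000 *  48/199  ~= 1,206
mod_200 = 360:  5,000 * (1 - (360 - 199)/199) = 5,000 *  38/199  ~=   955
mod_200 = 370:  5,000 * (1 - (370 - 199)/199) = 5,000 *  28/199  ~=   704
mod_200 = 380:  5,000 * (1 - (380 - 199)/199) = 5,000 *  18/199  ~=   452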

Footnote:

This type of anomaly COULD affect some execution plans if your statistics haven’t been engineered to avoid the problems of “out of range” traps.
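
One way of engineering the statistics - a minimal sketch of my own, using the documented dbms_stats.prepare_column_values() and set_column_stats() calls, so check the exact signatures against your version - is to set the column's low and high values to cover the range you expect future data (and queries) to reach:


declare
	m_distcnt	number;
	m_density	number;
	m_nullcnt	number;
	m_avgclen	number;
	srec		dbms_stats.statrec;
	m_values	dbms_stats.numarray;
begin
	-- read the current column stats so we change only the low/high values
	dbms_stats.get_column_stats(
		ownname		=> user,
		tabname		=> 'T1',
		colname		=> 'MOD_200',
		distcnt		=> m_distcnt,
		density		=> m_density,
		nullcnt		=> m_nullcnt,
		srec		=> srec,
		avgclen		=> m_avgclen
	);

	-- pretend the known range is 0 .. 400 (an arbitrary choice for this demo)
	m_values	:= dbms_stats.numarray(0, 400);
	srec.epc	:= 2;
	srec.bkvals	:= null;
	dbms_stats.prepare_column_values(srec, m_values);

	dbms_stats.set_column_stats(
		ownname		=> user,
		tabname		=> 'T1',
		colname		=> 'MOD_200',
		distcnt		=> m_distcnt,
		density		=> m_density,
		nullcnt		=> m_nullcnt,
		srec		=> srec,
		avgclen		=> m_avgclen
	);
end;
/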


Rittman Mead Founder Member of the Red Expert Alliance

Rittman Mead Consulting - Thu, 2014-11-20 05:24

At Rittman Mead we’re always keen to collaborate with our peers and partners in the Oracle industry, running events such as the BI Forum in Brighton and Atlanta each year and speaking at user groups in the UK, Europe, USA and around the world. We were pleased therefore to be contacted by our friends over at Amis in the Netherlands just before this year’s Openworld with an idea they had about forming an expert-services Oracle partner network, and to see if we’d be interested in getting involved.

Fast-forward a few weeks and thanks to the organising of Amis’ Robbrecht van Amerongen we had our first meeting at San Francisco’s Thirsty Bear, where Amis and Rittman Mead were joined by the inaugural member partners Avio Consulting, E-VIta AS, Link Consulting, Opitz Consulting and Rubicon Red.

Gathering of the expert alliance partners at #oow14 @Oracle open world. pic.twitter.com/PNK8fpt7iU

— Red Expert Alliance (@rexpertalliance) November 3, 2014

The Red Expert Alliance now has a website, news page and partner listing, and as the “about” page details, it’s all about collaborating between first-class Oracle consulting companies:

“The Red Expert Alliance is an international  network of first-class Oracle consulting companies. We are  working together to deliver the maximum return on our customers investment in Oracle technology. We do this by collaborating, sharing and challenging each other to improve ourselves and our customers.

Collaborating with other companies is a powerful way to overcome challenges of today’s fast-paced world and improve competitive advantage. Collaboration provides participants mutual benefits such as shared resources, shared expertise and enhanced creativity; it gives companies an opportunity to improve their performance and operations, achieving more flexibility thanks to shared expertise and higher capacity. Collaboration also fuels innovation by providing more diversity to the workplace which can result in better-suited solutions for customers.”

There’s also a press release, and a partner directory listing out our details along with the other partners in the alliance. We’re looking forward to collaborating with the other partners in the Red Expert Alliance, increasing our shared knowledge and collaborating together on Oracle customer projects in the future.

Categories: BI & Warehousing

In Which I (Partially) Disagree with Richard Stallman on Kuali’s AGPL Usage

Michael Feldstein - Wed, 2014-11-19 18:32

Since Michael is making this ‘follow-up blog post’ week, I guess I should jump in.

In my latest post on Kuali and the usage of the AGPL license, the central argument is that this license choice is key to understanding the Kuali 2.0 strategy – protecting KualiCo as a new for-profit entity in their future work to develop multi-tenant cloud-hosting code.

What I have found interesting is that in most of my conversations with Kuali community people, even those who are disillusioned, they seem to think the KualiCo creation makes some sense. The real frustration and pushback has been over how decisions are made, how decisions have been communicated, and how the AGPL license choice will affect the community.

In the comments, Richard Stallman chimed in.

As the author of the GNU General Public License and the GNU Affero General Public License, and the inventor of copyleft, I would like to clear up a possible misunderstanding that could come from the following sentence:

“Any school or Kuali vendor, however, that develops its own multi-tenant cloud-hosting code would have to relicense and share this code publicly as open source.”

First of all, thinking about “open source” will give you the wrong idea about the reasons why the GNU AGPL and the GNU GPL work as they do. To see the logic, you should think of them as free software licenses; more specifically, as free software licenses with copyleft.

The idea of free software is that users of software deserve freedom. A nonfree program takes freedom away from its users, so if you want to be free, you need to avoid it. The aim of our copyleft licenses is to make sure all users of our code get freedom, and encourage release of improvements as free software. (Nonfree improvements may as well be discouraged since we’d need to avoid them anyway.) See http://gnu.org/philosophy/free-software-even-more-important.html.

I don’t use the term “open source”, since it rejects these ethical ideas. (http://gnu.org/philosophy/open-source-misses-the-point.html.) Thus I would say that the AGPL requires servers running modified versions of the code to make the source for the running version available, under the AGPL, to their users.

The license of the modifications themselves is a different question, though related. The author of the modifications could release the modifications under the AGPL itself, or under any AGPL-compatible free software license. This includes free licenses which are pushovers, such as the Apache 2.0 license, the X11 license, and the modified BSD license (but not the original BSD license — see
http://gnu.org/licenses/license-list.html).

Once the modifications are released, Kuali will be able to get them and use them under whatever license they carry. If it is a pushover license, Kuali will be able to incorporate those modifications even into proprietary software. (That’s what makes them pushover licenses.)

However, if the modifications carry the AGPL, and Kuali incorporates them into a version of its software, Kuali will be bound by the AGPL. If it distributes that version, it will be required to do so under the AGPL. If it installs that version on a server, it will be required by the AGPL to make the whole of the source code for that version available to the users of that server.

To avoid these requirements, Kuali would have to limit itself to Kuali’s own code, others’ code released under pushover licenses, plus code for which it gets special permission. Thus, Kuali will not have as much of a special position as some might think.

See also http://gnu.org/philosophy/assigning-copyright.html
and http://gnu.org/philosophy/selling-exceptions.html.

Dr Richard Stallman
President, Free Software Foundation (gnu.org, fsf.org)
Internet Hall-of-Famer (internethalloffame.org)
MacArthur Fellow

I appreciate this clarification and Stallman’s participation here at e-Literate, and it is useful to understand the rationale and ethics behind AGPL. However, I disagree with the statement “Thus, Kuali will not have as much of a special position as some might think”. I do not think he is wrong, per se, but the combination of both the AGPL license and the Contributor’s License Agreement (CLA) in my view does ensure that KualiCo has a special position. In fact, that is the core of the Kuali 2.0 strategy, and their approach would not be possible without the AGPL usage.

Note: I have had several private conversations that have helped me clarify my thinking on this subject. Besides Michael with his comment to the blog, Patrick Masson and three other people have been very helpful. I also interviewed Chris Coppola from KualiCo to understand and confirm the points below. Any mistakes in this post, however, are my own.

It is important to understand two different methods of licensing at play – distributing code through an AGPL license and contributing code to KualiCo through a CLA (Kuali has a separate CLA for partner institutions and a Corporate CLA for companies).

  • Distribution – Anyone can download the Kuali 2.0 code from KualiCo and make modifications as desired. If the code is used privately, there is no requirement for distributing the modified code. If, however, a server runs the modified code, the reciprocal requirements of AGPL kick in and the code must be distributed (made available publicly) with the AGPL license or a pushover license. This situation is governed by the AGPL license.
  • Contribution – Anyone who modifies the Kuali 2.0 code and contributes it to KualiCo for inclusion into future releases of the main code grants a license with special permission to KualiCo to do with the code as they see fit. This situation is governed by the CLA and not AGPL.

I am assuming that the future KualiCo multi-tenant cloud-hosting code is not separable from the Kuali code. In other words, the Kuali code would need modifications to allow multi-tenancy.

For a partner institution, their work is governed by the CLA. For a company, however, the choice on whether to contribute code is mutual between that company and KualiCo, in that both would have to agree to sign a CLA. Another company may choose to do this to ensure that bug fixes or Kuali enhancements get into the main code and do not have to be reimplemented with each new release.

For any contributed code, KualiCo can still keep their multi-tenant code proprietary as their special sauce. For distributed code under AGPL that is not contributed under the CLA, the code would be publicly available and it would be up to KualiCo whether to incorporate any such code. If KualiCo incorporated any of this modified code into the main code base, they would have to share all of the modified code as well as their multi-tenant code. For this reason, KualiCo will likely never accept any code that is not under the CLA – they do not want to share their special sauce. Chris Coppola confirmed this assumption.

This setup strongly discourages any company from directly competing with KualiCo (vendor protection) and is indeed a special situation.

The post In Which I (Partially) Disagree with Richard Stallman on Kuali’s AGPL Usage appeared first on e-Literate.

Modifying the Oracle Alta Skin

Shay Shmeltzer - Wed, 2014-11-19 14:38

In the previous blog entries I showed you how to create an ADF project that uses the new Alta UI and then showed you an example of implementing one of the design patterns for a flip card. In this blog/video I'm going to show you how you can further fine tune the look and feel of your Alta application by modifying and extending your skin with CSS.

At the end of the day, this is going to be done in a similar way to how you skinned previous ADF applications. (If you have never done this before, you might want to watch the videos in these two blog entries).

But since the skinning design time support is not completely there for Alta in JDeveloper 12.1.3, there are a couple of tricks. Specifically, when you create the new skin, you'll need to change the trinidad-skins.xml file to indicate that it extends alta-v1 rather than skyros-v1: <extends>alta-v1.desktop</extends>

Then the rest of your tasks would be basically the same (although you won't see the overview tab in your skin editor).

So here we go:

Categories: Development

A Weird but True Fact about Textbook Publishers and OER

Michael Feldstein - Wed, 2014-11-19 13:44

As I was perusing David Kernohan’s notes on Larry Lessig’s keynote at the OpenEd conference, one statement leapt out at me:

Could the department of labour require that new education content commissioned ($100m) be CC-BY? There was a clause (124) that suggested that the government should check that no commercial content should exist in these spaces. Was argued down. But we were “Not important” enough to be defeated.

It is absolutely true that textbook publishers do not currently see OER as a major threat. But here’s a weird thing that is also true:

These days, many textbook publishers like OER.

Let me start with the full disclosure. For 18 months, I was an employee of Cengage Learning, one of the Big Three textbook publishers in US higher education. Since then, I have consulted for textbook publishers on and off. Pearson is a current client, and there have been others. Make of that what you will in terms of my objectivity on this subject, but I have been in the belly of the beast. I have had many conversations with textbook publisher employees at all levels about OER, and many of them truly, honestly like it. They really, really like it. As a rule, they don’t understand it. But some of them actually see it as a way out of the hole that they’re in.

This is a relatively recent thing. Not so very long ago, you’d get one of two reactions from employees at these companies, depending on the role of the person you were talking to. Editors would tend to dismiss OER immediately because they had trouble imagining that content that didn’t go through their traditional editorial vetting process could be good (fairly similarly to the way academics would dismiss Wikipedia as something that couldn’t be trusted without traditional peer review). There were occasional exceptions to this, but always for very granular content. Videos, for example. Sometimes editors saw (or still see) OER as extra bits—or “ancillary materials,” in their vernacular—that could be bundled with their professionally edited product. That’s the most that editors typically thought about OER. At the executive level, every so often they would trot out OER on their competitive threat list, look at it for a bit, and decide that no, they don’t see evidence that they are losing significant sales to OER. Then they would forget about it for another six months or so. Publishers might occasionally fight OER at a local level, or even at a state level in places like Washington or California where there was legislation. But in those cases the fight was typically driven by the sales divisions that stood to lose commissions, and they were treated like any other local or regional competition (such as home-grown content development). It wasn’t viewed as anything more than that. For the most part, OER was just not something publishers thought a lot about.

That has changed in US higher education as it has become clear that textbook profits are collapsing as students find more ways to avoid buying the new books. The traditional textbook business is clearly not viable in the long term, at least in that market, at least at the scale and margins that the bigger publishers are used to making. So these companies want to get out of the textbook business. A few of them will say that publicly, but many of them say it among themselves. They don’t want to be out of business. They just want to be out of the textbook business. They want to sell software and services that are related to educational content, like homework platforms or course redesign consulting services. But they know that somebody has to make the core curricular content in order for them to “add value” around that content. As David Wiley puts it, content is infrastructure. Increasingly, textbook publishers are starting to think that maybe OER can be their infrastructure. This is why, for example, it makes sense for Wiley (the publisher, not the dude) to strike a licensing deal with OpenStax. They’re OK with not making a lot of money on the books as long as they can sell their WileyPlus software. Which, in turn, is why I think that Wiley (the dude, not the publisher) is not crazy at all when he predicts that “80% of all US general education courses will be using OER instead of publisher materials by 2018.” I won’t be as bold as he is to pick a number, but I think he could very well be directionally correct. I think many of the larger publishers hope to be winding down their traditional textbook businesses by 2018.

How particular OER advocates view this development will depend on why they are OER advocates. If your goal is to decrease curricular materials costs and increase the amount of open, collaboratively authored content, then the news is relatively good. Many more faculty and students are likely to be exposed to OER over the next four or five years. The textbook companies will still be looking to make their money, but they will have to do so by selling something else, and they will have to justify the value of that something else. It will no longer be the case that students buy closed textbooks because it never occurs to faculty that there is another viable option. On the other hand, if you are an OER advocate because you want big corporations to stay away from education, then Larry Lessig is right. You don’t currently register as a significant threat to them.

Whatever your own position might be on OER, George Siemens is right to argue that the significance of this coming shift demands more research. There’s a ton that we don’t know yet, even about basic attitudes of faculty, which is why the recent Babson survey that everybody has been talking about is so important. And there’s a funny thing about that survey which few people seem to have noticed:

It was sponsored by Pearson.

The post A Weird but True Fact about Textbook Publishers and OER appeared first on e-Literate.

Why You Should Create BPM Suite Business Object Using element (and not complexType)

Jan Kettenis - Wed, 2014-11-19 12:16
In this article I describe why you should always base your business object on an element instead of a complexType.

With the Oracle BPM Suite your process data consists of project or process variables. Whenever the variable is based on a component, that component is either defined by some external composite (like a service), or is defined by the BPM composite itself, in which case it will be a Business Object. That Business Object is created directly or is based upon an external schema. Still with me?

When using an external schema, you should define the business object based upon an element instead of a complexType. Both are possible, but when you define it based upon a complexType, you will find that any variable using it cannot be used in (XSLT) transformations, nor can it be used as input to Business Rules.

As an example, see the following schema:


The customer variable (that is based on an element) can be used in an XSLT transformation, whereas the order variable cannot:

The reason being that XSLT works on elements, and not complexTypes.

For a similar reason, the customer variable can be used as input to a Business Rule but the order variable cannot:

Of course, if you are a BPEL developer, you probably already know this, as there you can only create variables based on elements ;-)

Table TXK_TCC_RESULTS needs to be installed by running the EBS Technology Codelevel Checker (available as patch 17537119).

Vikram Das - Wed, 2014-11-19 12:06
I got this error while trying to apply a patch in R12.2:

 [EVENT]     [START 2014/11/19 09:18:39] Performing database sanity checks
   [ERROR]     Table TXK_TCC_RESULTS needs to be installed by running the EBS Technology Codelevel Checker (available as patch 17537119).

This table TXK_TCC_RESULTS is created in the APPLSYS schema by the latest version of the checkDBpatch.sh script delivered in patch 17537119.
So go ahead, download patch 17537119.  
Log in as the oracle user on your database node and source the environment:

cd $ORACLE_HOME/appsutil
unzip p17537119*
$ ./checkDBpatch.sh

+===============================================================+
|   Copyright (c) 2005, 2014 Oracle and/or its affiliates.      |
|                     All rights reserved.                      |
|               EBS Technology Codelevel Checker                |
+===============================================================+
Executing Technology Codelevel Checker version: 120.18 
Enter ORACLE_HOME value : /exampler122/oracle/11.2.0 
Enter ORACLE_SID value : exampler122
Bugfix XML file version: 120.0.12020000.16 
Proceeding with the checks... 
Getting the database release ... Setting database release to 11.2.0.3 
DB connectivity successful. 
The given ORACLE_HOME is RAC enabled.
NOTE: For a multi-node RAC environment
 - run this tool on all non-shared ORACLE_HOMEs.
 - run this tool on one of the shared ORACLE_HOMEs.

Created the table to store Technology Codelevel Checker results. 
STARTED Pre-req Patch Testing : Wed Nov 19 10:53:00 EST 2014 
Log file for this session : ./checkDBpatch_7044.log 
Got the list of bug fixes to be applied and the ones to be rolled back.
Checking against the given ORACLE_HOME

Opatch is at the required version. 
Found patch records in the inventory. 
All the required one-offs are present in Oracle Database Home 
Stored Technology Codelevel Checker results in the database successfully. 
FINISHED Pre-req Patch Testing : Wed Nov 19 10:53:03 EST 2014 
========================================================= 
  1  select owner, table_name from dba_tables
  2* where table_name = 'TXK_TCC_RESULTS'
SQL> /

OWNER                          TABLE_NAME
------------------------------ ------------------------------
APPLSYS                        TXK_TCC_RESULTS
SQL> 
Once you have done this, restart your patch with adop, using the additional parameter restart=yes.
Categories: APPS Blogs

Visualization shows hackers behind majority of data breaches

Chris Foot - Wed, 2014-11-19 10:18

Transcript

Hi, welcome to RDX! Amid constant news of data breaches, ever wonder what's causing all of them? IBM and Ponemon's Global Breach Analysis can give you a rundown. 

While some could blame employee mishaps or poor security, hacking is the number one cause of data breaches, many of which are massive in scale. For example, when Adobe was hacked, approximately 152 million records were compromised.

As you can imagine, databases were prime targets. When eBay lost 145 million records to perpetrators earlier this year, hackers used the login credentials of just a few employees and then targeted databases holding user information.

To prevent such trespasses from occurring, organizations should employ active database monitoring solutions that scrutinize login credentials to ensure the appropriate personnel gain entry.

Thanks for watching! Visit us next time for more news and tips about database protection!

The post Visualization shows hackers behind majority of data breaches appeared first on Remote DBA Experts.

The Case for migrating your Oracle Forms applications to Formspider

Gerger Consulting - Wed, 2014-11-19 08:37
We’ve been getting a lot of inquiries asking whether we have a tool that will automatically convert Oracle Forms applications to Formspider. We don’t have an automatic converter, and we don’t view this as a disadvantage at all. We are solving the modernization problem with a different approach: Formspider does not automatically convert your Forms applications to web apps, but it converts your Forms developers into first-rate web developers. We think this approach produces the best results and the lowest cost in conversion projects. We’ve seen this many times.

Formspider is an application development framework just like Oracle Forms, and just like Forms its programming language is 100% PL/SQL. (You can think of it as Oracle Forms built for the 21st century.) Because Formspider works very similarly to Oracle Forms (event-driven architecture, Formspider built-ins instead of Forms built-ins, etc.), it is an order of magnitude easier for Oracle Forms developers to learn than any other tool.

A conversion project using Formspider is not a complete rewrite where you start with a blank page - that is absolutely not the case. Just to give a few examples: all of your existing business logic implemented in PL/SQL can be reused in your new application, and because both Forms and Formspider are event driven and use similar APIs, code that manages the UI can be translated fairly easily. In other words, there is no impedance mismatch between the two products, unlike between Forms and (say) ADF, .NET or any other popular web development framework.

There are several problems with automatically converting Oracle Forms applications to another tech stack:

1) The new target tech stack is not known by your team

I cannot overstate how important this is. You end up with an application that you cannot maintain. Your team, who knows the business, who knows what your customers want, who obviously can deliver a successful application to the users, needs training in the new tech stack. This means they go back to being junior developers for quite some time. (Even the most zealous ADF proponents admit to a year or longer learning curve.) This feeling of helplessness is very frustrating for experienced developers who know exactly what they want to implement. It hurts team morale and motivation during the project. It’s also very costly because, well, training costs money, and the output of the developers drops significantly for months while their salaries do not.

2) The output of an automatic converter is usually of low quality

Almost always, the target tech stack uses the web page paradigm instead of the event-driven architecture of Forms. This impedance mismatch is very difficult to overcome automatically, and the customer ends up with a low-quality application design that no engineer would architect by himself. This makes the application very difficult and expensive to maintain. Moreover, if the target tech stack uses an object-oriented programming language, this adds another magnitude of complexity because PL/SQL is not object oriented. This is why most automatic conversion projects start with the manual process of moving as much code to the database as possible.

3) Automated converters are expensive

To the best of my knowledge, these converter tools are not cheap at all. They come with bundled services (they never get the job done 100% automatically), so you also pay for consulting services on top of the conversion fees.

Formspider customers around the world have been upgrading their Forms applications with Formspider successfully for years. The same team who built and maintained the Forms application builds the new application in Formspider. They get excited and motivated because they finally have a tool that they can use to build what they want. They feel empowered instead of helpless. The cost savings we provide might be up to hundreds of thousands of dollars depending on the size of your application. I have seen this over and over again many times: put Formspider in the hands of your Forms developers and they will modernize your Forms applications with the highest return on investment.

Yalim K. Gerger
Founder
Categories: Development

Comparisons

Jonathan Lewis - Wed, 2014-11-19 06:47

“You can’t compare apples with oranges.”

Oh, yes you can! The answer is 72,731,533,037,581,000,000,000,000,000,000,000.


SQL> 
SQL> create table fruit(v1 varchar2(30));
SQL> 
SQL> insert into fruit values('apples');
SQL> insert into fruit values('oranges');
SQL> commit;
SQL> 
SQL> 
SQL> begin
  2  	     dbms_stats.gather_table_stats(
  3  		     ownname	      => user,
  4  		     tabname	      =>'FRUIT',
  5  		     method_opt       => 'for all columns size 2'
  6  	     );
  7  end;
  8  /
SQL> 
SQL> select
  2  	     endpoint_number,
  3  	     endpoint_value,
  4  	     to_char(endpoint_value,'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') hex_value
  5  from
  6  	     user_tab_histograms
  7  where
  8  	     table_name = 'FRUIT'
  9  order by
 10  	     endpoint_number
 11  ;

ENDPOINT_NUMBER                                   ENDPOINT_VALUE HEX_VALUE
--------------- ------------------------------------------------ -------------------------------
              1  505,933,332,254,715,000,000,000,000,000,000,000  6170706c65731ad171a7dca6e00000
              2  578,664,865,292,296,000,000,000,000,000,000,000  6f72616e67658acc6c9dcaf5000000
SQL> 
SQL> 
SQL> 
SQL> select
  2  	     max(endpoint_value) - min(endpoint_value) diff
  3  from
  4  	     user_tab_histograms
  5  where
  6  	     table_name = 'FRUIT'
  7  ;

                                            DIFF
------------------------------------------------
  72,731,533,037,581,000,000,000,000,000,000,000
SQL> 
SQL> spool off
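
As a footnote (an illustration added here, not part of the original demonstration): the hex form of each endpoint value shows what the conversion has done - the leading bytes are simply the ASCII codes of the first few characters of each string, which is also why only the first six or so characters of a string contribute exactly to the endpoint value. Assuming you have execute privilege on utl_raw, you can see the raw bytes directly:

select
	rawtohex(utl_raw.cast_to_raw('apples'))	apples_hex,
	rawtohex(utl_raw.cast_to_raw('oranges'))	oranges_hex
from	dual
;

APPLES_HEX        ORANGES_HEX
----------------- -----------------
6170706C6573      6F72616E676573

Compare 6170706C6573 with the leading bytes of the first histogram endpoint above, and 6F72616E6765 with the second: the endpoint values are, roughly, the strings read as 15-byte numbers, with anything beyond the sixth character or so lost to rounding.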



Oracle 12c Unified Auditing - Pure Mode

Continuing our blog series on Oracle 12c Unified Auditing is a discussion of Pure Mode. Mixed Mode is intended by Oracle to introduce Unified Auditing and provide a transition from traditional Oracle database auditing. Migrating to pure Unified Auditing requires that the database be stopped, the Oracle binary relinked with uniaud_on, and then restarted. This operation can be reversed if auditing needs to be changed back to Mixed Mode.
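
After relinking, you can confirm the auditing mode from SQL (a quick check added here for illustration, not part of the original article); v$option reports TRUE for 'Unified Auditing' only when the binary has been linked with uniaud_on:

SQL> SELECT value FROM v$option WHERE parameter = 'Unified Auditing';

VALUE
-----------
TRUE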

When changing from Mixed Mode to pure Unified Audit, two key changes occur. The first is that audit trails are no longer written to their traditional pre-12c audit locations. Auditing is consolidated into the Unified Audit views and stored using Oracle SecureFiles. SecureFiles use a proprietary format, which means that Unified Audit logs cannot be viewed using editors such as vi and may preclude or affect the use of third-party logging solutions such as Splunk or HP ArcSight. As such, syslog auditing is not possible with pure Unified Audit.
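
In pure mode the audit records are therefore read through the UNIFIED_AUDIT_TRAIL data dictionary view rather than from operating system files. An illustrative query (the column list is assumed from the 12c documentation; trim it to suit):

SELECT event_timestamp,
       dbusername,
       action_name,
       object_schema,
       object_name,
       return_code
FROM   unified_audit_trail
ORDER  BY event_timestamp DESC;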

Unified Audit Mixed vs. Pure Mode Audit Locations

System Tables    Mixed Mode     Pure Unified Audit Impact
-------------    -----------    ----------------------------------------------------
SYS.AUD$         Same as 11g    Exists, but will only have pre-unified audit records
SYS.FGA_LOG$     Same as 11g    Exists, but will only have pre-unified audit records

The second change is that the traditional audit configurations are no longer used.  For example, traditional auditing is largely driven by the AUDIT_TRAIL initialization parameter.  With pure Unified Audit, the initialization parameter AUDIT_TRAIL is ignored.

Unified Audit Mixed vs. Pure Mode Audit Configurations

System Parameters               Mixed Mode     Pure Unified Audit Impact
----------------------------    -----------    -------------------------------------
AUDIT_TRAIL                     Same as 11g    Exists, but will not have any effect
AUDIT_FILE_DEST                 Same as 11g    Exists, but will not have any effect
AUDIT_SYS_OPERATIONS            Same as 11g    Exists, but will not have any effect
AUDIT_SYSLOG_LEVEL              Same as 11g    Exists, but will not have any effect
UNIFIED_AUDIT_SGA_QUEUE_SIZE    Same as 11g    Yes

If you have questions, please contact us at info@integrigy.com

Reference Tags: Auditing, Oracle Database
Categories: APPS Blogs, Security Blogs

SQL Server 2014: buffer pool extension & corruption

Yann Neuhaus - Wed, 2014-11-19 01:49

I had the opportunity to attend Paul Randal’s session on advanced data recovery techniques at the PASS Summit. During this session, one attendee asked Paul whether a page that has just been corrupted can remain in the buffer pool extension (BPE). As you probably know, the BPE only deals with clean pages. Paul hesitated a lot and asked us to test it, and this is exactly what I will do in this blog post.

First, let’s start by limiting the maximum memory that can be used by the buffer pool:

 

-- Configure SQL Server max memory to 1024 MB
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1024;
GO
RECONFIGURE;
GO

 

Then we can enable the buffer pool extension feature:

 

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(
       -- change the path if necessary
       FILENAME = N'E:\SQLSERVER\MSSQLSERVER\DATA\ssdbuffer_pool.bpe',
       SIZE = 4096 MB
);

 

I configured a buffer pool extension size of 4x the maximum memory value for the buffer cache.
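
To double-check that the extension is online and sized as expected, we can query the dedicated DMV (this verification step is my own addition to the walkthrough):

SELECT
       path,
       state_description,
       current_size_in_kb / 1024 AS current_size_in_mb
FROM sys.dm_os_buffer_pool_extension_configuration;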

At this point I need a fairly large database in order to have a chance of finding data pages in the buffer pool extension. My AdventureWorks2012 database fits this purpose:

 

USE AdventureWorks2012;
GO

EXEC sp_spaceused;

 

(Screenshot: AdventureWorks2012 database size)

 

I also have three big tables in this database: dbo.bigTransactionHistory_rs1 (2.2 GB), dbo.bigTransactionHistory_rs2 (2.1 GB) and BigTransactionHistory (1.2 GB).

 

(Screenshot: AdventureWorks2012 top tables by size)

 

I have a good chance of finding pages from these tables in the BPE if I perform a big operation, like a DBCC CHECKDB, on the AdventureWorks2012 database.
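
The integrity check in question is just the standard command; something along these lines (the exact options are my assumption - the original post doesn't show the command):

-- Reads every allocated page of the database, so plenty of clean pages
-- end up in the buffer pool and can spill over into the extension
DBCC CHECKDB('AdventureWorks2012') WITH NO_INFOMSGS;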

After performing a complete integrity check of this database and executing some queries as well, here is the picture of my buffer pool:

 

SELECT
       CASE is_in_bpool_extension
             WHEN 1 THEN 'SSD'
             ELSE 'RAM'
       END AS location,
       COUNT(*) AS nb_pages,
       COUNT(*) * 8 / 1024 AS size_in_mb,
       COUNT(*) * 100. / (SELECT COUNT(*) FROM sys.dm_os_buffer_descriptors(nolock)) AS percent_
FROM sys.dm_os_buffer_descriptors(nolock)
GROUP BY is_in_bpool_extension
ORDER BY location;

 

(Screenshot: buffer pool overview - pages in RAM vs. SSD)

 

Is it possible to find pages in the buffer pool extension that belong to the table bigTransactionHistory_rs1?

 

SELECT
       bd.page_id, da.page_type, bd.is_modified
FROM sys.dm_os_buffer_descriptors AS bd
       JOIN sys.dm_db_database_page_allocations(DB_ID('AdventureWorks2012'),
                                                OBJECT_ID('dbo.bigTransactionHistory_rs1'),
                                                NULL, NULL, DEFAULT) AS da
             ON bd.database_id = da.database_id
                    AND bd.file_id = da.allocated_page_file_id
                    AND bd.page_id = da.allocated_page_page_id
WHERE bd.database_id = DB_ID('AdventureWorks2012')
       AND bd.is_in_bpool_extension = 1
       AND da.page_type IS NULL

 

(Screenshot: bigTransactionHistory_rs1 pages located in the BPE)

 

I chose the first page (195426) and corrupted it:

 

DBCC WRITEPAGE(AdventureWorks2012, 1, 195426, 0, 2, 0x0000);

 

(Screenshot: corrupting the page with DBCC WRITEPAGE)

 

Then, let's take a look at the page with ID 195426 to see if it still remains in the BPE:

 

SELECT
       page_id,
       is_in_bpool_extension,
       is_modified
FROM sys.dm_os_buffer_descriptors AS bd
WHERE bd.page_id = 195426

 

(Screenshot: location of the page after corruption)

 

OK, (fortunately) it does not :-) However, we can notice, by looking at the sys.dm_os_buffer_descriptors DMV, that the page has not been tagged as "modified". My guess at this point was that using DBCC WRITEPAGE is not a classic way of modifying a page, but in fact the process used by the BPE is simply not what we might imagine at first sight.

Indeed, evicting a page from the BPE is almost orthogonal to whether the page is dirty: the buffer manager moves a page back into memory because it becomes hot due to the access attempt, and modifying a page first requires accessing it (particular thanks to Evgeny Krivosheev - SQL Server Program Manager - for this clarification).

We can verify if the page with ID 195426 is really corrupted (remember this page belongs to the bigTransactionHistory_rs1 table):

 

DBCC CHECKTABLE(bigTransactionHistory_rs1) WITH NO_INFOMSGS, ALL_ERRORMSGS, TABLERESULTS;

 

(Screenshot: DBCC CHECKTABLE output showing the corruption)

 

Note that there are some other corruptions, but in this context it doesn't matter because I had performed other corruption tests in this database :-)

So the next question could be the following: can a corrupted (but clean) page find its way into the buffer pool extension? … The following test will give us the answer:

 

CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
-- Perform queries in order to fill up the buffer cache and its extension

 

We flush dirty pages to disk and then clean the buffer cache. Afterwards, I run some other queries in order to populate the buffer cache (memory and BPE) with database pages. At this point we have only clean pages. A quick look at the buffer cache with the sys.dm_os_buffer_descriptors DMV gives us the following picture (I recorded into a temporary table each occurrence of page ID 195426 in the buffer cache, whether in memory or in the BPE):
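
For reference, the recording step can be as simple as polling the DMV and inserting the result into a temporary table. The snippet below is my own reconstruction of that idea (the table name and columns are hypothetical), not the exact script used for the post:

-- Hypothetical polling script: run it repeatedly while the workload executes
IF OBJECT_ID('tempdb..#page_195426_history') IS NULL
       CREATE TABLE #page_195426_history
       (
              sample_time           datetime DEFAULT GETDATE(),
              is_in_bpool_extension bit,
              is_modified           bit
       );

INSERT INTO #page_195426_history (is_in_bpool_extension, is_modified)
SELECT bd.is_in_bpool_extension, bd.is_modified
FROM sys.dm_os_buffer_descriptors AS bd
WHERE bd.database_id = DB_ID('AdventureWorks2012')
      AND bd.page_id = 195426;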

 

(Screenshot: the corrupted page found in the BPE)

 

We can notice that a corrupted page can be part of the buffer pool extension, and this is expected behavior because page ID 195426 is not dirty or modified at this point - it is only corrupted.

Enjoy!