
Feed aggregator

Oracle Database In-Memory Advisor Released

Asif Momen - Tue, 2015-02-24 16:22
Oracle Database In-Memory option was released with Oracle Database 12c (12.1.0.2) and the In-Memory Advisor (IMA) has been much awaited since then. The Oracle Database In-Memory is designed to achieve the following goals:
  1.  Speed up analytical queries
  2.  Speed up OLTP transactions
  3.  NO application changes


Without the In-Memory Advisor, a DBA has to manually identify the tables to be placed in the In-Memory Column Store (IMCS). This manual task is no longer required, as the IMA analyzes the analytical workload of the database and produces a recommendation report (which includes the SQL commands to place the tables in the IMCS).
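
To give a feel for what the recommendation report contains, here is a minimal sketch of the kind of DDL it produces and of how population can be checked afterwards; the table name SALES and the compression/priority clauses are hypothetical and would in practice come straight from the advisor's report.

-- hypothetical advisor-style recommendation for a table called SALES
alter table sales inmemory memcompress for query low priority high;

-- verify that the segment has been populated into the IMCS
select segment_name, inmemory_size, bytes_not_populated
from   v$im_segments
where  segment_name = 'SALES';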

For more information on the IMA, please refer to MOS note 1965343.1; you may also download the best practices white paper from here.



How will the BI industry progress in 2015? [VIDEO]

Chris Foot - Tue, 2015-02-24 14:38

Transcript

Hi, welcome to RDX! Nowadays, almost every company uses business intelligence tools. Whether measuring return on investment or identifying your most popular products, BI can be an integral part of your operation.

But how will the technology progress in 2015? For one thing, it’s likely that new iterations of relational databases will receive integrated analytics functions. SQL Server is one particular solution that has become more compatible with Power BI, Microsoft’s signature BI application.

Mobile analytics has garnered much attention, but, in general, most implementations aren’t as flashy as some users would like them to be. However, many companies are engineering their apps to perform data analysis on the backend. This means servers running SQL databases will do the heavy lifting.

Thanks for watching! If you want to know how BI tools can be integrated into your databases, consult a team of DBAs.

The post How will the BI industry progress in 2015? [VIDEO] appeared first on Remote DBA Experts.

Oracle Linux and Database Smart Flash Cache

Wim Coekaerts - Tue, 2015-02-24 14:07
One, sometimes overlooked, cool feature of the Oracle Database running on Oracle Linux is called Database Smart Flash Cache.

You can find an overview of the feature in the Oracle Database Administrator's Guide. Basically, if you have flash devices attached to your server, you can use this flash memory to increase the size of the buffer cache. So instead of aging blocks out of the buffer cache and having to go back to reading them from disk, they move to the much, much faster flash storage as a secondary fast buffer cache (for reads, not writes).

Some scenarios where this is very useful: you have huge tables and huge amounts of data, a very, very large database with tons of query activity (let's say many TB), and your server is limited to a relatively small amount of main RAM (let's say 128 or 256G). In this case, if you were to purchase and add a flash storage device of 256G or 512G (for example), you could attach this device to the database with the Database Smart Flash Cache feature and increase the buffer cache of your database from 100G or 200G to 300-700G on that same server. In a good number of cases this will give you a significant performance improvement without having to purchase a new server that handles more memory, or flash storage large enough for your many TB of data to live in flash instead of on rotational storage.

It is also incredibly easy to configure.

  1. Install Oracle Linux (I installed Oracle Linux 6 with UEK3)
  2. Install Oracle Database 12c (this would also work with 11g - I installed 12.1.0.2.0 EE)
  3. Add a flash device to your system (for the example I just added a 1GB device showing up as /dev/sdb)
  4. Attach the storage to the database in sqlplus
Done.

$ ls /dev/sdb
/dev/sdb

$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Feb 24 05:46:08 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>  alter system set db_flash_cache_file='/dev/sdb' scope=spfile;

System altered.

SQL> alter system set db_flash_cache_size=1G scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 4932501504 bytes
Fixed Size		    2934456 bytes
Variable Size		 1023412552 bytes
Database Buffers	 3892314112 bytes
Redo Buffers		   13840384 bytes
Database mounted.
Database opened.

SQL> show parameters flash

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_flash_cache_file		     string	 /dev/sdb
db_flash_cache_size		     big integer 1G
db_flashback_retention_target	     integer	 1440

SQL> select * from v$flashfilestat; 

FLASHFILE#
----------
NAME
--------------------------------------------------------------------------------
     BYTES    ENABLED SINGLEBLKRDS SINGLEBLKRDTIM_MICRO     CON_ID
---------- ---------- ------------ -------------------- ----------
	 1
/dev/sdb
1073741824	    1		 0		      0 	 0
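
Once the instance is back up, a quick sanity check is to look at the flash cache statistics; a sketch, assuming the standard Database Smart Flash Cache statistic names in v$sysstat (a growing hit count means reads are being satisfied from the flash device):

select name, value
from   v$sysstat
where  name in ('physical read flash cache hits', 'flash cache inserts');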

You can get more information on configuration and guidelines/tuning here. If you want selective control over which tables can or will use the Database Smart Flash Cache, you can use the ALTER TABLE command (see here, specifically the STORAGE clause). By default, table blocks are aged out into the flash cache, but if you don't want certain tables to be cached you can use the NONE option.

alter table foo storage (flash_cache none);
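
For completeness, the FLASH_CACHE storage attribute also accepts KEEP and DEFAULT, so a sketch of the three options on the same hypothetical table foo looks like this:

alter table foo storage (flash_cache keep);     -- favor keeping this table's blocks in the flash cache
alter table foo storage (flash_cache default);  -- normal behavior, the cache decides
alter table foo storage (flash_cache none);     -- never cache this table's blocks
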
This feature can really make a big difference in a number of database environments and I highly recommend taking a look at how Oracle Linux and Oracle Database 12c can help you enhance your setup. It's included with the database running on Oracle Linux.

Here is a link to a white paper that gives a bit of a performance overview.

MAF 2.1 Alta Mobile UI - Running On iPad Device

Andrejus Baranovski - Tue, 2015-02-24 14:03
I have installed the MAF application (described and available for download here: MAF 2.1 Alta Mobile UI and Oracle Mobile Suite) on an iPad device running iOS 8, and would like to share a couple of tips and tricks about it. I have installed previous MAF versions (when it was called ADF Mobile) on iPhone/iPad before - ADF Mobile - Live on iPhone Device. It is always worth reading the Oracle Developer Guide for MAF - 27.4.2 How to Deploy an Application to an iOS-Powered Device.

You will need to get an Apple Development Provisioning Profile (this costs around $100) in order to be able to install a MAF application on an iPad device for testing. The Provisioning Profile creation process is streamlined in iOS 8 and is simple to follow. Here is an example of our Apple Development Provisioning Profile entry; this can be downloaded and installed on Mac OS with one click:


The sample MAF application I'm going to deploy connects to a REST service. Make sure to set the proper IP address for the REST connection entry in MAF. The IP must point to the Service Bus service with the published REST connection:


JDeveloper 12c fetches the Provisioning Profile information automatically. You only need to copy and paste the Common Name from the iOS development certificate into the Signing Identity field (created and registered during the Provisioning Profile creation process):


Make sure to specify the same Application Bundle Id prefix as the one registered in the Provisioning Profile. The documentation states you can test a MAF application on an iPhone/iPad device only in Debug mode; however, this is not true - it works fine in Release mode as well:


That's it for configuration. Choose to deploy the MAF application to an IPA distribution package:


The IPA distribution package file is generated in the deploy folder. Double-click on it and it will be installed into iTunes:


Open the iPhone/iPad section in iTunes and go to the App category. You should see your new MAF application listed there. Click the Install button and then press Sync - this will install the application onto the device:


The application loads successfully and the dashboard screen is displayed. The Service Bus provides REST data to the MAF application running on the iPad, and the data is rendered in a Tree Map graph (a MAF component):


User could switch to Employees screen:


Alta UI look and feel - we could search for employees and browse through a list with shortcuts:


Switch to cards view, instead of default list view:


Select employee who is a manager:


Pie graph with compensation of managed employees is displayed:


List of managed employees is also present:


I have tested AirPlay and connected the iPad with a Mac. This is useful for displaying the iPad screen on a projector when you want to demonstrate your app to an audience. AirPlay synchronisation works pretty well, without configuration headaches (you may require an additional utility application for this). You must enable mirroring on your iPad device:


We get the iPad screen view on the Mac. This is pretty useful for presentations and demos:


Thirsty 'Tuesday' – Are You Ready for SharePoint 2016?

WebCenter Team - Tue, 2015-02-24 11:42

Most organizations now have SharePoint in one form or another, but 63% are somewhat stalled in their adoption and progress. The biggest ongoing issues are persuading staff to use it, poor governance, and a lack of internal expertise. It can't just be left to IT; business managers and information workers need to get involved to maximize the value.



Join your peers from organizations like Target, 3M, Medtronic and US Bank at this free 90 min meetup to learn how to avoid common mistakes and how to ensure success with SharePoint. Connect with local SharePoint experts and customers, and get the latest AIIM (Association for Information and Image Management) research from 400+ SharePoint deployments. Bring your tough questions and ask your colleagues and SharePoint experts for their advice and assistance.

Don't miss this opportunity to meet local SharePoint experts and customers. 

  • Identify the best way to get user adoption, governance, and business value
  • Discuss how to best re-energize a stalled implementation 
  • Plan the role of SharePoint vs. 3rd party extensions and applications
  • Describe best practices for upgrading and migrating to latest version

"If you work with your organization’s information or collaboration resources and technologies, you’ll surely find AIIM a treasure trove of resources."- Andrew McAfee, Professor and author, Enterprise 2.0 and Race Against the Machine

"I find AIIM one of the very best resources for my job." - Larry Sanders, Supervisor at Woodmen of the World Life Insurance Society

“The range of information that AIIM is providing to our industry is nothing short of impressive and the Professional Membership sits at the heart of it.” - Hanns Köhler-Krüner, Research Director at Gartner

Register now to secure your spot - don't miss this free opportunity for education and networking!

Tuesday, March 3, 2015, 3:30-5:30PM CST

Location: Tin Whiskers Brewery
125 East 9th Street
Saint Paul, MN 55101

cannot set user id: Resource temporarily unavailable or Fork: Retry: Resource Temporarily Unavailable

Vikram Das - Tue, 2015-02-24 10:01
Amjad reported this error while trying to login to the server:

cannot set user id: Resource temporarily unavailable

In the past he had reported this error:

Fork: Retry: Resource Temporarily Unavailable

Both errors occur because the user has hit the limit on the maximum number of processes (nproc). In OEL 6.x, the default nproc limit is not set in /etc/security/limits.conf but in the file:

/etc/security/limits.d/90-nproc.conf

The default content in the file is:

cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     1024
root       soft    nproc     unlimited

I changed this to:

$ cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     16384
root       soft    nproc     unlimited
$

As soon as this change was made, Amjad was able to login.

Categories: APPS Blogs

Mobile My Oracle Support: Learn More!

Joshua Solomin - Tue, 2015-02-24 09:05
Mobile My Oracle Support (MMOS) allows access to support information whenever needed, right from a smartphone:
  • Access Service Requests, knowledge documents, and bugs.
  • View and update Service Requests.
  • Search for Service Requests using Advanced Search or saved searches.
  • Manage, schedule and approve Change Requests (RFCs) for Managed Cloud Service customers.
  • Search the Knowledge Base, bugs, and the Oracle System Handbook.
  • Explore content about Accreditation, Advisor Webcasts, Social Media, Instrumentation, and other proactive services.
  • User Administrators (CUAs) can manage pending users.

Watch the video below for more information.


Who enjoys the feather display of a male peacock?

FeuerThoughts - Tue, 2015-02-24 08:25


Who appreciates the display of feathers by a male peacock?
Female peacocks seem to get a kick out of them. They seem to play a role in mating rituals.
Who else? Why, humans, of course!
We know that humans greatly appreciate those displays, because of the aaahing and ooohing that goes on when we see them. We like those colors. We like the iridescence. We like the shapes and patterns.
If one were to speculate on why a female peacock gets all worked up about a particular male's feather display, we would inevitably hear about instinctual responses, hard-wiring, genetic determinism, and so on.
And if one were to speculate on why a human goes into raptures, we would then experience a major shift in explanation. 
Time to talk about anything but a physiological, hard-wired sort of response.
No, for humans, the attraction has to do with our big brains, our ability to create and appreciate "art". And that is most definitely not something other animals do, right?
Oh, sure, right. Like these instinctive, hard-wired bowerbird mating nests:

That clearly has nothing to do with an aesthetic sense or "art". Just instinct.
Why? Because we humans say so. We just assert this "fact."
Most convenient, eh?
Categories: Development

New Rewards and Recognition Program for Oracle Community

Joshua Solomin - Tue, 2015-02-24 08:04
New Community Rewards and Recognition Program:
Building Better Content and Engagement


From a simple leaderboard written on a whiteboard to the sophisticated stats tracking of Oracle Fusion CRM, we are surrounded daily by "gamification" concepts.

In competitive games and sports, comparing stats against opponents and peers is all part of the fun. Organized chess play has long had an intricate rankings systems based on match performance. And how many of you are right now slipping in a quick peek at Words With Friends or Clash of Clans? (Tip: don't answer that.)

Gamification in Business

"Gamification" has been something of a corporate buzzword for several years now. At its simplest it is a set of management tools designed to encourage employee and customer behaviors that add business value—but do it in a way that feels natural, intuitive, and fun.

It integrates the dynamics of games—scorekeeping, reward feedback, missions and goals—into an existing process or system, motivating member participation, engagement and loyalty.

Oracle Community - 15.1 Rewards and Recognition Update

The Oracle Community platform uses a gamification system designed to:

  • Broaden scope of knowledge (breadth and depth)
  • Encourage participation by rewarding users for completing mission-based goals and objectives
  • Recognize users when they add quality content
  • Make it easier for other participants to find and evaluate highly rated content
The New Program

The 15.1 release enhanced the existing system by adding new user "levels," visual perks, badges, and achievements. It gives participants a more flexible, fun way to share knowledge and work within the community.


Great support communities derive the most value from the contributions of their users. The enhanced Rewards and Recognition program makes it easier to recognize quality contributions and increases the value of the community for all involved.

If you're an Oracle customer or employee, we highly recommend checking out the new program.

Comments and Feedback

We'd love to hear from you about the new program!

If you're an Oracle customer, give us a heads up in the Community discussion thread.

If you're an Oracle employee, make your voice heard in the MOS Community employee feedback site, with the category: Gamification.


-The Oracle Community Team

First View of Bridge: The new corporate LMS from Instructure

Michael Feldstein - Tue, 2015-02-24 04:41

By Phil Hill

Last week I covered the announcement from Instructure that they had raised another $40 million in venture funding and were expanding into the corporate learning market. Today I was able to see a demo of their new corporate LMS, Bridge. While Instructure has very deliberately designed a separate product from Canvas, their education-focused LMS, you can see the same philosophy of market strategy and product design embedded in the new system. In a nutshell, Bridge is designed to be a simple, intuitive platform that moves control of the learning design away from central HR or IT control and closer to the end user.

While our primary focus at e-Literate is on higher ed and even some K-12 learning, the professional development and corporate training markets are becoming more important even in the higher ed context. At the least, this is important for those who are tracking Instructure and how the company's plans might affect the future of education platforms.

The core message of Instructure regarding Bridge – just as with Canvas – is that it is focused on ease-of-use whereas the entrenched competition has fallen prey to feature bloat based on edge cases. Despite this claim and Instructure's track record with Canvas, what does this actually mean? I'm pretty sure every vendor out there claims ease-of-use, whether their designs are elegant or terrible[1].

Based on the demo, Bridge appears to define ease-of-use in three distinct areas – streamlined, clutter-free interface for learners, simple tools for content creation by business units, and simple tools for managing learners and content.

Learner User Experience

Bridge has been designed over the past year based on Instructure's decision to avoid force-fitting Canvas into corporate learning markets. The core use cases of this new market are far simpler than education use cases, and the resulting product has fewer bells and whistles than Canvas. In Instructure's view, the current market has such cumbersome products that learning platforms are mostly used just for compliance – take this course or you lose your job – and not at all for actual learning. The Bridge interface (shown below in both the mobile and laptop views) is simple.

Mobile_same_as_laptop

Learner_progress

While this is a clean interface, I don’t see it as being that big of a differentiator or rationale for a new product line.

Content Creation

The content creation tools, however, start to show Instructure's distinctive approach. They have made their living on being able to say no – refusing to let user requests for additional features change their core design principles. The approach for Bridge is to assume that content creators need no web design or instructional design experience, providing them with simple formatting and suggestion-based tools to make content creation easy. The formatting looks to be on the level of Google Docs, or basic WordPress, rather than Microsoft Word.

Content_authoring_tool

When creating new content, the Bridge LMS even puts up prompts for pre-formatted content types.

Content_prompts

When creating quizzes, they have an interesting tool that adds natural language processing to facilitate simple questions that can be randomized. The author could write a simple sentence of what they are trying to convey to users, such as “Golden Gate Bridge is in San Francisco”. The tool selects each word and allows the author to add alternative objects that can serve in a quiz, such as suggesting San Mateo or San Diego (it is not clear if you can group words to replace the full “San Francisco” rather than “Francisco”). The randomized quiz questions could then be automatically created.

Quiz Creation

For content that is more complex, Instructure is taking the approach of saying ‘no’ – go get that content from a SCORM/AICC import coming from a more complex authoring tool.

Learner Administration Tools

Rather than relying on complex HR systems to manage employees, Bridge goes with a CSV import tool that reminds me of Tableau in that it pre-imports, shows the fields, and allows a drag-and-drop selection and re-ordering of fields for the final import[2].

CSV_Learner_Import

The system can also create or modify groups based on rules.

Group_creation_tool

To pull this together, Bridge attempts to automate as much of the background process as is feasible. To take one example, when you hire a new employee or change the definition of groups, the system retroactively adds the revised list of learners or groups to assigned courses.

For live training, you can see where Bridge takes the opposite approach to Canvas. In Canvas (as with most education LMSs), it is assumed that more time in the system means more time learning – the core job of learners. In Bridge, however, the assumption is that LMS time-on-task should be minimized. For compliance training in particular, you want the employee to spend as little time as reasonable training so they can get their real job done. Bridge focuses not on the live training itself but rather on the logistics tasks in setting up the course (scheduling, registering, taking attendance).

Live_training_tools_1

Prospects and Implications

Taken together, the big story here is that Instructure seeks to change the situation where learning management in corporations is cloistered within HR, IT and instructional design units. As they related today, they want to democratize content creation and center learning in the business units where the subject matter experts reside.

Their future plans focus on engagement – getting feedback and dialogue from employees rather than just one-way content dissemination and compliance. If they are successful, this is where they will gain lasting differentiation in the market.

What does this mean from a market perspective? Although I do not have nearly as much experience with corporate training as I do with higher education, this LMS seems like a real system and a real market entry into corporate learning. The primary competitors in this space are not Blackboard, as TechCrunch and Buzzfeed implied, but Saba, SumTotal, SuccessFactors, Cornerstone, etc. Unlike education, this is a highly fragmented market. I suspect that this means that the growth prospects for Instructure will be slower than in education, but real nonetheless. Lambda Solutions shared the Bersin LMS study to give a view of the market.

lms-market

This move is clearly timed to help with Instructure’s planned IPO that could happen as soon as November 2015[3]. Investors can now see potential growth in an adjacent market to ed tech where they have already demonstrated growth.

I mentioned in my last post that the biggest risk I see is management focus and attention. I suspect with their strong fund-raising ($90 million to date) that the company has enough cash to hire staff for both product lines, but senior management will oversee both the Canvas and the Bridge product lines and markets.

  1. Although I would love to see the honest ad: “With a horrible, bloated user interface based on your 300-item RFP checklist!”
  2. I assume they can integrate with HR systems as well, but we did not discuss this aspect.
  3. Note this is based on my heuristic analysis and not from Instructure employees.

The post First View of Bridge: The new corporate LMS from Instructure appeared first on e-Literate.

Spring Boot - Hello World from the command line to IBM Bluemix in 1 minute

Pas Apicella - Tue, 2015-02-24 04:23
Here is how simple Spring Boot makes building a Hello World web application, with no IDE and no need to package it up. Nearly as easy as NodeJS.

1. First, install the Spring Boot CLI. On a Mac, use brew as follows:

pas@192-168-1-4:~$ brew tap pivotal/tap
Cloning into '/usr/local/Library/Taps/pivotal/homebrew-tap'...
remote: Counting objects: 366, done.
remote: Total 366 (delta 0), reused 0 (delta 0), pack-reused 366
Receiving objects: 100% (366/366), 60.09 KiB | 84.00 KiB/s, done.
Resolving deltas: 100% (195/195), done.
Checking connectivity... done.
Tapped 8 formulae
pas@192-168-1-4:~$ brew install springboot
==> Installing springboot from pivotal/homebrew-tap
==> Downloading https://repo.spring.io/release/org/springframework/boot/spring-boot-cli/1.2.1.RELEASE/spring-boot-cli-1.2.1.RELEASE-bin.tar.gz
######################################################################## 100.0%
==> Caveats
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d

zsh completion has been installed to:
  /usr/local/share/zsh/site-functions
==> Summary
Categories: Fusion Middleware

Introducing Oracle Big Data Discovery Part 1: “The Visual Face of Hadoop”

Rittman Mead Consulting - Mon, 2015-02-23 20:38

Oracle Big Data Discovery was released last week, the latest addition to Oracle's big data tools suite that includes Oracle Big Data SQL, ODI and its Hadoop capabilities, and Oracle GoldenGate for Big Data 12c. Introduced by Oracle as "the visual face of Hadoop", Big Data Discovery combines the data discovery and visualisation elements of Oracle Endeca Information Discovery with data loading and transformation features built on Apache Spark, to deliver a tool aimed at the "Discovery Lab" part of the Oracle Big Data and Information Management Reference Architecture.


Most readers of this blog will probably be aware of Oracle Endeca Information Discovery, based on the Endeca Latitude product acquired as part of the Endeca acquisition. Oracle positioned Endeca Information Discovery (OEID) in two main ways: on the one hand as a data discovery tool for textual and unstructured data that complemented the more structured analysis capabilities of Oracle Business Intelligence, and on the other hand as a fast click-and-refine data exploration tool similar to Qlikview and Tableau.

The problem for Oracle, though, was that data discovery against files and documents is a bit of a "solution looking for a problem" and doesn't have a naturally huge market (especially considering the license cost of OEID Studio and the Endeca Server engine that stores and analyzes the data), whereas Qlikview and Tableau are significantly cheaper than OEID (at least at the start) and are more focused on BI-type tasks, making OEID a good tool but not one with a mass market. To address this, whilst OEID will continue as a standalone tool, the data discovery and unstructured data analysis parts of OEID are making their way into this new product called Oracle Big Data Discovery, whilst the fast click-and-refine features will surface as part of Visual Analyzer in OBIEE12c.

More importantly, Big Data Discovery will run on Hadoop, making it a solution for a real problem – how to catalog, explore, refine and visualise the data in the data reservoir, where data has been landed that might be in schema-on-read databases, might need further analysis and understanding, and where users need large-scale tooling to extract the nuggets of information that in time make their way into the "Execution" part of the Big Data and Information Management Reference Architecture. As someone who's admired the technology behind Endeca Information Discovery but sometimes struggled to find real-life use-cases or customers for it, I'm really pleased to see its core technology applied to a problem space that I'm encountering every day with Rittman Mead's customers.


In this first post, I’ll look at how Big Data Discovery is architected and how it works with Cloudera CDH5, the Hadoop distribution we use with our customers (Hortonworks HDP support is coming soon). In the next post I’ll look at how data is loaded into Big Data Discovery and then cataloged and transformed using the BDD front-end; then finally, we’ll take a look at exploring and analysing data using the visual capabilities of BDD evolved from the Studio tool within OEID. Oracle Big Data Discovery 1.0 is now GA (Generally Available) but as you’ll see in a moment you do need a fairly powerful setup to run it, at least until such time as Oracle release a compact install version running on VM.

To run Big Data Discovery you'll need access to a Hadoop install, which in most cases will consist of 6 (minimum 3 or 4, but 6 is the minimum we use) to 18 or so Hadoop nodes running Cloudera CDH5.3. BDD generally runs on its own server nodes and can itself be clustered, but for our setup we ran 1 BDD node alongside 6 CDH5.3 Hadoop nodes, looking like this:


Oracle Big Data Discovery is made up of three component types, highlighted in red in the above diagram; two of them typically run on their own dedicated BDD nodes, while the third runs on each node in the Hadoop cluster (though there are various install types, including all on one node for demo purposes):

  • The Studio web user interface, which combines the faceted search and data discovery parts of Endeca Information Discovery Studio with a lightweight data transformation capability
  • The DGraph Gateway, which brings Endeca Server search/analytics capabilities to the world of Hadoop, and
  • The Data Processing component that runs on each of the Hadoop nodes, and uses Hive’s HCatalog feature to read Hive table metadata and Apache Spark to load and transform data in the cluster

The Studio component can run across several nodes for high-availability and load-balancing, whilst the DGraph element can run on a single node as I've set it up, or in a cluster with a single "leader" node and multiple "follower" nodes, again for enhanced availability and throughput. The DGraph part then works alongside Apache Spark to run intensive search and analytics on subsets of the whole Hadoop dataset, with sample sets of data being moved into the DGraph engine and any resulting transformations then being applied to the whole Hadoop dataset using Apache Spark. All of this then runs as part of the wider Oracle Big Data product architecture, which uses Big Data Discovery and Oracle R for the discovery lab, and Oracle Exadata, Oracle Big Data Appliance and Oracle Big Data SQL to take discovery lab innovations to the wider enterprise audience.


So how does Oracle Big Data Discovery work in practice, and what’s a typical workflow? How does it give us the capability to make sense of structured, semi-structured and unstructured data in the Hadoop data reservoir, and how does it look from the perspective of an Oracle Endeca Information Discovery developer, or an OBIEE/ODI developer? Check back for the next parts in this three part series where I’ll first look at the data transformation and exploration capabilities of Big Data Discovery, and then look at how the Studio web interface brings data discovery and data visualisation to Hadoop.

Categories: BI & Warehousing

Fronting Oracle Maven Repository with Sonatype Nexus

Steve Button - Mon, 2015-02-23 16:44
The Sonatype team have announced the release of Nexus 2.11.2, a minor update that now works with the Oracle Maven Repository.

I was going to write a bit up about it but Manfred Moser from Sonatype has already put together a blog and video on it:
With the new Nexus 2.11.2 release we are supporting the authentication mechanism used for the Oracle Maven repository in both Nexus OSS and Nexus Pro. This allows you to proxy the repository in Nexus and makes the components discoverable via browsing the index as well as searching for components. You will only need to set this up once in Nexus and all your projects. Developers and CI servers get access to the components and the need for any manual work disappears.  On the Nexus side, the configuration changes can be done easily as part of your upgrade to the new release.
Check it out @ Using the Oracle Maven Repository with Nexus








APEX 5.0: the way to use Theme Roller

Dimitri Gielis - Mon, 2015-02-23 15:52
Once you have your new application created using the Universal Theme it's time to customise your theme with Theme Roller.

Run your application and click the Theme Roller link in the APEX Developer Toolbar:


Theme Roller will open. I won't go into every section, but I want to highlight the most important sections of Theme Roller in this post:
  1.  Style: there are two styles that come with APEX 5.0: Blue and Gray. You can start from one of those and see how your application changes color. It will set predefined colors for the different parts of the templates.

  2.  Color Wheel: when you want to quickly change your colors, an easy way to see different options is by using the Color Wheel. You have two modes: monochrome (2 points) and dual color (3 points - see screenshot). By changing one point it will map the other point to a complementary color. Next you can move the third point to play more with those colors.

  3.  Global Colors: if the Color Wheel is not specific enough for what you need, you can start by customising the Global Colors. Those are the main colors of the Universal Theme and are used to drive the specific components. You can still customise the different components, e.g. the header, by clicking further down in the list (see next screenshot).

  4. Containers etc. will allow you to define the specific components. A check icon will say it's the standard color coming with the selected style. An "x" means the color was changed and an "!" means the contrast is probably not great.
Original with style
After changing colors

This is just awesome... but what if you don't like the changes you did?

Luckily you can Reset either your entire style or refresh a specific section by clicking the refresh icon. There's also an undo and a redo button. But that is not all... for power users: when you hold "ALT" while hovering over a color, you can reset just that color! (only that color will get a refresh icon in it, and clicking it will reset it)

Note that all changes you're making are stored locally on your computer in your browser's cache (HTML5 local storage), so you don't affect other users by playing with the different colors.

Finally, when you are done editing your color scheme, you can hit the Save As button to save all colors to a new Style. When you close Theme Roller the style will go back to how it was.
The final step, to apply the new style so everybody sees that version, is to go to User Interface Details (Shared Components) and set the style to the new one.

Note that this blog post is written based on APEX 5.0 EA3, in the final version of APEX 5.0 (or 5.1) you might apply the new style from Theme Roller directly.

Want to know more about Theme Roller and the Universal Theme? We're doing an APEX 5.0 UI Training on May 12th in the Netherlands.
Categories: Development

Using Windows 2012 R2 & dynamic witness feature with minimal configurations

Yann Neuhaus - Mon, 2015-02-23 15:30

Have you ever seen the following message while trying to validate your cluster configuration with your availability groups or FCIs on Windows Server 2012 R2?


blog_32_-_0_-_cluster_validation


 

Microsoft recommends adding a witness even if you have only two cluster members with dynamic weights. This recommendation may make sense given the new witness capabilities. Indeed, Windows 2012 R2 improves quorum resiliency with the new dynamic witness behavior. However, we need to be careful with it, and I would like to say at this point that I'm reluctant to recommend meeting this requirement with a minimal cluster configuration of only 2 nodes. In my experience, it's very common to implement SQL Server AlwaysOn availability group or FCI architectures with only two cluster nodes at customer sites. Let's talk about the reason in this blog post.

 

First of all, let’s demonstrate why I don’t advice my customers to implement a witness by following the Microsoft recommendation. In my case it consists in adding a file share witness on my existing lab environment with two cluster nodes that use the dynamic weight behavior:


blog_32_-_1_-_cluster_2_nodes_configuration_nodeweight


 

Now let’s introduce a file share witness (\DC2WINCLUST-01) as follows:


blog_32_-_2_-_adding_FSW

 

We may notice after introducing the FSW that the node weight configuration has changed:

 

blog_32_-_3_-_cluster_new_configuration

 

 

blog_32_-_4_-_cluster_fsw_config

 

The total number of votes equals 3 here because we have an even number of cluster members plus the witness. As a reminder, we are supposed to be using the dynamic witness feature, according to the Microsoft documentation here.

 

In Windows Server 2012 R2, if the cluster is configured to use dynamic quorum (the default), the witness vote is also dynamically adjusted based on the number of voting nodes in current cluster membership. If there is an odd number of votes, the quorum witness does not have a vote. If there is an even number of votes, the quorum witness has a vote.

 

The quorum witness vote is also dynamically adjusted based on the state of the witness resource. If the witness resource is offline or failed, the cluster sets the witness vote to "0."

 

The last sentence draws my attention, so now let's introduce a failure of the FSW. In my case I will just turn off the share used by my WSFC as follows:

 

blog_32_-_5_-_disable_fileshare


 

As expected, the file share witness state has been changed from online to failed by the resource control manager:

 

blog_32_-_6_-_fileshare_witness_failed

 

At this point, according to the Microsoft documentation, we might expect the WitnessDynamicWeight property to be changed by the cluster, but to my surprise this was not the case:

 

blog_32_-_62_-_fileshare_witness_configuration


 

In addition, after taking a look at the cluster log, I noticed this sample among the log records:

000014d4.000026a8::2015/02/20-12:45:43.594 ERR   [RCM] Arbitrating resource 'File Share Witness' returned error 67
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [RCM] Res File Share Witness: OnlineCallIssued -> ProcessingFailure( StateUnknown )
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [RCM] TransitionToState(File Share Witness) OnlineCallIssued-->ProcessingFailure.
000014d4.00001ea0::2015/02/20-12:45:43.594 INFO [GEM] Node 1: Sending 1 messages as a batched GEM message
000014d4.000026a8::2015/02/20-12:45:43.594 ERR   [RCM] rcm::RcmResource::HandleFailure: (File Share Witness)
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [QUORUM] Node 1: PostRelease for ac9e0522-c273-4da8-99f5-3800637db4f4
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [GEM] Node 1: Sending 1 messages as a batched GEM message
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [QUORUM] Node 1: quorum is not owned by anyone
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [RCM] resource File Share Witness: failure count: 0, restartAction: 0 persistentState: 1.
000014d4.00001e20::2015/02/20-12:45:43.594 INFO [GUM] Node 1: executing request locally, gumId:281, my action: qm/set-node-weight, # of updates: 1
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [RCM] numDependents is zero, auto-returning true
000014d4.00001e20::2015/02/20-12:45:43.594 WARN [QUORUM] Node 1: weight adjustment not performed. Cannot go below weight count 3 in a hybrid configuration with 2+ nodes

 

The last line (highlighted in red) is the most important. I guess that "hybrid configuration" here means my environment includes 2 cluster nodes and one witness (whatever its type). An interesting thing to notice is a potential limitation of the dynamic witness behavior: the witness weight adjustment is not performed when doing so would take the weight count below 3, which is exactly the situation with only two cluster nodes plus a witness. Unfortunately, I didn't find any documentation from Microsoft about this message. Is it a bug, just a missing entry in the documentation, or have I overlooked something concerning the cluster behavior? At this point I can't tell, and I hope to get a response from Microsoft soon. The only thing I can claim at this point is that if I lose a cluster node, the cluster availability will be compromised. This issue is not specific to my lab environment; I have faced the same behavior several times at my customers' sites.

Let’s demonstrate by issuing a shutdown of one of my cluster node. After a couple of seconds, connection with my Windows failover cluster is lost and here what I found by looking at the Windows event log:


blog_32_-_7_-_quorum_lost

 

As I said earlier, with a minimal configuration of two cluster nodes, I always recommend my customers to skip this warning. After all, having only two cluster members with dynamic quorum behavior is sufficient to get good quorum resiliency. Indeed, according to the Microsoft documentation, for the system to re-calculate the quorum correctly, a simultaneous failure of a majority of voting members must not occur (in other words, the failure of cluster members must be sequential), and with two cluster nodes we can only ever lose one node at a time.

What about more complex environments? Let's say an FCI with 4 nodes (two cluster nodes in each datacenter) and a file share witness in the first datacenter. In this case, by contrast, if the file share witness fails, the cluster will correctly adjust the overall node weight configuration, both on the cluster nodes and on the witness. This is completely consistent with the message found above: "Cannot go below weight count 3".


blog_32_-_8_-_quorum_adjustement_with_4_nodes



 

The bottom line is that the dynamic witness feature is very useful, but you have to be careful with its behavior in minimal configurations based on only two cluster nodes, which may produce unexpected results in some cases.

 

Happy cluster configuration!




Red Samurai ADF Performance Audit Tool v 3.4 - ADF Task Flow Statistics with Oracle DMS Servlet Integration

Andrejus Baranovski - Mon, 2015-02-23 12:38
We have integrated the Red Samurai ADF Performance Audit Tool with the Oracle DMS Spy Servlet. The integration is dynamic and doesn't require any extra configuration. It brings out-of-the-box information about ADF Task Flow usage and performance. This means that from now on we are analysing not only ADF BC performance data, but also ADF Controller data.

The DMS Spy Servlet context is accessed at certain intervals, and we are not only displaying DMS data but also storing it inside our audit tables. This allows us to keep historical data and preserve it between WLS restarts - something that is not possible with the DMS Spy Servlet alone.

A new tree map graph displays ADF Task Flow usage in the application - the larger the box, the more frequently the Task Flow is accessed:


The graph is clickable: the user can select a box and detailed data for the Task Flow will be displayed. We display the number of Active/Maximum Active Task Flows over time. Average load time is logged and displayed - this allows us to identify Task Flows with slow performance:


Here you can check information about previous v 3.3 version - Red Samurai ADF Performance Audit Tool v 3.3 - Audit Improvements.

SQL injections still on the rise [VIDEO]

Chris Foot - Mon, 2015-02-23 12:28

Transcript

Hi, welcome to RDX! SQL injections have been around for some time. However, they’re not necessarily outdated. Cybersecurity experts have noted that hackers are still using SQL injections to infiltrate databases.

Although the number of SQL injection-based attacks declined steadily over the past several years, 2014 saw a sharp uptick in such instances. DB Networks blamed the deadlines and cost constraints many software development projects operate under. These restrictions sometimes cause engineers to skimp on the back-end security components necessary to maintain application integrity.

The question is, are your databases open to SQL injections? Have a team of DBAs assess your software’s data transaction algorithms. Scrutinize every SQL query your applications initiate, and you’ll be able to identify any problem areas that may leave you open to attack.

Thanks for watching! Check in next time for more SQL security news!

The post SQL injections still on the rise [VIDEO] appeared first on Remote DBA Experts.

Webcast: Strategies for Delivering Next-Gen Digital Experiences

WebCenter Team - Mon, 2015-02-23 09:07
EContent Sponsored Web Event | Live Streaming Audio

Strategies for Delivering Next-Gen Digital Experiences

Thursday, February 26, 2015, 11:00am PT / 2:00pm ET

The world has changed to one that's always-on, always-engaged, requiring organizations to rapidly become "digital businesses." In order to thrive and survive in this new economy, having the right digital experience and engagement strategy and speed of execution is crucial. So how do you get started and accelerate this transformation?

Attend this webcast as we outline best practice strategies to seize the full potential of digital experience & engagement - to deliver the next wave of revenue growth, service excellence and business efficiency. You will gain deep insights into how you can engage your customers, partners and employees for maximum results by empowering both marketing and IT with increased business agility and responsiveness.

REGISTER NOW to reserve your seat for this special webinar event.

Moderator: Theresa Cramer, Editor, EContent & Intranets

Presenters:
  • Chris Preston, Sr. Director, Customer Strategies, Oracle
  • Kellsey Ruppel, Principal Product Marketing Director, Oracle

Audio is streamed over the Internet, so turn up your computer speakers!