
Feed aggregator

Whitepaper: Oracle Database 11g and 12c Consolidation and Workload Scalability with EMC XtremIO 3.0

Kevin Closson - Wed, 2015-04-29 14:30

This is just a quick blog post to direct readers to the best Oracle-related paper detailing the value EMC XtremIO brings to Oracle Database use cases. I’ve been looking forward to the availability of this paper for quite some time as I supported (minimally, really) the EMC Global Solutions Engineering group in this effort. They really did a great job with this testing! I highly recommend this paper for readers who are interested in:

  • Leveraging immediate, space efficient, zero overhead storage snapshots for productivity
  • All-Flash Array performance
  • Database workload consolidation

Click the following link to access the whitepaper: click here.

Abstract:

This white paper describes the deployment of the XtremIO® all-flash array with Oracle RAC 11g and 12c databases in both physical and virtual environments. It describes optimal performance while scaling up in a physical environment, the effect of adding multiple virtualized database environments, and the impact of using XtremIO Compression with Oracle Advanced Compression. The white paper also demonstrates the physical space efficiency and low performance impact of XtremIO snapshots.


Filed under: oracle Tagged: Oracle Database performance XtremIO flash, Oracle Performance, Random I/O, XtremIO

Not Exists

Jonathan Lewis - Wed, 2015-04-29 13:21

This whole thing about “not exists” subqueries can run and run. In the previous episode I walked through some ideas of how the following query might perform depending on the data, the indexes, and the transformation that the optimizer might apply:

select
        count(*)
from    t1 w1
where   not exists (
                select  1
                from    t1 w2
                where   w2.x = w1.x
                and     w2.y <> w1.y
);  

As another participant in the original OTN thread had suggested, however, it might be possible to find a completely different way of writing the query, avoiding the subquery approach completely. In particular there are (probably) several ways that we could write an equivalent query where the table only appears once. In other words, if we restate the requirement we might be able to find a different SQL translation for that requirement.

Looking at the current SQL, it looks like the requirement is: “Count the number of rows in t1 that have values of X that only have one associated value of Y”.

Based on this requirement, the following SQL statements (supplied by two different people) look promising:


    WITH counts AS
       (SELECT x,y,count(*) xy_count
        FROM   t1
        GROUP BY x,y)
    SELECT SUM(x_count)
    FROM  (SELECT x, SUM(xy_count) x_count
           FROM   counts
           GROUP BY x
           HAVING COUNT(*) = 1);


SELECT SUM(COUNT(*))
  FROM t1
GROUP BY x HAVING COUNT(DISTINCT y)<=1

Logically they do seem to address the description of the problem – but there’s a critical difference between these statements and the original. The clue about the difference appears in the absence of any comparisons between columns in the new forms of the query, no t1.colX = t2.colX, no t1.colY != t2.colY, and this might give us an idea about how to test the code. Here’s some test data:


drop table t1 purge;

create table t1 (
        x       number(2,0),
        y       varchar2(10)
);

create index t1_i1 on t1(x,y);

--      Pick one of the three following pairs of rows

insert into t1(x,y) values(1,'a');
insert into t1(x,y) values(1,null);

-- insert into t1(x,y) values(null,'a');
-- insert into t1(x,y) values(null,'b');

-- insert into t1(x,y) values(null,'a');
-- insert into t1(x,y) values(null,'a');

commit;

--      A pair to be skipped

insert into t1(x,y) values(2,'c');
insert into t1(x,y) values(2,'c');

--      A pair to be reported

insert into t1(x,y) values(3,'d');
insert into t1(x,y) values(3,'e');

commit;

execute dbms_stats.gather_table_stats(user,'t1')

Notice the NULLs – comparisons with NULL lead to rows disappearing, so might the new forms of the query get different results from the old?
The original query returns a count of 4 rows whichever pair we select from the top 6 inserts.

With the NULL in the Y column the new forms report 2 and 4 rows respectively – so only the second query looks viable.
With the NULLs in the X columns and differing Y columns the new forms report 2 and 2 rows respectively – so even the second query is broken.

However, if we add “or X is null” to the second query it reports 4 rows for both tests.
Finally, having added the “or x is null” predicate, we check that it returns the correct 4 rows for the final test pair – and it does.
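
For reference, here’s how the amended second query might look with that extra predicate added to its HAVING clause – this is my reading of the fix described above, not a quote from the original thread:

select
        sum(count(*))
from    t1
group by
        x
having
        count(distinct y) <= 1
or      x is null
;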

It looks as if there is at least one solution to the problem that need only access the table once, though it then does two aggregates (hash group by in 11g). Depending on the data it’s quite likely that this single scan and double hash aggregation will be more efficient than any of the plans that do a scan and filter subquery or scan and hash anti-join. On the other hand the difference in performance might be small, and the rewritten form is just a little harder to comprehend.

Footnote:

I can’t help thinking that the “real” requirement is probably as given in the textual restatement of the problem, and that the first rewrite of the query is probably the one that’s producing the “right” answers while the original query is probably producing the “wrong” answer.


A migration pitfall with ALL COLUMN SIZE AUTO

Yann Neuhaus - Wed, 2015-04-29 13:05

When you migrate, you should be prepared to face some execution plan changes. That's not new. But here I'll show you a case where you end up with several bad execution plans because a lot of histograms are missing. The version is the same. The system is the same. You've migrated with DataPump, importing all statistics. You have the same automatic job to gather statistics with all default options. You have repeated the migration several times on a system where you constantly reproduce the load. You have done a lot of regression tests. Everything was ok.
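As a quick sanity check in that situation, you can list which columns actually ended up with histograms after the import – the schema and table names below are placeholders, not from the case described here:

select column_name, histogram, num_buckets
from   dba_tab_col_statistics
where  owner = 'APP_OWNER'
and    table_name = 'SOME_TABLE'
order by column_name;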

SOA & BPM Partner Community Webcast May 8th 16:00 CET

Andrejus Baranovski - Wed, 2015-04-29 12:49
Save the date for the Webcast below - make sure to attend, if you don't want to miss SOA & BPM news.
SOA & BPM Partner Community Webcast May 8th 16:00 CET

Join us for our monthly SOA & BPM Partner Community Webcast. We will give you an update on our SOA Suite 12c, Integration Cloud Service offerings and our community activities.


Speakers:
Vikas Anand
Jürgen Kress

Schedule: May 8th 2015 16:00-16:45 CET (Berlin time)

Attendance Information:
Join Webcast or dial in Call ID: 4070776 and Call Passcode: 333111
Austria: +43 (0) 192 865 12
Belgium: +32 (0) 240 105 28
Denmark: +45 327 292 22
Finland: +358 (0) 923 193 923
France: +33 (0) 15760 2222
Germany: +49 (0) 692 222 161 06
Ireland: +353 (0) 124 756 50
Italy: +39 (0) 236 008 198
Netherlands: +31 (0) 207 143 543
Spain: +34 914 143 755
Sweden: +46 (0) 856 619 465
Switzerland: +41 (0) 445 804 003
UK: +44 (0) 208 118 1001
United States: 140 877 440 73
More Local Numbers

Watch and listen: You can join the Conference by clicking on the link: Join Webcast (audio will play over your computer speakers or headset). Visit our SOA Partner Community Technology Webcast series here.

More Apple Watch-ness from Oracle Social Network

Oracle AppsLab - Wed, 2015-04-29 12:00

And now back to the Apple Watch content.

If you’ve read here for a while, you might remember we used to be part of the WebCenter development team, and we worked with Oracle Social Network, affectionately OSN.

We even ran a developer challenge for OSN at OpenWorld back in 2012.

Yesterday, longtime friend of the ‘Lab and all-around good dude, Chris Bales (@cbales) reached out to ask about the OSN Android Wear push notifications because we have a few of those watches.

Noel (@noelportugal) and Anthony (@anthonyslai), having Apple Watch on the brain, misread and rushed to test OSN and its push notifications on the Watch, and then, they finally *read* the email from Chris and checked the Android Wear notifications too.

Both watch platforms look great, as you can see.


Kudos to Chris and OSN, and consider yourself all the Apple Watch-wiser for today.

jQuery - Replace Image

Denes Kubicek - Wed, 2015-04-29 11:06
I am posting this just because I am getting questions related to this or similar issues all the time. In this example you can see how you can replace images on the fly in your page using Dynamic Actions. There are at least two good ways to do that:

1. Rendering images in a report. Refreshing a report is a predefined event in a Dynamic Action and doesn't require coding.

or

2. Using jQuery to do that (a small sketch of this option follows below).
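
A minimal sketch of the jQuery route, run from a Dynamic Action of type "Execute JavaScript Code" – the image ID and URL here are made-up placeholders, not taken from the demo application:

// Swap the image shown in #P1_MY_IMAGE for another one.
// Adjust the selector and the URL to match your own page.
$('#P1_MY_IMAGE').attr('src', 'https://example.com/images/new_image.png');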



Enjoy.
Categories: Development

SalesCloud Payload: How to create an Activity (Task)

Angelo Santagata - Wed, 2015-04-29 08:48


From SalesCloud R9 onwards we now have Activities. Activities can be tasks, appointments etc.

Object Name: Activity
WSDL: https://<hostname>:443/appCmmnCompActivitiesActivityManagement/ActivityService?wsdl
Version Tested on: R9
Description: This payload demonstrates how to create an activity of type TASK and assign a primary lead owner
Operation: createActivity
Parameters (* = required):
  • PriorityCode*
  • StatusCode
  • ActivityContact
  • ActivityTypeCode*
  • ActivityFunctionCode*
  • Subject

Payload:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/crmCommon/activities/activityManagementService/types/" xmlns:act="http://xmlns.oracle.com/apps/crmCommon/activities/activityManagementService/" xmlns:not="http://xmlns.oracle.com/apps/crmCommon/notes/noteService" xmlns:not1="http://xmlns.oracle.com/apps/crmCommon/notes/flex/noteDff/">
   <soapenv:Header/>
   <soapenv:Body>
      <typ:createActivity>
         <typ:activity>
            <!-- Priority = 1, 2, 3 : high, medium, low -->
            <act:PriorityCode>1</act:PriorityCode>
            <act:StatusCode>NOT_STARTED</act:StatusCode>
            <!-- Primary contact ID -->
            <act:ActivityContact>
               <act:ContactId>300000093409168</act:ContactId>
            </act:ActivityContact>
            <!-- Party ID of assignee -->
            <act:ActivityAssignee>
               <act:AssigneeId>300000050989179</act:AssigneeId>
            </act:ActivityAssignee>
            <act:ActivityTypeCode>MEETING</act:ActivityTypeCode>
            <act:ActivityFunctionCode>TASK</act:ActivityFunctionCode>
            <act:Subject>Test assign to Matt Hooper for Picard</act:Subject>
         </typ:activity>
      </typ:createActivity>
   </soapenv:Body>
</soapenv:Envelope>

Oracle CIO, IDC, Other Executives Discuss Cloud File Sharing - Don't Miss

WebCenter Team - Wed, 2015-04-29 06:28

Webcast: Introducing Oracle Documents Cloud Service

Driving Improved Productivity and Collaboration for Sales, Marketing, Customer Experience and Operations

A recent survey shows that 89 percent of business managers believe their employees need 24/7 access to core business systems to implement their business strategy. But are current file sync and share solutions up to the mark?

Join Oracle Chief Information Officer and SVP, Mark Sunday, Senior Oracle Product Management and customer executives for a live webcast on Documents Cloud Service – an enterprise-grade, secure and integrated cloud-based content sharing and collaboration offering from Oracle. Find out how your organization can standardize on a corporate/IT-approved cloud solution while meeting the varying business needs of Marketing, Sales, Customer Service, HR, Operations and other departments.

Learn how Oracle Documents Cloud Service is uniquely positioned to:
  • Power anytime, anywhere secure access across Web, mobile and desktop
  • Mobilize enterprise content without creating information silos
  • Drive enterprise-wide collaboration with IT-sanctioned security and controls
Register Now for this webcast.

May 13, 2015
10:00 AM PT / 1:00 PM ET
#OracleDOCS

Featured Speakers:


Mark Sunday
Chief Information Officer and Senior Vice President, Oracle


Judd Robins
Executive Vice President, TekStream Solutions


Melissa Webster
Program Vice President, Content and Digital Media Technologies, IDC

Thanks Oracle for R12.AD.C.DELTA.6

Pythian Group - Wed, 2015-04-29 06:18

When reading through the release notes of the latest Oracle E-Business Suite R12.2 AD.C.Delta.6 patch in note 1983782.1, I wondered what they meant by “Simplification and enhancement of adop console messages”. I realized what I was missing after I applied the AD.C.Delta6 patch: the format of the console messages has changed drastically. To be honest, the old console messages printed by the adop command reminded me of a program where somebody forgot to turn off the debug feature – they are simply not easily readable and look more like debug output. AD.C.Delta6 brings in a fresh layout: the console messages are now more readable and easy to follow. You can see for yourself by looking at the snippets below:

### AD.C.Delta.5 ###

$ adop phase=apply patches=19197270 hotpatch=yes

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

 Please wait. Validating credentials...


RUN file system context file: /u01/install/VISION/fs2/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml

PATCH file system context file: /u01/install/VISION/fs1/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Execute SYSTEM command : df /u01/install/VISION/fs1

************* Start of  session *************
 version: 12.2.0
 started at: Fri Apr 24 2015 13:47:58

APPL_TOP is set to /u01/install/VISION/fs2/EBSapps/appl
[START 2015/04/24 13:48:04] Check if services are down
  [STATEMENT]  Application services are down.
[END   2015/04/24 13:48:09] Check if services are down
[EVENT]     [START 2015/04/24 13:48:09] Checking the DB parameter value
[EVENT]     [END   2015/04/24 13:48:11] Checking the DB parameter value
  Using ADOP Session ID from currently incomplete patching cycle
  [START 2015/04/24 13:48:23] adzdoptl.pl run
    ADOP Session ID: 12
    Phase: apply
    Log file: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_134739.log
    [START 2015/04/24 13:48:30] apply phase
        Calling: adpatch  workers=4   options=hotpatch     console=no interactive=no  defaultsfile=/u01/install/VISION/fs2/EBSapps/appl/admin/VISION/adalldefaults.txt patchtop=/u01/install/VISION/fs_ne/EBSapps/patch/19197270 driver=u19197270.drv logfile=u19197270.log
        ADPATCH Log directory: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/19197270/log
        [EVENT]     [START 2015/04/24 13:59:45] Running finalize since in hotpatch mode
        [EVENT]     [END   2015/04/24 14:00:10] Running finalize since in hotpatch mode
          Calling: adpatch options=hotpatch,nocompiledb interactive=no console=no workers=4 restart=no abandon=yes defaultsfile=/u01/install/VISION/fs2/EBSapps/appl/admin/VISION/adalldefaults.txt patchtop=/u01/install/VISION/fs2/EBSapps/appl/ad/12.0.0/patch/115/driver logfile=cutover.log driver=ucutover.drv
          ADPATCH Log directory: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/log
        [EVENT]     [START 2015/04/24 14:01:32] Running cutover since in hotpatch mode
        [EVENT]     [END   2015/04/24 14:01:33] Running cutover since in hotpatch mode
      [END   2015/04/24 14:01:36] apply phase
      [START 2015/04/24 14:01:36] Generating Post Apply Reports
        [EVENT]     [START 2015/04/24 14:01:38] Generating AD_ZD_LOGS Report
          [EVENT]     Report: /u01/install/VISION/fs2/EBSapps/appl/ad/12.0.0/sql/ADZDSHOWLOG.sql

          [EVENT]     Output: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/adzdshowlog.out

        [EVENT]     [END   2015/04/24 14:01:42] Generating AD_ZD_LOGS Report
      [END   2015/04/24 14:01:42] Generating Post Apply Reports
    [END   2015/04/24 14:01:46] adzdoptl.pl run
    adop phase=apply - Completed Successfully

    Log file: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_134739.log

adop exiting with status = 0 (Success)
### AD.C.Delta.6 ###

$ adop phase=apply patches=19330775 hotpatch=yes

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

Validating credentials...

Initializing...
    Run Edition context  : /u01/install/VISION/fs2/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
    Patch edition context: /u01/install/VISION/fs1/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Reading driver file (up to 50000000 bytes).
    Patch file system freespace: 181.66 GB

Validating system setup...
    Node registry is valid.
    Application services are down.
    [WARNING]   ETCC: The following database fixes are not applied in node ebs
                  14046443
                  14255128
                  16299727
                  16359751
                  17250794
                  17401353
                  18260550
                  18282562
                  18331812
                  18331850
                  18440047
                  18689530
                  18730542
                  18828868
                  19393542
                  19472320
                  19487147
                  19791273
                  19896336
                  19949371
                  20294666
                Refer to My Oracle Support Knowledge Document 1594274.1 for instructions.

Checking for pending adop sessions...
    Continuing with the existing session [Session id: 12]...

===========================================================================
ADOP (C.Delta.6)
Session ID: 12
Node: ebs
Phase: apply
Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_140643.log
===========================================================================

Applying patch 19330775 with adpatch utility...
    Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/19330775/log/u19330775.log

Running finalize actions for the patches applied...
    Log: @ADZDSHOWLOG.sql "2015/04/24 14:15:09"

Running cutover actions for the patches applied...
    Spawning adpatch parallel workers to process CUTOVER DDLs in parallel
    Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/log/cutover.log
    Performing database cutover in QUICK mode

Generating post apply reports...

Generating log report...
    Output: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/adzdshowlog.out

adop phase=apply - Completed Successfully


adop exiting with status = 0 (Success)

So what are you waiting for, fellow Apps DBAs? Go ahead and apply the new AD delta update to your R12.2 EBS instances. I am really eager to try out the other AD.C.Delta6 new features, especially “Online Patching support for single file system on development or test systems”.

Categories: DBA Blogs

Tabular Form - Change Column Type

Denes Kubicek - Wed, 2015-04-29 05:34
Just started a new version of My Demo Application - based on APEX 5.0 and Universal Theme. There, I will rework the old content and provide some interesting new possibilities (as of 4.2). In this example I am showing how to change the column type from Text to Display in a tabular form.



When could this be useful? For example, if you have a certain group of users who are not allowed to edit a column and you don't want to create additional, conditional columns. You could also use this to make only specific rows display-only, depending on your application logic.
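
As an illustration of the idea (not the exact code from the demo application), a page-load Dynamic Action could turn the text items of one tabular form column into display-only values. "f03" is the name APEX typically generates for the third editable tabular form column, so adjust it for your own form:

// Hide each input in the column and show its current value as plain text instead.
$('input[name="f03"]').each(function() {
  $(this).hide().after($('<span>').text($(this).val()));
});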
Categories: Development

How to pass the #Oracle Database 11g OCM Exam

The Oracle Instructor - Wed, 2015-04-29 04:05

The Oracle Certified Master Exam is among the highest rated exams in the IT industry for a good reason: It is extremely hard to pass!

Unlike most other IT exams that are done as multiple choice tests, the OCM exam means two days of  hands-on practical tasks. No chance you can pass it by just reading books or brain dumps and learning by heart without deep understanding. Without years of practical experience with Oracle database administration – don’t even think about it. Even as a seasoned DBA, you won’t find it easy to pass the OCM exam. Why is that so?

  • The tested topics have a very broad range and some of them are likely outside your comfort zone
  • Your usual tools (e.g. scripts and google) are not available
  • There is very limited time to complete the tasks
  • The exam is exhausting, so after a while oversights become a severe danger

To help you prepare for the exam, we offer a quite useful class: Oracle Database 11g: OCM Exam Preparation Workshop Ed 2

I delivered it many times and it is probably the best preparation you can get – but also expensive, I admit.

I usually give the following guidance to the attendees of the workshop – and the last two sections may help you even if you don’t attend it:

During the OCM Preparation Workshop:
  • Go beyond your comfort zone and put additional focus on the topics you are not yet so familiar with
  • Note the pages in the documentation that you can copy from to resolve the tasks, and memorize them
  • Check if & how the Enterprise Manager can help doing things more efficiently than manual procedures
  • Make sure that you are ABLE to do things manually in the absence of GUIs, though

After the Workshop:
  • Create a sandbox environment (e.g. VirtualBox on your notebook)
  • Practice using only the Documentation!
  • Practice the things that you felt not so comfortable with during the workshop in the first place
  • Practice things from inside your comfort zone also, but with a (short!) time limit for the task

During the Exam:
  • Read the tasks carefully and make sure that you understand them exactly BEFORE you begin working
  • If the order of tasks is not relevant, do the things first that you feel most comfortable about
  • Don’t waste too much time on a problematic task if other things can be done instead
  • You don’t need 100% to pass – so keep up your confidence even if you couldn’t complete all tasks

It is of course possible to prepare also without the workshop. See here for an impressive description about it. Good luck with your journey to become an OCM and I hope you find this little article helpful :-)


Tagged: OCM
Categories: DBA Blogs

So What’s the Real Point of ODI12c for Big Data Generating Pig and Spark Mappings?

Rittman Mead Consulting - Wed, 2015-04-29 00:30

Oracle ODI12c for Big Data came out the other week, and my colleague Jérôme Françoisse put together an introductory post on the new features shortly after, covering ODI’s new ability to generate Pig and Spark transformations as well as the traditional Hive ones. How this works is that you can now select Apache Pig, or Apache Spark (through pySpark, the Spark API through Python) as the implementation language for an ODI mapping, and ODI will generate one of those languages instead of HiveQL commands to run the mapping.


How this works is that ODI12c 12.1.3.0.1 adds a bunch of new component-style KMs to the standard 12c ones, providing filter, aggregate, file load and other features that generate pySpark and Pig code rather than the usual HiveQL statement parts. Component KMs have also been added for Hive as well, making it possible now to include non-Hive datastores in a mapping and join them all together, something it was hard to do in earlier versions of ODI12c where the Hive IKM expected to do the table data extraction as well.

But when you first look at this you may well be tempted to think “…so what?”, in that Pig compiles down to MapReduce in the end, just like Hive does, and you probably won’t get the benefits of running Spark for just a single batch mapping doing largely set-based transformations. To my mind where this new feature gets interesting is its ability to let you take existing Pig and Spark scripts, which process data in a different, dataflow-type way compared to Hive’s set-based transformations and which also potentially use Pig and Spark-specific function libraries, and convert them to managed graphical mappings that you can orchestrate and run as part of a wider ODI integration process.

Pig, for example, has the LinkedIn-originated DataFu UDF library that makes it easy to sessionize and further transform log data, and the Piggybank community library that extends Pig’s loading and saving capabilities to additional storage formats, and provides additional basic UDFs for timestamp conversion, log parsing and so forth. We’ve used these libraries in the past to process log files from our blog’s webserver and create classification models to help predict whether a visitor will return, with the Pig script below using the DataFu and Piggybank libraries to perform these tasks easily in Pig.

register /opt/cloudera/parcels/CDH/lib/pig/datafu.jar;
register /opt/cloudera/parcels/CDH/lib/pig/piggybank.jar;

DEFINE Sessionize datafu.pig.sessions.Sessionize('60m');
DEFINE Median datafu.pig.stats.StreamingMedian();
DEFINE Quantile datafu.pig.stats.StreamingQuantile('0.9','0.95');
DEFINE VAR datafu.pig.VAR();
DEFINE CustomFormatToISO org.apache.pig.piggybank.evaluation.datetime.convert.CustomFormatToISO();
DEFINE ISOToUnix org.apache.pig.piggybank.evaluation.datetime.convert.ISOToUnix();

--------------------------------------------------------------------------------
-- Import and clean logs
raw_logs = LOAD '/user/flume/rm_logs/apache_access_combined' USING TextLoader AS (line:chararray);

-- Extract individual fields
logs_base = FOREACH raw_logs
GENERATE FLATTEN
(REGEX_EXTRACT_ALL(line,'^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] "(.+?)" (\\S+) (\\S+) "([^"]*)" "([^"]*)"')) AS
(remoteAddr: chararray, remoteLogName: chararray, user: chararray, time: chararray, request: chararray, status: chararray, bytes_string: chararray, referrer:chararray, browser: chararray);

-- Remove Bots and convert timestamp
logs_base_nobots = FILTER logs_base BY NOT (browser matches '.*(spider|robot|bot|slurp|Bot|monitis|Baiduspider|AhrefsBot|EasouSpider|HTTrack|Uptime|FeedFetcher|dummy).*');

-- Remove useless columns and convert timestamp
clean_logs = FOREACH logs_base_nobots GENERATE CustomFormatToISO(time,'dd/MMM/yyyy:HH:mm:ss Z') as time, remoteAddr, request, status, bytes_string, referrer, browser;

--------------------------------------------------------------------------------
-- Sessionize the data

clean_logs_sessionized = FOREACH (GROUP clean_logs BY remoteAddr) {
ordered = ORDER clean_logs BY time;
GENERATE FLATTEN(Sessionize(ordered))
AS (time, remoteAddr, request, status, bytes_string, referrer, browser, sessionId);
};

-- The following steps will generate a tsv file in your home directory to download and work with in R
store clean_logs_sessionized into '/user/jmeyer/clean_logs' using PigStorage('\t','-schema');

If you know Pig (or read my previous articles on this theme), you’ll know that Pig has the concept of an “alias”, a dataset you define using filters, aggregations, projections and other operations against other aliases, with a typical Pig script starting with a large data extract and then progressively whittling it down to just the subset of data, and derived data, you’re interested in. When it comes to script execution, Pig only materializes these aliases when you tell it to store the results in permanent storage (file, Hive table etc) with the intermediate steps just being instructions on how to progressively arrive at the final result. Spark works in a similar way with its RDDs, transformations and operations which either create a new dataset based off of an existing one, or materialise the results in permanent storage when you run an “action”. So let’s see if ODI12c for Big Data can create a similar dataflow, based as much as possible on the script I’ve used above.

… and in-fact it can. The screenshot below shows the logical mapping to implement this same Pig dataflow, with the data coming into the mapping as a Hive table, an expression operator creating the equivalent of a Pig alias based off of a filtered, transformed version of the original source data using the Piggybank CustomFormatToISO UDF, and then runs the results of that through an ODI table function that in the background transforms the data using Pig’s GENERATE FLATTEN command and a call to the DataFu Sessionize UDF.


And this is the physical mapping to go with the logical mapping. Note that all of the Pig transformations are contained within a separate execution unit, that contains operators for the expression to transform and filter the initial dataset, and another for the table function.


The table function operator runs the input fields through an arbitrary Pig Latin script, in this case defining another alias to match the table function operator name and using the DataFu Sessionize UDF within a FOREACH to first sort, and then GENERATE FLATTEN the same columns but with a session ID for user sessions with the same IP address and within 60 minutes of each other.


If you’re interested in the detail of how this works and other usages of the new ODI12c for Big Data KMs, then come along to the masterclass I’m running with Jordan Meyer at the Brighton and Atlanta Rittman Mead BI Forums where I’ll go into the full details as part of a live end-to-end demo. Looking at the Pig Latin that comes out of it though, you can see it more or less matches the flow of the hand-written script and implements all of the key steps.


Finally, checking the output of the mapping I can see that the log entries have been sessionized and they’re ready to pass on to the next part of the classification model.


So that to my mind is where the value is in ODI generating Pig and Spark mappings. It’s not so much taking an existing Hive set-based mapping and just running it using a different language, it’s more about being able to implement graphically the sorts of data flows you can create with Pig and Spark, and being able to get access to the rich UDF and data access libraries that these two languages benefit from. As I said, come along to the masterclass Jordan and I are running, and I’ll go into much more detail and show how the mapping is set up, along with other mappings to create an end-to-end Hadoop data integration process.

Categories: BI & Warehousing

SQL Server Tips: How to know if In-Memory Feature is supported by your server?

Yann Neuhaus - Wed, 2015-04-29 00:21


A customer asked me how to know whether the In-Memory feature is supported by a given SQL Server instance.

An easy way is to check the edition, version etc., but now you have a dedicated property for that.

 

On MSDN here, you can find all the properties that you can query with the T-SQL command SERVERPROPERTY.

 

But if you try to run this through all your servers with CMS (Central Management Server), for all SQL Server versions below SQL Server 2014 you get a NULL value.



 

I quickly wrote this script to avoid the NULL value and return more usable information:

SELECT CASE CAST(SERVERPROPERTY('IsXTPSupported') AS INT)
         WHEN 0 THEN 'Not Supported'
         WHEN 1 THEN 'Supported'
         ELSE 'Not Available Version (must be SQL2014 or Higher)'
       END AS In_Memory_Supported

 




I hope this script can help you, and if you want to know more about SQL Server In-Memory technology, come to our event in June 2015: registration & details here.




OUAF 4.3.0.0.1 and CCB 2.5 has been released

Anthony Shorten - Tue, 2015-04-28 20:08

Oracle Utilities is pleased to announce the general availability of Oracle Utilities Customer Care And Billing V2.5 with the new Oracle Utilities Application Framework V4.3.0.0.1. This new release features the following:

  • New Look and Feel - The user experience has been upgraded to support the new Oracle Alta Look and Feel which is being progressively implemented across Oracle products. This release allows for better visibility with new useability features such as bookmarking, named query views as well as a cleaner user interface.
  • Support for different browsers - Oracle Utilities Customer Care And Billing 2.5 and Oracle Utilities Application Framework V4.3.0.0.1 now support Microsoft Internet Explorer in native mode (compatibility mode is no longer required) as well as Mozilla Firefox. Other browsers will be added in service packs.
  • 100% Java implementation - Oracle Utilities Customer Care And Billing 2.5 is now 100% Java. Oracle Utilities Application Framework V4.3.0.0.1 has removed support for COBOL based extensions. This results in simpler implementations with overall lower memory requirements.
  • Optimized for latest Oracle technology - Oracle Utilities Application Framework has been optimized for use with Oracle Database 12c and Oracle WebLogic 12.1.3 and above.
  • New Graph engine - A new engine to generate graphs has been introduced that is backward compatible with existing graphs but now offers additional features to enhance the user experience and display data in a more flexible way. This also removes the necessity for Flash integration for graphs.
  • ConfigLab/Archiving Engine has been removed - The ConfigLab functionality has been removed from the product as the Configuration Migration Assistant, introduced in the past release, provides an alternative. The inbuilt Archiving engine has been removed from the product as the ILM based solution, introduced in Oracle Utilities Customer Care And Billing 2.4.0.2.0, provides a viable, more powerful and more flexible alternative.
  • XAI menu items migration - With the announced deprecation of XML Application Integration (XAI), components retained for future releases have been moved to a new External Message menu and renamed accordingly. XAI Servlet is provided in maintenance mode, with all potential enhancements frozen, in this release to allow customers to migrate to Inbound Web Services and other mechanisms accordingly. 
  • New platform versions supported - New platforms and versions have been added to this release to allow for maximum supportability and performance.

More information is available in the release notes provided with the software download. The software is available for download on Oracle Software Delivery Cloud today. In the coming weeks a number of whitepapers will be released outlining the new features, as well as a set of articles highlighting them.

Pitchbook Lists Most Valuable Ed Tech Companies

Michael Feldstein - Tue, 2015-04-28 18:16

By Phil HillMore Posts (310)

Update: Jeez – sorry about the multiple typos (mistakenly showed in thousands instead of millions). Fixed now.

Pitchbook – a database service for M&A, private equity and venture capital – listed in Hot Topics what they saw as the top ten most valuable ed tech companies based on public valuations[1]. The definition of startup is a little loose, as one company (D2L) was founded in 1999 and public companies are excluded.

Below are the market valuation estimates, to which I have added the year each company was founded along with the total funding by each company in parentheses, according to Crunchbase data.

Company (year founded, funding total)  Market Valuation

  1. Pluralsight (2004, $169m)            $1.0 billion
  2. Instructure (2008, $79m)               $554 million
  3. Lynda.com (1995, $289m)             $456 million
  4. Coursera (2012, $85m)                   $367 million
  5. Open English (2006, $120m)        $350 million
  6. Craftsy (2010, $106m)                   $339 million
  7. D2L (1999, $165m)                        $330 million
  8. Lumos Labs (2005, $68m)           $265 million
  9. Clever (2012, $44m)                      $247 million
  10. Edmodo (2008, $88m)                 $236 million

Some notes to consider:

  • 2U, not listed as they are public (I assume), is worth approximately $1.1 billion.
  • Chegg, also not listed as they are public, is worth approximately $674 million.
  • Lynda.com was purchased by LinkedIn this month for $1.5 billion – obviously a premium – but take the data with a grain of salt due to market variations.
  • Pitchbook listed Sympoz, but I listed them as Craftsy, the company name.
  • Pitchbook noted how US-centric their own list is, but this is partially due to their own data collection methods.
[1] Note that estimates are as of the end of 2014.

The post Pitchbook Lists Most Valuable Ed Tech Companies appeared first on e-Literate.

Setting up Security and Access Control on a Big Data Appliance

Rittman Mead Consulting - Tue, 2015-04-28 14:05

As with all Oracle Engineered Systems, Oracle’s field servicing and Advanced Customer Services (ACS) teams go on-site once a BDA has been sold to a customer and do the racking, installation and initial setup. They will usually ask the customer a set of questions such as “do you want to enable Kerberos authentication”, “what’s the range of IP addresses you want to use for each of the network interfaces”, “what password do you want to use” and so on. It’s usually enough to get a customer going, but in practice we’ve found most customers need a number of other things set up and configured before they use the BDA in development and production; for example:

  • Integrating Cloudera Manager, Hue and other tools with the corporate LDAP directory
  • Setting up HDFS and SSH access for the development and production support team, so they can log in with their usual corporate credentials
  • Coming up with a directory layout and file placement strategy for loading data into the BDA, and then moving it around as data gets processed
  • Configuring some sort of access control to the Hive tables (and sometimes HDFS directories) that users use to get access to the Hadoop data
  • Devising a backup and recovery strategy, and thinking about DR (disaster recovery)
  • Linking the BDA to other tools and products in the Oracle Big Data and Engineered Systems family; Exalytics, for example, or setting up ODI and OBIEE to access data in the BDA

The first task we’re usually asked to do is integrate Cloudera Manager, the web-based admin console for the Hadoop parts of the BDA, with the corporate LDAP server. By doing this we can enable users to log into Cloudera Manager with their usual corporate login (and restrict access to just certain LDAP groups, and further segregate users into admin ones and stop/start/restart services-type ones), and similarly allow users to log into Hue using their regular LDAP credentials. In my experience Cloudera Manager is easier to set up than Hue, but let’s look at a high-level at what’s involved.

LDAP Integration for Hue, Cloudera Manager, Hive etc

In our Rittman Mead development lab, we have OpenLDAP running on a dedicated appliance VM and a number of our team set up as LDAP users. We’ve defined four LDAP groups, two for Cloudera Manager and two for Hue, with varying degrees of access for each product.


Setting up Cloudera Manager is pretty straightforward, using the Administration > Settings menu in the Cloudera Manager web UI (note this option is only available for the paid, Cloudera Enterprise version, not the free Cloudera Express version). Hue security integration is configured through the Hue service menu, and again you can configure the LDAP search credentials, any LDAPS or certificate setup, and then within Hue itself you can define groups to determine what Hue features each set of users can use.


Where Hue is a bit more fiddly (last time I looked) is in controlling access to the tool itself; Cloudera Manager lets you explicitly define which LDAP groups can access the tool with other users then locked-out, but Hue either allows all authenticated LDAP users to login to the tool or makes you manually import each authorised user to grant them access (you can then either have Hue check-back to the LDAP server for their password each login, or make a copy of the password and store it within Hue for later use, potentially getting out-of-sync with their LDAP directory password version). In practice what I do is use the manual authorisation method but then have Hue link back to the LDAP server to check the users’ password, and then map their LDAP groups into Hue groups for further role-based access control. There’s a similar process for Hive and Impala too, where you can configure the services to authenticate against LDAP, and also have Hive use user impersonation so their LDAP username is passed-through the ODBC or JDBC connection and queries run as that particular user.

Configuring SSH and HDFS Access and Setting-up Kerberos Authentication

Most developers working with Hadoop and the BDA will either SSH (Secure Shell) into the cluster and work directly on one of the nodes, or connect into their workstation which has been configured as a Hadoop client for the BDA. If they SSH in directly to the cluster they’ll need Linux user accounts there, and if they go in via their workstation the Hadoop client installed there will grant them access as the user they’re logged-into the workstation as. On the BDA you can either set-up user accounts on each BDA node separately, or more likely configure user authentication to connect to the corporate LDAP and check credentials there.


One thing you should definitely do, either when your BDA is initially setup by Oracle or later on post-install, is configure your Hadoop cluster as a secure cluster using Kerberos authentication. Hadoop normally trusts that each user accessing Hadoop services via the Hadoop Filesystem API (FS API) is who they say they are, but using the example above I could easily setup an “oracle” user on my workstation and then access all Hadoop services on the main cluster without the Hadoop FS API actually checking that I am who I say I am – in other words the Hadoop FS API shell doesn’t check your password, it merely runs a “whoami” Linux command to determine my username and grants me access as them.


The way to address this is to configure the cluster for Kerberos authentication, so that users have to have a valid Kerberos ticket before accessing any secured services (Hive, HDFS etc) on the cluster. I covered this as part of an article on configuring OBIEE11g to connect to Kerberos-secured Hadoop clusters last Christmas and you can either do it as part of the BDA install, or later on using a wizard in more recent versions of CDH5, the Cloudera Hadoop distribution that the BDA uses.
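
As a rough illustration of the difference this makes (the principal, realm and prompt below are made up, not from a real cluster), on a kerberized cluster the same Hadoop FS commands are refused until you hold a valid ticket:

$ hdfs dfs -ls /data/rm_website_analysis
# fails with a GSS/Kerberos authentication error when no ticket is cached

$ kinit mrittman@EXAMPLE.COM
Password for mrittman@EXAMPLE.COM:
$ klist
# shows the cached ticket-granting ticket

$ hdfs dfs -ls /data/rm_website_analysis
# now succeeds, running as the authenticated principal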


The complication with Kerberos authentication is that your organization needs to have a Kerberos KDC (Key Distribution Center) server setup already, which will then link to your corporate LDAP or Active Directory service to check user credentials when they request a Kerberos ticket. The BDA installation routine gives you the option of creating a KDC as part of the BDA setup, but that’s only really useful for securing inter-cluster connections between services as it won’t be checking back to your corporate directory. Ideally you’d set up a connection to an existing, well-tested and well-understood Kerberos KDC server and secure things that way – but beware that not all Oracle and other tools that run on the BDA are setup for Kerberos authentication – OBIEE and ODI are, for example, but the current 1.0 version of Big Data Discovery doesn’t yet support Kerberos-secured clusters.

Coming-up with the HDFS Directory Layout

It’s tempting with Hadoop to just have a free-for-all with the Hadoop HDFS filesystem setup, maybe restricting users to their own home directory but otherwise letting them put files anywhere. HDFS file data for Hive tables typically goes in Hive’s own filesystem area /user/hive/warehouse, but users can of course create Hive tables over external data files stored in their own part of the filesystem.

What we tend to do (inspired by Gwen Shapira’s “Scaling ETL with Hadoop” presentation) is create separate areas for incoming data, ETL processing data and process output data, with developers then told to put shared datasets in these directories rather than their own. I generally create additional Linux users for each of these directories so that these can own the HDFS files and directories rather than individual users, and then I can control access to these directories using HDFS’s POSIX permissions. A typical user setup script might look like this:

[oracle@bigdatalite ~]$ cat create_mclass_users.sh 
sudo groupadd bigdatarm
sudo groupadd rm_website_analysis_grp
useradd mrittman -g bigdatarm
useradd ryeardley -g bigdatarm
useradd mpatel -g bigdatarm
useradd bsteingrimsson -g bigdatarm
useradd spoitnis -g bigdatarm
useradd rm_website_analysis -g rm_website_analysis_grp
echo mrittman:welcome1 | chpasswd
echo ryeardley:welcome1 | chpasswd
echo mpatel:welcome1 | chpasswd
echo bsteingrimsson:welcome1 | chpasswd
echo spoitnis:welcome1 | chpasswd
echo rm_website_analysis:welcome1 | chpasswd

whilst a script to setup the directories for these users, and the application user, might look like this:

[oracle@bigdatalite ~]$ cat create_hdfs_directories.sh 
set echo on
#setup individual user HDFS directories, and scratchpad areas
sudo -u hdfs hadoop fs -mkdir /user/mrittman
sudo -u hdfs hadoop fs -mkdir /user/mrittman/scratchpad
sudo -u hdfs hadoop fs -mkdir /user/ryeardley
sudo -u hdfs hadoop fs -mkdir /user/ryeardley/scratchpad
sudo -u hdfs hadoop fs -mkdir /user/mpatel
sudo -u hdfs hadoop fs -mkdir /user/mpatel/scratchpad
sudo -u hdfs hadoop fs -mkdir /user/bsteingrimsson
sudo -u hdfs hadoop fs -mkdir /user/bsteingrimsson/scratchpad
sudo -u hdfs hadoop fs -mkdir /user/spoitnis
sudo -u hdfs hadoop fs -mkdir /user/spoitnis/scratchpad
 
#setup etl directories
sudo -u hdfs hadoop fs -mkdir -p /data/rm_website_analysis/logfiles/incoming
sudo -u hdfs hadoop fs -mkdir /data/rm_website_analysis/logfiles/archive/
sudo -u hdfs hadoop fs -mkdir -p /data/rm_website_analysis/tweets/incoming
sudo -u hdfs hadoop fs -mkdir /data/rm_website_analysis/tweets/archive
 
#change ownership of user directories
sudo -u hdfs hadoop fs -chown -R mrittman /user/mrittman
sudo -u hdfs hadoop fs -chown -R ryeardley /user/ryeardley
sudo -u hdfs hadoop fs -chown -R mpatel /user/mpatel
sudo -u hdfs hadoop fs -chown -R bsteingrimsson /user/bsteingrimsson
sudo -u hdfs hadoop fs -chown -R spoitnis /user/spoitnis
sudo -u hdfs hadoop fs -chgrp -R bigdatarm /user/mrittman
sudo -u hdfs hadoop fs -chgrp -R bigdatarm /user/ryeardley
sudo -u hdfs hadoop fs -chgrp -R bigdatarm /user/mpatel
sudo -u hdfs hadoop fs -chgrp -R bigdatarm /user/bsteingrimsson
sudo -u hdfs hadoop fs -chgrp -R bigdatarm /user/spoitnis
 
#change ownership of shared directories
sudo -u hdfs hadoop fs -chown -R rm_website_analysis /data/rm_website_analysis
sudo -u hdfs hadoop fs -chgrp -R rm_website_analysis_grp /data/rm_website_analysis

Giving you a directory structure like this (with the directories for Hive, Impala, HBase etc removed for clarity)


In terms of Hive and Impala data, there are varying opinions on whether to create tables as EXTERNAL and store the data (including sub-directories for table partitions) in the /data/ HDFS area or let Hive store them in its own /user/hive/warehouse area – I tend to let Hive store them within its area as I use Apache Sentry to then control access to those tables’ data.

Setting up Access Control for HDFS, Hive and Impala Data

At its simplest level, access control can be setup on the HDFS directory structure by using HDFS’s POSIX security model:

  • Each HDFS file or directory has an owner, and a group
  • You can add individual Linux users to a group, but an HDFS object can only have one group owning it

What this means in practice though is you have to jump through quite a few hoops to set up finer-grained access control to these HDFS objects. What we tend to do is set RW access to the /data/ directory and subdirectories to the application user account (rm_website_analysis in this case), and RO access to that user’s associated group (rm_website_analysis_grp). If users then want access to that application’s data we add them to the relevant application group, and a user can belong to more than one group, making it possible to grant access to more than one application data area.

[oracle@bigdatalite ~]$ cat ./set_hdfs_directory_permissions.sh 
sudo -u hdfs hadoop fs -chmod -R 750 /data/rm_website_analysis
usermod -G rm_website_analysis_grp mrittman

making it possible for the main application owner to write data to the directory, but group members only have read access. What you can also now do with more recent versions of Hadoop (CDH5.3 onwards, for example) is define access control lists to go with individual HDFS objects, but this feature isn’t enabled by default as it consumes more namenode memory than the traditional POSIX approach. What I prefer to do though is control access by restricting users to only accessing Hive and Impala tables, and using Apache Sentry, or Oracle Big Data SQL, to provide role-based access control over them.

Apache Sentry is a project originally started by Cloudera and then adopted by the Apache Foundation as an incubating project. It aims to provide four main authorisation features over Hive, Impala (and more recently, the underlying HDFS directories and datafiles):

  • Secure authorisation, with LDAP integration and Kerberos prerequisites for Sentry enablement
  • Fine-grained authorisation down to the column-level, with this feature provided by granting access to views containing subsets of columns at this point
  • Role-based authorisation, with different Sentry roles having different permissions on individual Hive and Impala tables
  • Multi-tenant administration, with a central point of administration for Sentry permissions

From this Cloudera presentation on Sentry on Slideshare, Sentry inserts itself into the query execution process and checks access rights before allowing the rest of the Hive query to execute. Sentry is configured through security policy files, or through a new web-based interface introduced with recent versions of CDH5, for example.
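
To give a flavour of the role-based model (the role and Hive database names below are illustrative, reusing the earlier application and group names), Sentry grants issued through Hive as an admin user look something like this:

-- Run via beeline/HiveServer2 as a Sentry admin
CREATE ROLE website_analysis_ro;
GRANT SELECT ON DATABASE rm_website_analysis TO ROLE website_analysis_ro;
GRANT ROLE website_analysis_ro TO GROUP bigdatarm;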


The other option for customers using Oracle Exadata, Oracle Big Data Appliance and Oracle Big Data SQL is to use the Oracle Database’s access control mechanisms to govern access to Hive (and Oracle) data, and also set up fine-grained access control (VPD), data masking and redaction to create a more “enterprise” access control system.


So these are typically tasks we perform when on-boarding an Oracle BDA for a customer. If this is of interest to you and you can make it to either Brighton, UK next week or Atlanta, GA the week after, I’ll be covering this topic at the Rittman Mead BI Forum 2015 as part of the one-day masterclass with Jordan Meyer on the Wednesday of each week, along with topics such as creating ETL data flows using Oracle Data Integrator for Big Data, using Oracle Big Data Discovery for faceted search and cataloging of the data reservoir, and reporting on Hadoop and NoSQL data using Oracle Business Intelligence 11g. Spaces are still available so register now if you’d like to hear more on this topic.

Categories: BI & Warehousing

Is there a Support Blog for my product area?

Joshua Solomin - Tue, 2015-04-28 13:16
Welcome to Support Blogs

To improve the timeliness of delivering technical insight, updates, and support news to you, many of the Oracle Product teams are moving from newsletters to blogs. You may access the Support Blogs directly, via the Support Product Index (Document 222.1), or by searching for Product Support Blogs in My Oracle Support. Subscribe to the blog posting to ensure you never miss an update. Watch this short video to see how.

Oracle Developer Cloud Service - Automating Builds for Oracle ADF Applications

Shay Shmeltzer - Tue, 2015-04-28 11:49

Following up on the previous blog that showed how to get your ADF application into the Developer Cloud Service git repository, this entry will show you the next step in the lifecycle - executing builds.

The Oracle Developer Cloud Service (DevCS) supports build automation with both Maven and Ant scripts - and in this demo I'm showing you how to use the Ant option. One of the unique aspects of DevCS for customers who are  using Oracle ADF and JDeveloper is that the cloud comes pre-configured with the ADF libraries needed to compile your code, and also with support for OJDeploy so you can leverage deployment profiles that you defined for your application.

In fact DevCS comes with support for two ADF versions - 11.1.1.7.1 and 12.1.3 (as of the time of this blog).

In the video below you'll see

  • How to add a build file for your ADF application
  • How to  configure the build file to work in the cloud environment
  • How to define a build job and execute it
  • How to look at the log files for the build job
  • How to automate the build execution to happen when changes are committed to the git repository

The build in the example above just does the packaging, but in more realistic scenarios you can use similar build processes to create ADF libraries from projects, automate testing, modify configuration and more.

There are a couple of files that are used in the demo that you might want to use in your implementation:

The build.xml file: 

  <property environment="env" /> 
  <property file="build.properties"/>
    <target name="deploy" description="Deploy JDeveloper profiles">
    <taskdef name="ojdeploy"
             classname="oracle.jdeveloper.deploy.ant.OJDeployAntTask"
             uri="oraclelib:OJDeployAntTask"
             classpath="${oracle.jdeveloper.ant.library}"/>
    <ora:ojdeploy xmlns:ora="oraclelib:OJDeployAntTask"
                  executable="${oracle.jdeveloper.ojdeploy.path}"
                  ora:buildscript="${oracle.jdeveloper.deploy.dir}/ojdeploy-build.xml"
                  ora:statuslog="${oracle.jdeveloper.deploy.dir}/ojdeploy-statuslog.xml">
      <ora:deploy>
        <ora:parameter name="workspace"
                       value="${oracle.jdeveloper.workspace.path}"/>
        <ora:parameter name="profile"
                       value="${oracle.jdeveloper.deploy.profile.name}"/>
        <ora:parameter name="nocompile" value="false"/>
        <ora:parameter name="outputfile"
                       value="${oracle.jdeveloper.deploy.outputfile}"/>
      </ora:deploy>
    </ora:ojdeploy>
  </target>

The build.properties file I used can be found here.
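
If you don't have that file to hand, a rough sketch of the sort of entries it needs is shown below. The property names are the ones referenced by the build.xml above; the paths, workspace and profile name are illustrative placeholders (not the values from the demo) that you would adjust for your own application and ADF version:

# Illustrative build.properties - adjust workspace, profile and paths for your app.
# ${env.*} values come from the Developer Cloud Service build environment variables.
oracle.jdeveloper.ant.library=${env.ORACLE_HOME_12C3}/jdev/lib/ant-jdeveloper.jar
oracle.jdeveloper.ojdeploy.path=${env.ORACLE_HOME_12C3}/jdev/bin/ojdeploy
oracle.jdeveloper.workspace.path=MyApp/MyApp.jws
oracle.jdeveloper.deploy.profile.name=MyApp_Profile
oracle.jdeveloper.deploy.dir=deploy
oracle.jdeveloper.deploy.outputfile=deploy/MyApp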

The mask for the build automatic execution is */1 * * * *

Note that in the properties file there are references to environment variables that you will need to change if you are looking to deploy an 11.1.1.* app - specifically the options for 12 and 11 are:

WLS_HOME_12C3=/opt/Oracle/Middleware12c3/wlserver
WLS_HOME_11G=/opt/Oracle/Middleware/wlserver_10.3
MIDDLEWARE_HOME_12C3=/opt/Oracle/Middleware12c3
MIDDLEWARE_HOME_11G=/opt/Oracle/Middleware
ORACLE_HOME_12C3=/opt/Oracle/Middleware12c3/jdeveloper
ORACLE_HOME_11G=/opt/Oracle/Middleware/jdeveloper
Categories: Development

Google Search Appliance (GSA) Version 7.4 Released

Wanted to drop a quick note here that Google released the latest version of the Google Search Appliance software last week. That brings the most current version up to 7.4.0.G.72 and officially marks the end of life for the 7.0 version of the appliance software.

New features from the support site posting:

  • Seamless Integration with Microsoft: we’re releasing our SharePoint and Active Directory 4.0 connectors out of beta. These connectors provide improved scalability, easier configuration and tighter integration with SharePoint. Additionally, the GSA now supports ADFS which improves our ability to integrate with Windows security.
  • Strengthening GSA as a Platform: GSA 7.4 improves the overall quality of the GSA as a search platform. Examples of this include better monitoring through exposing new SNMP metrics, better administration of the internal group resolution repository (GroupDB) and more support for security standards through a generic SAML Identity Provider (eg: OpenSAML).
  • Improved Performance: this new GSA version provides better performance in 3 areas: crawling, serving and authorizing search results.
  • Enhanced Connector Platform: we have released new features to our 4.0 connector platform like a new beta Database connector and support for SharePoint 2013 multi tenant. We’ll continue to release new Connector functionality regularly.

I am particularly excited about the ability to see into the onboard GroupDB and manage it through the admin console.

Link to 7.4 documentation: https://support.google.com/gsa/answer/6187273?hl=en&ref_topic=2709671

The post Google Search Appliance (GSA) Version 7.4 Released appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

getting started with postgres plus advanced server (1) - setting up ppas

Yann Neuhaus - Tue, 2015-04-28 10:35

I did several posts around PostgreSQL and Postgres Plus Advanced Server in the past. What is missing is a beginner's guide on how to get Postgres Plus Advanced Server up and running, including a solution for backup and recovery, high availability and monitoring. So I thought I'd write a guide on how to do exactly that, consisting of:

  1. setting up postgres plus advanced server
  2. setting up a backup and recovery server
  3. setting up a hot standby database
  4. setting up monitoring

As this is the first post of the series, it covers getting PPAS installed and creating the first database cluster.

Obviously the first thing to do is to install an operating system. Several are supported; just choose the one you like. An example setup can be found here. Once PPAS has been downloaded and transferred to the system where it is supposed to be installed, we can start. There are several ways to install PPAS on the system, but before you begin, Java should be installed. For yum-based distributions this is done by:

yum install java
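
A quick check confirms that the prerequisite is in place:

# verify that a Java runtime is available on the path
java -version
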
Using the standalone installer in interactive mode

Starting the installation is just a matter of extracting the file and executing it:

[root@oel7 tmp]# ls
ppasmeta-9.4.1.3-linux-x64.tar.gz
[root@oel7 tmp]# tar -axf ppasmeta-9.4.1.3-linux-x64.tar.gz 
[root@oel7 tmp]# ls
ppasmeta-9.4.1.3-linux-x64  ppasmeta-9.4.1.3-linux-x64.tar.gz
[root@oel7 tmp]# ppasmeta-9.4.1.3-linux-x64/ppasmeta-9.4.1.3-linux-x64.run 

(Installer screenshots omitted.) Step through the wizard and provide the username and password you used for downloading the product when prompted; once the wizard finishes, the installation is done.

Using the standalone installer in interactive text mode

If you do not want to use the graphical user interface you can launch the installer in interactive text mode:

# ppasmeta-9.4.1.3-linux-x64/ppasmeta-9.4.1.3-linux-x64.run --mode text

Either go with the default options or adjust what you like. The questions should be self-explanatory:

Language Selection

Please select the installation language
[1] English - English
[2] Japanese - 日本語
[3] Simplified Chinese - 简体中文
[4] Traditional Chinese - 繁体中文
[5] Korean - 한국어
Please choose an option [1] : 1
----------------------------------------------------------------------------
Welcome to the Postgres Plus Advanced Server Setup Wizard.

----------------------------------------------------------------------------
Please read the following License Agreement. You must accept the terms of this 
agreement before continuing with the installation.

Press [Enter] to continue:
.....
.....
Press [Enter] to continue:

Do you accept this license? [y/n]: y

----------------------------------------------------------------------------
User Authentication

This installation requires a registration with EnterpriseDB.com. Please enter 
your credentials below. If you do not have an account, Please create one now on 
https://www.enterprisedb.com/user-login-registration

Email []: 

Password : xxxxx

----------------------------------------------------------------------------
Please specify the directory where Postgres Plus Advanced Server will be 
installed.

Installation Directory [/opt/PostgresPlus]: 

----------------------------------------------------------------------------
Select the components you want to install.

Database Server [Y/n] :y

Connectors [Y/n] :y

Infinite Cache [Y/n] :y

Migration Toolkit [Y/n] :y

Postgres Enterprise Manager Client [Y/n] :y

pgpool-II [Y/n] :y

pgpool-II Extensions [Y/n] :y

EDB*Plus [Y/n] :y

Slony Replication [Y/n] :y

PgBouncer [Y/n] :y

Is the selection above correct? [Y/n]: y

----------------------------------------------------------------------------
Additional Directories

Please select a directory under which to store your data.

Data Directory [/opt/PostgresPlus/9.4AS/data]: 

Please select a directory under which to store your Write-Ahead Logs.

Write-Ahead Log (WAL) Directory [/opt/PostgresPlus/9.4AS/data/pg_xlog]: 

----------------------------------------------------------------------------
Configuration Mode

Postgres Plus Advanced Server always installs with Oracle(R) compatibility features and maintains full PostgreSQL compliance. Select your style preference for installation defaults and samples.

The Oracle configuration will cause the use of certain objects  (e.g. DATE data types, string operations, etc.) to produce Oracle compatible results, create the same Oracle sample tables, and have the database match Oracle examples used in the documentation.

Configuration Mode

[1] Oracle Compatible
[2] PostgreSQL Compatible
Please choose an option [1] : 1

----------------------------------------------------------------------------
Please provide a password for the database superuser (enterprisedb). A locked 
Unix user account (enterprisedb) will be created if not present.

Password :
Retype Password :
----------------------------------------------------------------------------
Additional Configuration

Please select the port number the server should listen on.

Port [5444]: 

Select the locale to be used by the new database cluster.

Locale

[1] [Default locale]
......
Please choose an option [1] : 1

Install sample tables and procedures. [Y/n]: Y

----------------------------------------------------------------------------
Dynatune Dynamic Tuning:
Server Utilization

Please select the type of server to determine the amount of system resources 
that may be utilized:

[1] Development (e.g. a developer's laptop)
[2] General Purpose (e.g. a web or application server)
[3] Dedicated (a server running only Postgres Plus)
Please choose an option [2] : 2

----------------------------------------------------------------------------
Dynatune Dynamic Tuning:
Workload Profile

Please select the type of workload this server will be used for:

[1] Transaction Processing (OLTP systems)
[2] General Purpose (OLTP and reporting workloads)
[3] Reporting (Complex queries or OLAP workloads)
Please choose an option [1] : 2

----------------------------------------------------------------------------
Advanced Configuration

----------------------------------------------------------------------------
PgBouncer Listening Port [6432]: 

----------------------------------------------------------------------------
Service Configuration

Autostart PgBouncer Service [Y/n]: n

Autostart pgAgent Service [Y/n]: n

Update Notification Service [Y/n]: n

The Update Notification Service informs, downloads and installs whenever 
security patches and other updates are available for your Postgres Plus Advanced 
Server installation.

----------------------------------------------------------------------------
Pre Installation Summary

Following settings will be used for installation:

Installation Directory: /opt/PostgresPlus
Data Directory: /opt/PostgresPlus/9.4AS/data
WAL Directory: /opt/PostgresPlus/9.4AS/data/pg_xlog
Database Port: 5444
Database Superuser: enterprisedb
Operating System Account: enterprisedb
Database Service: ppas-9.4
PgBouncer Listening Port: 6432

Press [Enter] to continue:

----------------------------------------------------------------------------
Setup is now ready to begin installing Postgres Plus Advanced Server on your 
computer.

Do you want to continue? [Y/n]: Y

----------------------------------------------------------------------------
Please wait while Setup installs Postgres Plus Advanced Server on your computer.

 Installing Postgres Plus Advanced Server
 0% ______________ 50% ______________ 100%
 ########################################
 Installing Database Server ...
 Installing pgAgent ...
 Installing Connectors ...
 Installing Migration Toolkit ...
 Installing EDB*Plus ...
 Installing Infinite Cache ...
 Installing Postgres Enterprise Manager Client ...
 Installing Slony Replication ...
 Installing pgpool-II ...
 Installing pgpool-II Extensions ...
 Installing PgBouncer ...
 Installing StackBuilder Plus ...
 #

----------------------------------------------------------------------------
Setup has finished installing Postgres Plus Advanced Server on your computer.

done.

Using the standalone installer in unattended mode

Another option is to run the installer in unattended mode, either by providing all the parameters on the command line or by creating a configuration file. Here is an example that passes the parameters on the command line; most of them can be omitted, in which case the defaults are applied:

ppasmeta-9.4.1.3-linux-x64/ppasmeta-9.4.1.3-linux-x64.run --mode unattended \
   --enable-components dbserver,connectors,infinitecache,edbmtk,pem_client,pgpool,pgpoolextension,edbplus,replication,pgbouncer \
   --installer-language en --superaccount enterprisedb \
   --servicename ppas-9.4 --serviceaccount enterprisedb \
   --prefix /opt/PostgresPlus --datadir /opt/PostgresPlus/9.4AS/data \
   --xlogdir /opt/PostgresPlus/9.4AS/data/pg_xlog \
   --databasemode oracle --superpassword enterprisedb \
   --webusername xx.xx@xx.xxx --webpassword xxxxx

 Installing Database Server ...
 Installing pgAgent ...
 Installing Connectors ...
 Installing Migration Toolkit ...
 Installing EDB*Plus ...
 Installing Infinite Cache ...
 Installing Postgres Enterprise Manager Client ...
 Installing Slony Replication ...
 Installing pgpool-II ...
 Installing pgpool-II Extensions ...
 Installing PgBouncer ...
 Installing StackBuilder Plus ...X11 connection rejected because of wrong authentication.
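
If you prefer the configuration-file variant mentioned above, the same parameters can be placed in an option file. This is only a minimal sketch; it assumes the installer accepts the usual --optionfile switch of these packaged installers, with the same parameter names as on the command line:

# minimal sketch, assuming --optionfile is supported (key=value pairs,
# parameter names as on the command line above; component list shortened)
cat > /tmp/ppas.options <<'EOF'
mode=unattended
enable-components=dbserver,connectors,edbmtk,edbplus
prefix=/opt/PostgresPlus
datadir=/opt/PostgresPlus/9.4AS/data
xlogdir=/opt/PostgresPlus/9.4AS/data/pg_xlog
databasemode=oracle
superaccount=enterprisedb
superpassword=enterprisedb
EOF
ppasmeta-9.4.1.3-linux-x64/ppasmeta-9.4.1.3-linux-x64.run --optionfile /tmp/ppas.options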

Done. No matter which installation method you chose, the result is the same: PPAS is installed and the database cluster is initialized. You can check the processes:

# ps -ef | grep postgres
enterpr+ 12759     1  0 12:03 ?        00:00:00 /opt/PostgresPlus/9.4AS/bin/edb-postgres -D /opt/PostgresPlus/9.4AS/data
enterpr+ 12760 12759  0 12:03 ?        00:00:00 postgres: logger process   
enterpr+ 12762 12759  0 12:03 ?        00:00:00 postgres: checkpointer process   
enterpr+ 12763 12759  0 12:03 ?        00:00:00 postgres: writer process   
enterpr+ 12764 12759  0 12:03 ?        00:00:00 postgres: wal writer process   
enterpr+ 12765 12759  0 12:03 ?        00:00:00 postgres: autovacuum launcher process   
enterpr+ 12766 12759  0 12:03 ?        00:00:00 postgres: stats collector process   
enterpr+ 12882 12759  0 12:03 ?        00:00:00 postgres: enterprisedb edb ::1[45984] idle
root     13866  2619  0 12:15 pts/0    00:00:00 grep --color=auto postgres
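
Alternatively, you can ask pg_ctl for the status of the cluster (run as root, switching to the enterprisedb account that owns the data directory):

# query the cluster status via pg_ctl; paths match the installation above
su - enterprisedb -c "/opt/PostgresPlus/9.4AS/bin/pg_ctl status -D /opt/PostgresPlus/9.4AS/data"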

Or check the services that were created:

# chkconfig --list | grep ppas

Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration.

      If you want to list systemd services use 'systemctl list-unit-files'.
      To see services enabled on particular target use
      'systemctl list-dependencies [target]'.

ppas-9.4       	0:off	1:off	2:on	3:on	4:on	5:on	6:off
ppas-agent-9.4 	0:off	1:off	2:on	3:on	4:on	5:on	6:off
ppas-infinitecache	0:off	1:off	2:off	3:off	4:off	5:off	6:off
ppas-pgpool    	0:off	1:off	2:off	3:off	4:off	5:off	6:off
ppas-replication-9.4	0:off	1:off	2:off	3:off	4:off	5:off	6:off
# ls -la /etc/init.d/ppas*
-rwxr-xr-x. 1 root root 3663 Apr 23 12:03 /etc/init.d/ppas-9.4
-rwxr-xr-x. 1 root root 2630 Apr 23 12:03 /etc/init.d/ppas-agent-9.4
-rwxr-xr-x. 1 root root 1924 Apr 23 12:04 /etc/init.d/ppas-infinitecache
-rwxr-xr-x. 1 root root 3035 Apr 23 12:04 /etc/init.d/ppas-pgpool
-rwxr-xr-x. 1 root root 3083 Apr 23 12:04 /etc/init.d/ppas-replication-9.4
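
These init scripts can be used to control the database service in the usual way, assuming the standard start/stop/restart/status actions:

# check and restart the database service via the SysV init script
service ppas-9.4 status
service ppas-9.4 restart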

As the account that installed the software should not be used to work with the database, let's create an OS account for connecting to the database:

# groupadd postgres
# useradd -g postgres postgres
# passwd postgres
Changing password for user postgres.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.

PPAS ships an environment file that sets all the required environment variables. Let's source it from the profile so it is available for future logins:

su - postgres
echo ". /opt/PostgresPlus/9.4AS/pgplus_env.sh" >> .bash_profile

Once you log in to the postgres account the environment is in place:

$ env | grep PG
PGPORT=5444
PGDATABASE=edb
PGLOCALEDIR=/opt/PostgresPlus/9.4AS/share/locale
PGDATA=/opt/PostgresPlus/9.4AS/data
$ env | grep EDB
EDBHOME=/opt/PostgresPlus/9.4AS

Now we are ready to log in to the database:

$ psql -U enterprisedb
Password for user enterprisedb: 
psql.bin (9.4.1.3)
Type "help" for help.

edb=# \l
                                           List of databases
   Name    |    Owner     | Encoding |   Collate   |    Ctype    | ICU |       Access privileges       
-----------+--------------+----------+-------------+-------------+-----+-------------------------------
 edb       | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | 
 postgres  | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | 
 template0 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
 template1 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
(4 rows)
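
Since the cluster was initialized in Oracle-compatible mode, a quick sanity check is to run an Oracle-style query against the edb database. A minimal sketch, assuming the Oracle-compatible objects (such as DUAL) were created as selected during installation:

# Oracle-style query should work when the cluster runs in Oracle-compatible mode
psql -U enterprisedb -d edb -c "select sysdate from dual;"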

Mission completed. The next post will set up a backup and recovery server for backing up and restoring the PPAS database cluster.