Feed aggregator

APEX 5.0: Bye bye Tabs, welcome to Navigation Lists

Dimitri Gielis - Thu, 2014-09-11 02:22
In previous versions of Oracle APEX (< 5.0) you could use Tabs for the navigation in your application.


Tabs were not that flexible: they typically sat at the top of your page with a specific look and feel. Since APEX 4.x I had started to dismiss Tabs in most cases; instead, I would use a List with the "Page Tabs" template if people wanted that look and feel.

APEX 5.0 introduces the concept of a "Navigation List" that replaces tabs. It's the same mechanism as before (a normal List, which you find in Shared Components), but you can now define in your User Interface which list to use as your Navigation List.

Go to Shared Components > User Interface Attributes:


Next in the User Interface section, click on Desktop (or the User Interface you want to adapt):


In the Attributes section you can define the List you want to use as the "Navigation List":


Behind the scenes the Navigation List is put on the screen where the #NAVIGATION_LIST# token is specified in your Page Template.
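For illustration, here is a minimal sketch of what that can look like in a page template's header markup; the surrounding elements and class names are made up, and only the #NAVIGATION_LIST# substitution token comes from APEX:

<header class="app-header">
  <div class="app-nav">
    #NAVIGATION_LIST# <!-- APEX renders the chosen Navigation List here -->
  </div>
</header>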

The Navigation List is another example where APEX 5.0 makes common behaviour of developers more declarative and embedded in the product.
Categories: Development

2002 Honda passport timing belt replacement

EBIZ SIG BLOG - Wed, 2014-09-10 19:14
The Honda Passport was a sport-utility vehicle sold by the Japanese maker from 1994 through 2002. It was replaced in 2003 by the Honda Pilot, a crossover utility vehicle that shared some of the underpinnings of the Honda Odyssey minivan. Unlike the Pilot, which followed the lead of the Toyota Highlander in placing a mid-size crossover body on the underpinnings of what was essentially a car, the Passport was built on a rear-wheel-drive truck chassis with all-wheel drive as an option. The ride quality and handling reflected its truck origins, so the Pilot was a striking step forward when it replaced the Passport.

The Passport was actually a re-badged Isuzu Rodeo, a truck-based SUV built in Indiana at a plant that Subaru and Isuzu shared at the time. The first-generation Passport, sold from 1994 through 1997, offered a choice of a 120-horsepower 2.6-liter four-cylinder engine paired with a five-speed manual gearbox, or a 175-hp 3.2-liter V-6 with an available four-speed automatic transmission. Rear-wheel drive was standard, and all-wheel drive could be ordered as an option. Trim levels were base and EX.
In 1998, a second-generation Passport was introduced. It was still based on a truck chassis, but it came with more comfort and safety features than the earlier model and was considerably more refined. The four-door sport-utility vehicle came standard with a 205-hp 3.2-liter V-6, matched with a five-speed manual gearbox on base versions, though a four-speed automatic transmission was also available.

The second Passport was offered in two trim levels: the LX could be ordered with the five-speed manual, with four-wheel drive as an option, and the more upscale EX came with the four-speed automatic, again with either drive option. While the spare tire on the base LX was mounted on a swinging bracket on the tailgate, the EX relocated it to a carrier beneath the cargo area. For the 2000 model year, the Honda Passport received a handful of updates, including optional 16-inch wheels on the LX and available two-tone paint treatments.
When considering the Passport as a used car, buyers should know that the 1998-2002 models were recalled in October 2010 because of frame corrosion in the general area where the rear suspension was mounted. Any vehicles without visible corrosion were treated with a rust-resistant compound, but reinforcement brackets were to be installed in those with more severe rust. In some cases the damage was so severe that Honda simply repurchased the vehicles from their owners. Used-car shoppers looking at Passports should be sure to find out whether the car had been through the recall, and what, if anything, was done.
Categories: APPS Blogs


Creating a Pivotal GemFireXD Data Source Connection from IntelliJ IDEA 13.x

Pas Apicella - Wed, 2014-09-10 19:04
In order to create a Pivotal GemFireXD Data Source connection from IntelliJ IDEA 13.x, follow the steps below. You will need to define a GemFireXD driver prior to creating the Data Source itself.

1. Bring up the Databases panel.

2. Define a GemFireXD Driver as follows:


3. Once defined, select it using the following options; you're using the driver you created in step 2 above:

+ -> Data Source -> com.pivotal.gemfirexd.jdbc.ClientDriver 

4. Create a Connection as shown below. You will need a running GemFireXD cluster at this point in order to connect.



5.  Once connected you can browse objects as shown below.



6. Finally we can run DML/DDL directly from IntelliJ as shown below.
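As a side note, the same driver class can be used outside the IDE from plain JDBC. A minimal sketch, assuming a cluster member listening on localhost:1527 (a common GemFireXD default) and the GemFireXD client JAR on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GemFireXDQuickTest {
    public static void main(String[] args) throws Exception {
        // Same driver class as selected in IntelliJ in step 2
        Class.forName("com.pivotal.gemfirexd.jdbc.ClientDriver");
        // Host and port are assumptions - point this at your running cluster
        try (Connection conn = DriverManager.getConnection("jdbc:gemfirexd://localhost:1527/");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select tablename from sys.systables")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // list the visible tables
            }
        }
    }
}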


http://feeds.feedburner.com/TheBlasFromPas
Categories: Fusion Middleware

Documentum upgrade project - D2EventSenderMailMethod & bug with Patch 12

Yann Neuhaus - Wed, 2014-09-10 18:55

We started the Documentum upgrade in the wintertime and our jobs ran successfully, following the defined schedule. Once we moved to summertime we hit an issue: a job that was scheduled, for instance, at 4:00 AM was executed at 4:00 AM, but then also started every 2 minutes until 5:00 AM. We had this issue on all our 6.7SP2P009 repositories - upgraded as well as newly created ones.

Before opening an SR on Powerlink, I first checked the date and time with the following query.

On the content server using idql:

 

1> select date(now) as date_now from dm_docbase_config
2> go
date_now          
------------------------
4/9/2014 17:47:55       
(1 row affected)
1>

 

The date and time were correct. EMC confirmed a bug and asked us to install Patch 12, which solved the issue.

  Patch 12 and D2EventSenderMailMethod

Unfortunately, Patch 12 introduced a bug in D2EventSenderMailMethod, which stopped working: mails could no longer be sent out. D2EventSenderMailMethod is a requirement for D2. It is used for D2 mails but also for some workflow functionality. By default, if the event is not managed by D2 (i.e. configured), the default Documentum mail method is executed, EMC said.

To test the mail issue, I used the dm_ContentWarning job by setting the -percent_full parameter to 5 (lower than the value displayed by df -k).

In $DOCUMENTUM/dba/log//MethodServer/test67.log the following error was displayed:

 

Wrong number of arguments (31) passed to entry point 'Mail'.

 

And by setting the trace flag for the dm_event_sender method we saw:


2014-05-08T12:53:45.504165      7260[7260]      0100007b8000c978        TRACE LAUNCH [MethodServer]: ./dmbasic -f./dm_event_sender.ebs -eMail  --   "test67"  "xxx May 08 12:53:25 2014"  "DM_SYSADMIN"  "Take a look at /dev/mapper/vg00-lvpkgs--it's 81% full!!!"  "ContentWarning"  "0900007b8000aeb3"  "nulldate"  "10"  "dm_null_id"  " "  "dmsd"  "test67"  "event"  " "  "test67" ""  "undefined"  "dmsd"  "1b00007b80003110"  "5/8/2014 12:53:28"  "0"  "dm_document"  "text"  "3741"  "text"  "cs.xxy.org"  "80"  ""  "/soft/opt/documentum/share/temp/3799691ad29ffd73699c0e85b792ea66"  "./dm_mailwrapper.sh"  " " dmProcess::Exec() returns: 1

 

It worked when I updated the dm_server_config object:

 

update dm_server_config objects set mail_method = 'dm_event_template_sender'
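
For completeness, the change can be verified afterwards from idql (a hedged sketch; mail_method is the attribute updated above):

1> select object_name, mail_method from dm_server_config
2> go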

 

EMC confirmed that this is a bug; it should be fixed with D2 3.1 P05.

Using Oracle GoldenGate for Trickle-Feeding RDBMS Transactions into Hive and HDFS

Rittman Mead Consulting - Wed, 2014-09-10 15:13

A few months ago I wrote a post on the blog around using Apache Flume to trickle-feed log data into HDFS and Hive, using the Rittman Mead website as the source for the log entries. Flume is a good technology to use for this type of capture requirement as it captures log entries, HTTP calls, JMS queue entries and other “event” sources easily, has a resilient architecture and integrates well with HDFS and Hive. But what if the source you want to capture activity for is a relational database, for example Oracle Database 12c? With Flume you’d need to spool the database transactions to file, whereas what you really want is a way to directly connect to the database engine and capture the changes from source.

Which is exactly what Oracle GoldenGate does, and what most people don’t realise is that GoldenGate can also load data into HDFS and Hive, as well as the usual database targets. Hive and HDFS aren’t fully-supported targets yet, but you can use the Oracle GoldenGate for Java adapter to act as the handler process and then land the data in HDFS files or Hive tables on your target Hadoop platform. My Oracle Support has two tech notes, “Integrating OGG Adapter with Hive (Doc ID 1586188.1)” and “Integrating OGG Adapter with HDFS (Doc ID 1586210.1)”, that give example implementations of the Java adapters you’d need for these two target types, with the overall end-to-end process for landing Hive data looking like the diagram below (the HDFS one just swaps in HDFS for Hive at the handler adapter stage).

NewImage

This is also a good example of the sorts of technology we’d use to implement the “data factory” concept within the new Oracle Information Management Reference Architecture, the part of the architecture that moves data between the Hadoop- and NoSQL-based Data Reservoir and the relationally-stored enterprise information store; in this case, trickle-feeding transactional data from the Oracle database into Hadoop, perhaps to archive it at lower cost than we could in an Oracle database, or to add transaction activity data to a Hadoop-based application.

NewImage

So I asked my colleague Nelio Guimaraes to set up a GoldenGate capture process on our Cloudera CDH5.1 Hadoop cluster, using GoldenGate 12.1.2.0.0 for our source Oracle 11gR2 database and Oracle GoldenGate for Java, downloadable separately from edelivery.oracle.com under Oracle Fusion Middleware > Oracle GoldenGate Application Adapters 11.2.1.0.0 for JMS and Flat File Media Pack. In our example, we’re going to capture activity on the SCOTT.DEPT table in the Oracle database, and then perform the following steps to set up replication from it into a replica Hive table:

  1. Create a table in Hive that corresponds to the table in the Oracle database.
  2. Create a table in the Oracle database and prepare the table for replication.
  3. Configure the Oracle GoldenGate Capture to extract transactions from the Oracle database and create the trail file.
  4. Configure the Oracle GoldenGate Pump to read the trail and invoke the custom adapter.
  5. Configure the property file for the Hive handler.
  6. Code, compile and package the custom Hive handler.
  7. Execute a test. 
Setting up the Oracle Database Source Capture

Let’s go into the Oracle database first, check the table definition, and then connect to Hadoop to create a Hive table of the same column definition.

[oracle@centraldb11gr2 ~]$ sqlplus scott/tiger
SQL*Plus: Release 11.2.0.3.0 Production on Thu Sep 11 01:08:49 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
SQL> describe DEPT
 Name Null? Type
 ----------------------------------------- -------- ----------------------------
 DEPTNO NOT NULL NUMBER(2)
 DNAME VARCHAR2(14)
 LOC VARCHAR2(13)
SQL> exit
...
[oracle@centraldb11gr2 ~]$ ssh oracle@cdh51-node1
Last login: Sun Sep 7 16:11:36 2014 from officeimac.rittmandev.com
[oracle@cdh51-node1 ~]$ hive
...
create external table dept
(
 DEPTNO string, 
 DNAME string, 
 LOC string
) row format delimited fields terminated by '\;' stored as textfile
location '/user/hive/warehouse/department'; 
exit
...

Then I install Oracle Golden Gate 12.1.2 on the source Oracle database, just as you’d do for any Golden Gate install, and make sure supplemental logging is enabled for the table I’m looking to capture. Then I go into the ggsci Golden Gate command-line utility, to first register the user it’ll be connecting as, and what table it needs to capture activity for.

[oracle@centraldb11gr2 12.1.2]$ cd /u01/app/oracle/product/ggs/12.1.2/
[oracle@centraldb11gr2 12.1.2]$ ./ggsci
$ggsci> DBLOGIN USERID sys@ctrl11g, PASSWORD password sysdba
$ggsci> ADD TRANDATA SCOTT.DEPT COLS(DEPTNO), NOKEY

GoldenGate uses a number of components to replicate data from source to targets, as shown in the diagram below.

NewImage

For our purposes, though, there are just three that we need to configure: the Extract component, which captures table activity on the source; the Pump process that moves data (or the “trail”) from source database to the Hadoop cluster; and the Replicat component that takes that activity and applies it to the target tables. In our example, the extract and pump processes will be as normal, but we need to create a custom “handler” for the target Hive table that uses the Golden Gate Java API and the Hadoop FS Java API.

The tool we use to set up the extract and capture process is ggsci, the command-line Golden Gate Software Command Interface. I’ll use it first to set up the Manager process that runs on both source and target servers, giving it a port number and connection details into the source Oracle database.

$ggsci> edit params mgr
PORT 7809
USERID sys@ctrl11g, PASSWORD password sysdba
PURGEOLDEXTRACTS /u01/app/oracle/product/ggs/12.1.2/dirdat/*, USECHECKPOINTS

Then I create two configuration files, one for the extract process and one for the pump process, and then use those to start those two processes.

$ggsci> edit params ehive
EXTRACT ehive
USERID sys@ctrl11g, PASSWORD password sysdba
EXTTRAIL /u01/app/oracle/product/ggs/12.1.2/dirdat/et, FORMAT RELEASE 11.2
TABLE SCOTT.DEPT;
$ggsci> edit params phive
EXTRACT phive
RMTHOST cdh51-node1.rittmandev.com, MGRPORT 7809
RMTTRAIL /u01/app/oracle/product/ggs/11.2.1/dirdat/rt, FORMAT RELEASE 11.2
PASSTHRU
TABLE SCOTT.DEPT;
$ggsci> ADD EXTRACT ehive, TRANLOG, BEGIN NOW
$ggsci> ADD EXTTRAIL /u01/app/oracle/product/ggs/12.1.2/dirdat/et, EXTRACT ehive
$ggsci> ADD EXTRACT phive, EXTTRAILSOURCE /u01/app/oracle/product/ggs/12.1.2/dirdat/et
$ggsci> ADD RMTTRAIL /u01/app/oracle/product/ggs/11.2.1/dirdat/rt, EXTRACT phive

As the Java event handler on the target Hadoop platform won’t ordinarily be able to get table metadata for the source Oracle database, we’ll use the defgen utility on the source platform to create the definitions file that the replicat process will need.

$ggsci> edit params dept
defsfile ./dirsql/DEPT.sql
USERID ggsrc@ctrl11g, PASSWORD ggsrc
TABLE SCOTT.DEPT;
./defgen paramfile ./dirprm/dept.prm NOEXTATTR

Note that NOEXTATTR means no extra attributes; because the version on target is a generic and minimal version, the definition file with extra attributes won’t be interpreted. Then, this DEPT.sql file will need to be copied across to the target Hadoop platform where you’ve installed Oracle GoldenGate for Java, to the /dirsql folder within the GoldenGate install. 

[oracle@centraldb11gr2 12.1.2]$ ssh oracle@cdh51-node1
oracle@cdh51-node1's password: 
Last login: Wed Sep 10 17:05:49 2014 from centraldb11gr2.rittmandev.com
[oracle@cdh51-node1 ~]$ cd /u01/app/oracle/product/ggs/11.2.1/
[oracle@cdh51-node1 11.2.1]$ pwd
/u01/app/oracle/product/ggs/11.2.1
[oracle@cdh51-node1 11.2.1]$ ls dirsql/
DEPT.sql

Then, going back to the source Oracle database platform, we’ll start the Golden Gate Manager process, and then the extract and pump processes.

[oracle@cdh51-node1 11.2.1]$ ssh oracle@centraldb11gr2
oracle@centraldb11gr2's password: 
Last login: Thu Sep 11 01:08:18 2014 from bdanode1.rittmandev.com
GGSCI (centraldb11gr2.rittmandev.com) 7> start mgr
Manager started.
 
GGSCI (centraldb11gr2.rittmandev.com) 8> start ehive
 
Sending START request to MANAGER ...
EXTRACT EHIVE starting
 
GGSCI (centraldb11gr2.rittmandev.com) 9> start phive
 
Sending START request to MANAGER ...
EXTRACT PHIVE starting

Setting up the Hadoop / Hive Replicat Process

Setting up the Hadoop side involves a couple of similar steps to the source capture side; first we configure the parameters for the Manager process, then configure the extract process that will pull table activity off of the trail file, sent over by the pump process on the source Oracle database.

[oracle@centraldb11gr2 12.1.2]$ ssh oracle@cdh51-node1
oracle@cdh51-node1's password: 
Last login: Wed Sep 10 21:09:38 2014 from centraldb11gr2.rittmandev.com
[oracle@cdh51-node1 ~]$ cd /u01/app/oracle/product/ggs/11.2.1/
[oracle@cdh51-node1 11.2.1]$ ./ggsci
$ggsci> edit params mgr
PORT 7809
PURGEOLDEXTRACTS /u01/app/oracle/product/ggs/11.2.1/dirdat/*, usecheckpoints, minkeepdays 3
$ggsci> add extract tphive, exttrailsource /u01/app/oracle/product/ggs/11.2.1/dirdat/rt
$ggsci> edit params tphive
EXTRACT tphive
SOURCEDEFS ./dirsql/DEPT.sql
CUserExit ./libggjava_ue.so CUSEREXIT PassThru IncludeUpdateBefores
GETUPDATEBEFORES
TABLE SCOTT.DEPT;

Now it’s time to create the Java handler that will write the trail data to the HDFS files and Hive table. The My Oracle Support Doc ID 1586188.1 I mentioned at the start of the article has a sample Java program called SampleHandlerHive.java that writes incoming transactions into an HDFS file within the Hive directory, and also writes them to a file on the local filesystem. To get this working on our Hadoop system, we created a new Java source code file from the content in SampleHandlerHive.java, updated the paths passed to hadoopConf.addResource to point to the correct location of core-site.xml, hdfs-site.xml and mapred-site.xml, and then compiled it as follows:
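To illustrate that edit, here is a hedged fragment of what the configuration set-up in such a handler looks like; the wrapper class and method are made up, the addResource/FileSystem calls are the standard Hadoop client API, and the file paths are assumptions for a typical CDH install:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConfSketch {
    // Sketch of the hadoopConf set-up edited in SampleHandlerHive.java
    public static FileSystem openHdfs() throws java.io.IOException {
        Configuration hadoopConf = new Configuration();
        // Point these at the cluster's actual client configuration files
        hadoopConf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        hadoopConf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        hadoopConf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
        return FileSystem.get(hadoopConf); // connect using the loaded settings
    }
}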

export CLASSPATH=/u01/app/oracle/product/ggs/11.2.1/ggjava/ggjava.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/*
javac -d . SampleHandlerHive.java

Successfully executing the above command created SampleHandlerHive.class under /u01/app/oracle/product/ggs/11.2.1/dirprm/com/mycompany/bigdata. To create the JAR file that the GoldenGate for Java adapter will need, I then need to change directory to the /dirprm directory under the Golden Gate install, and then run the following commands:

jar cvf myhivehandler.jar com
chmod 755 myhivehandler.jar

I also need to create a properties file for this JAR to use, in the same /dirprm directory. This properties file amongst other things tells the Golden Gate for Java adapter where in HDFS to write the data to (the location where the Hive table keeps its data files), and also references any other JAR files from the Hadoop distribution that it’ll need to get access to.

[oracle@cdh51-node1 dirprm]$ cat tphive.properties 
#Adapter Logging parameters. 
gg.log=log4j
gg.log.level=info
 
#Adapter Check pointing  parameters
goldengate.userexit.chkptprefix=HIVECHKP_
goldengate.userexit.nochkpt=true
 
# Java User Exit Property
goldengate.userexit.writers=jvm
jvm.bootoptions=-Xms64m -Xmx512M -Djava.class.path=/u01/app/oracle/product/ggs/11.2.1/ggjava/ggjava.jar:/u01/app/oracle/product/ggs/11.2.1/dirprm:/u01/app/oracle/product/ggs/11.2.1/dirprm/myhivehandler.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/hadoop-common-2.3.0-cdh5.1.0.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/etc/hadoop:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/etc/hadoop/conf.dist:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/hadoop-auth-2.3.0-cdh5.1.0.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/hadoop-hdfs-2.3.0-cdh5.1.0.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/protobuf-java-2.5.0.jar
 
#Properties for reporting statistics
# Minimum number of {records, seconds} before generating a report
jvm.stats.time=3600
jvm.stats.numrecs=5000
jvm.stats.display=TRUE
jvm.stats.full=TRUE
 
#Hive Handler.  
gg.handlerlist=hivehandler
gg.handler.hivehandler.type=com.mycompany.bigdata.SampleHandlerHive
gg.handler.hivehandler.HDFSFileName=/user/hive/warehouse/department/dep_data
gg.handler.hivehandler.RegularFileName=cinfo_hive.txt
gg.handler.hivehandler.RecordDelimiter=;
gg.handler.hivehandler.mode=tx

Now, the final step on the Hadoop side is to start its Golden Gate Manager process, and then start the Replicat and apply process.

GGSCI (cdh51-node1.rittmandev.com) 5> start mgr
 
Manager started. 
 
GGSCI (cdh51-node1.rittmandev.com) 6> start tphive
 
Sending START request to MANAGER ...
EXTRACT TPHIVE starting

Testing it All Out

Now that I’ve got the extract and pump processes running on the Oracle Database side, and the apply process running on the Hadoop side, let’s do a quick test and see if it’s working. I’ll start by looking at what data is in each table at the beginning.

SQL> select * from dept;     

    DEPTNO DNAME  LOC
 ---------- -------------- -------------

10 ACCOUNTING  NEW YORK
20 RESEARCH  DALLAS
30 SALES  CHICAGO
40 OPERATIONS  BOSTON
50 TESTE  PORTO
60 NELIO  STS
70 RAQUEL  AVES
 
7 rows selected.

Over on the Hadoop side, there’s just one row in the Hive table:

hive> select * from customer;

OK 80MARCIA   ST

Now I’ll go back to Oracle and insert a new row in the DEPT table:

SQL> insert into dept (deptno, dname, loc)
  2  values (75, 'EXEC','BRIGHTON'); 

1 row created. 
SQL> commit; 

Commit complete.

And, going back over to Hadoop, I can see Golden Gate has added that record to the Hive table, by the Golden Gate for Java adapter writing the transaction to the underlying HDFS file.

hive> select * from customer;

OK 80MARCIA   ST
75 EXEC       BRIGHTON

So there you have it; Golden Gate replicating Oracle RDBMS transactions into HDFS and Hive, to complement Apache Flume’s ability to replicate log and event data into Hadoop. Moreover, as Michael Rainey explained in this three-part blog series, Golden Gate is closely integrated into the new 12c release of Oracle Data Integrator, making it even easier to manage Golden Gate replication processes within your overall data loading project, and giving Hadoop developers and Golden Gate users access to the full set of load orchestration and data quality features in that product rather than having to rely on home-grown scripting, or Oozie.

Categories: BI & Warehousing

Oracle: How to move OMF datafiles in 11g and 12c

Yann Neuhaus - Wed, 2014-09-10 13:22

With OMF datafiles, you don't manage the datafile names. But how do you set the destination when you want to move them to another mount point? Let's see how easily (and online) it works in 12c, and how to do it with minimal downtime in 11g.

 

Testcase

Let's create a tablespace with two datafiles. It's OMF and goes into /u01:

 

SQL> alter system set db_create_file_dest='/u01/app/oracle/oradata' scope=memory;
System altered.

SQL> show parameter db_create_file_dest
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /u01/app/oracle/oradata

SQL> create tablespace DEMO_OMF datafile size 5M;
Tablespace created.

SQL> alter tablespace DEMO_OMF add datafile size 5M;
Tablespace altered.

 

And I want to move those files in /u02.

 

12c online move

Here is how I generate my MOVE commands for all datafiles in /u01:

 

set echo off linesize 1000 trimspool on pagesize 0 feedback off
spool _move_omf.rcv
prompt set echo on;;
prompt report schema;;
prompt alter session set db_create_file_dest='/u02/app/oracle/oradata';;
select 'alter database move datafile '||file_id||';' from dba_data_files where file_name like '/u01/%' 
/
prompt report schema;;
spool off

 

which generates the following:

 

set echo on;
report schema;
alter session set db_create_file_dest='/u02/app/oracle/oradata';
alter database move datafile 7;
alter database move datafile 2;
report schema;

 

This works straightforwardly and online. That is the right solution if you are on 12c Enterprise Edition. The OMF destination is set at session level here. The move is done online, without any lock. The only overhead is that writes occurred twice during the move operation. And in 12c we can run any SQL statement from RMAN, which is great.
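
As an aside, if you don't want to rely on the OMF destination, 12c also lets you name the target explicitly. A minimal sketch reusing the filename from the testcase (the target path is illustrative):

SQL> alter database move datafile
  2  '/u01/app/oracle/oradata/DEMO/datafile/o1_mf_demo_omf_b0vg07m8_.dbf'
  3  to '/u02/app/oracle/oradata/DEMO/demo_omf_01.dbf';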

 

11g backup as copy

How do you manage that in 11g? I like to do it with RMAN COPY. If you're in ARCHIVELOG mode then you can copy the datafiles one by one: back each one up as a copy, take it offline, switch to the copy, recover it, and bring it online. This is the fastest way. You can avoid the recovery step by putting the tablespace offline, but:

  • you will have to wait until the earliest open transaction finishes;
  • your downtime includes the whole copy, whereas when activity is low the recovery is probably faster.

 

Here is how I generate my RMAN commands for all datafiles in /u01:

 

set echo off linesize 1000 trimspool on pagesize 0 feedback off
spool _move_omf.rcv
prompt set echo on;;
prompt report schema;;
with files as (
 select file_id , file_name , bytes from dba_data_files where file_name like '/u01/%' and online_status ='ONLINE' 
)
select stmt from (
select 00,bytes,file_id,'# '||to_char(bytes/1024/1024,'999999999')||'M '||file_name||';' stmt from files
union all
select 10,bytes,file_id,'backup as copy datafile '||file_id||' to destination''/u02/app/oracle/oradata'';' stmt from files
union all
select 20,bytes,file_id,'sql "alter database datafile '||file_id||' offline";' from files
union all
select 30,bytes,file_id,'switch datafile '||file_id||' to copy;' from files
union all
select 40,bytes,file_id,'recover datafile '||file_id||' ;' from files
union all
select 50,bytes,file_id,'sql "alter database datafile '||file_id||' online";' from files
union all
select 60,bytes,file_id,'delete copy of datafile '||file_id||';' from files
union all
select 90,bytes,file_id,'report schema;' from files
union all
select 91,bytes,file_id,'' from files
order by 2,3,1
)
/

 

which generates the following:

 

set echo on;
report schema;
#          5M /u01/app/oracle/oradata/DEMO/datafile/o1_mf_demo_omf_b0vg07m8_.dbf;
backup as copy datafile 2 to destination'/u02/app/oracle/oradata';
sql "alter database datafile 2 offline";
switch datafile 2 to copy;
recover datafile 2 ;
sql "alter database datafile 2 online";
delete copy of datafile 2;
report schema;

 

(I have reproduced the commands for one datafile only here.)

And I can run it in RMAN. Run it as a cmdfile or in a run block so that it stops if an error is encountered. Of course, it's better to run the commands one by one and check that the datafiles are online at the end. Note that this does not apply to the SYSTEM tablespace, for which the database must be closed.

Online datafile move is my favorite Oracle 12c feature, and it's the first new feature that you will practice if you come to our 12c new features workshop. And in any version, RMAN is my preferred way to manipulate database files.

Brookings Institution analysis on student debt becoming a farce

Michael Feldstein - Wed, 2014-09-10 12:39

I have previously written about the deeply flawed Brookings Institution analysis on student debt with its oft-repeated lede:

These data indicate that typical borrowers are no worse off now than they were a generation ago …

Their data comes from the triennial Survey of Consumer Finances (SCF) by the Federal Reserve Board, with the original report based on 2010 data. With the release of the 2013 SCF data, the Brookings Institution put out an update this week to their report, and they continue with the lede:

The 2013 data confirm that Americans who borrowed to finance their educations are no worse off today than they were a generation ago. Given the rising returns to postsecondary education, they are probably better off, on average. But just because higher education is still a good investment for most students does not mean that high and rising college costs should be left unquestioned.

This conclusion is drawn despite the following observations of changes from 2010 – 2013 in their own update:

  • The share of young (age 20 – 40) households with student debt rose from 36% to 38%;
  • The average amount of debt per household rose 14%;
  • The distribution of debt holders rose by 50% for debt levels of $20k – $75k and dropped by 19% for debt levels of $1k – $10k; and
  • Wage income is stagnant, at roughly the same level as ~1999, yet debt amounts have risen by ~50% over that same period (see below).

Wage and borrowing over time

Brookings’ conclusion from this chart?

The upshot of the 2013 data is that households with education debt today are still no worse off than their counterparts were more than 20 years ago. Even though rising debt continued to cut into stagnant incomes, the average household with debt is better off than it used to be.

The strongest argument that Brookings presents is that the median monthly payment-to-income ratios have stayed fairly consistent at ~4% over the past 20 years. What they fail to mention is that households are taking much longer to pay off student loans now.

More importantly, the Brookings analysis ignores the simple and direct measurement of loan delinquency. See this footnote from the original report [emphasis added]:

These statistics are based on households that had education debt, annual wage income of at least $1,000, and that were making positive monthly payments on student loans. Between 24 and 36 percent of borrowers with wage income of at least $1,000 were not making positive monthly payments, likely due to use of deferment and forbearance …

That’s what I call selective data analysis. In the same SCF report that Brookings used for its update:

Delinquencies

The delinquency rate for student loans has gone up ~50% from 2010 to 2013!

How can anyone claim that Americans with student debt are no worse off when:

  • More people have student debt;
  • The average amount of debt has risen;
  • Wage income has not risen; and
  • The delinquency rate for student loans has risen.

None of the secondary spreadsheet jockeying from Brookings counters these basic facts. This ongoing analysis by Brookings on student debt is a farce.

The post Brookings Institution analysis on student debt becoming a farce appeared first on e-Literate.

What the Apple Watch Tells Us About the Future of Ed Tech

Michael Feldstein - Wed, 2014-09-10 12:20

Nothing.

So please, if you’re thinking about writing that post or article, don’t.

I’m begging you.

The post What the Apple Watch Tells Us About the Future of Ed Tech appeared first on e-Literate.

ADF BC View Object Change Notification Listener

Andrejus Baranovski - Wed, 2014-09-10 11:15
ADF BC allows you to define triggers that listen for row changes at the VO level. We can listen for row updates, inserts and deletes. This can be useful if you would like to invoke a specific audit method or call custom methods to populate dependent transient VOs with updated data.

To enable such triggers, you must add a listener for the VO; this can be done during VO creation, from the standard create method:


ADF BC API methods such as rowInserted, rowUpdated and rowDeleted can be overridden. These methods will be invoked automatically by the framework when a change happens. You can check the rowUpdated method: I'm getting the changed attribute names (the framework actually calls this method for each change separately), getting the changed value from the current row, and also retrieving the posted value:
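As a rough sketch of the pattern in code (the VO implementation class name here is hypothetical; the callbacks come from the oracle.jbo.RowSetListener interface, which ViewObjectImpl implements):

import oracle.jbo.DeleteEvent;
import oracle.jbo.InsertEvent;
import oracle.jbo.UpdateEvent;
import oracle.jbo.server.ViewObjectImpl;

public class EmployeesViewImpl extends ViewObjectImpl {

    @Override
    protected void create() {
        super.create();
        addListener(this); // register this VO as a listener for its own row events
    }

    @Override
    public void rowUpdated(UpdateEvent event) {
        // Called once per changed attribute, before data is posted to the DB
        System.out.println("Row updated, key: " + event.getRow().getKey());
    }

    @Override
    public void rowInserted(InsertEvent event) {
        System.out.println("Row inserted"); // the key may still be null here
    }

    @Override
    public void rowDeleted(DeleteEvent event) {
        System.out.println("Row deleted, key: " + event.getRow().getKey());
    }
}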


The CountryId attribute is set to be refreshed on update/insert; this means the change event should be triggered as well, even though we do not change this attribute directly:


We should do a test now. Change two attributes - Street Address and State Province - and press the Save button:


Method rowUpdated is invoked two times, the first time for the Street Address change (the method is invoked before data is posted to the DB):


The second invocation is for the State Province change. This means we can't get all changes in a single call; each change is logged separately. It would be much more useful to get all changes in the current row through a single call:


After data is posted, the Country ID attribute is updated - this change is tracked successfully:


Let's try to create a new row:


Method rowInserted is invoked; however, it doesn't get the data yet - the key is null:


Right after the rowInserted event, the rowUpdated event is called in the same transaction - we can access the data from that method. This means rowUpdated is generally more reliable:


Try to remove a record:


Method rowDeleted will be invoked; row data is accessed and the key is printed correctly:


Download sample application - ADFBCListenerApp.zip.

Webinar: 21st Century Education Goes Digital with Oracle WebCenter

Learn how The Digital Campus with WebCenter can address top-of-mind issues for creating exceptional digital learning experiences, put content in context for the user, and optimize business processes.

The global education market is undergoing a fundamental transformation — from the printed textbook and physical classroom to newer digital, online and mobile experiences. Today, students can learn anywhere, anytime, from anyone on any device, bridging administrative and academic systems into a single universal view.

Oracle WebCenter is at the center of innovation and engagement for any digital enterprise looking to empower exceptional experiences for students, faculty, administrators and researchers. It powerfully connects people, processes, and information with the most complete portfolio of portal, content management, Web experience management and collaboration technologies to enable student success.

Join this special event featuring the University of Pretoria, Fishbowl Solutions and Oracle, whose experts will illustrate successful design patterns and solution delivery for:

  • Student Portals. Create rich, interactive student experiences
  • Digital Repository. Deliver advanced content capture, tagging and sharing while securing enterprise data
  • Admissions. Leverage image capture and business process design to enable improved self-service

Attendees will benefit from the use-case insights and strategies of a world-renowned university as well as a pre-built solution approach from Oracle and solutions partner Fishbowl to enable a truly modern digital campus.

Audio information:

Dial-in numbers: U.S./Canada: 877-698-7943 (toll free); International: 706-679-0060 (chargeable)
Passcode: solutions2

Register Now

Sep 11, 2014
10:00 AM PT | 01:00 PM ET

The post Webinar: 21st Century Education Goes Digital with Oracle WebCenter appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Index Growing Larger Than The Table

Hemant K Chitale - Wed, 2014-09-10 08:52
Here is a very simple demonstration of a case where an Index can grow larger than the table.  This happens because the pattern of data deleted and inserted doesn't allow deleted entries to be reused.  For every 10 rows that are inserted, 7 rows are subsequently deleted after their status is changed to "Processed".  But the space for the deleted entries from the index cannot be reused.

SQL>
SQL>REM Demo Index growth larger than table !
SQL>
SQL>drop table hkc_process_list purge;

Table dropped.

SQL>
SQL>create table hkc_process_list
2 (transaction_id number,
3 status_flag varchar2(1),
4 last_update_date date,
5 transaction_type number,
6 details varchar2(25))
7 /

Table created.

SQL>
SQL>create index hkc_process_list_ndx
2 on hkc_process_list
3 (transaction_id, status_flag)
4 /

Index created.

SQL>
SQL>
SQL>REM Cycle 1 -------------------------------------
> -- create first 1000 transactions
SQL>insert into hkc_process_list
2 select rownum, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 3
Table HKC_PROCESS_LIST 5

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>REM Cycle 2 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
2 select rownum+1000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 7
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Cycle 3 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
2 select rownum+2000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 11
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Cycle 4 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
2 select rownum+3000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 15
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Latest State size -------------------------
> -- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 17
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>

Note how the Index grew from 3 blocks to 17 blocks, larger than the table that grew to 13 and seemed to have reached a "steady-state" at 13 blocks.

The Index is built on only 2 of the 5 columns of the table and these two columns are also "narrow" in that they are a number and a single character.  Yet it grows faster through the INSERT - DELETE - INSERT cycles.

Note the difference between the Index definition (built on TRANSACTION_ID as the leading column) and the pattern of DELETEs (which is on STATUS_FLAG).

Deleted rows leave "holes" in the index but these are entries that cannot be reused by subsequent
Inserts.  The Index is ordered on TRANSACTION_ID.  So if an Index entry for TRANSACTION_ID = n is deleted, the entry can be reused only for the same (or very close) TRANSACTION_ID.

Assume that an Index Leaf Block contains entries for TRANSACTION_IDs 1, 2, 3, 4 and so on upto 10.  If rows for TRANSACTION_IDs 2,3,5,6,8 and 9 are deleted but 1,4,7 and 10  are not deleted then the Leaf Block has "free" space for new rows only with TRANSACTION_IDs 2,3,5,6,8 and 9.  New rows with TRANSACTION_IDs 11 and above will take a new Index Leaf Block and not re-use the "free" space in the first Index Leaf Block.  The first Leaf Block remains with deleted entries that are not reused.
On the other hand, when rows are deleted from a Table Block, new rows can be reinserted into the same Table Block.  The Table is Heap Organised, not Ordered like the Index.  Therefore, new rows are permitted to be inserted into any Block(s) that contain space for those new rows -- e.g. blocks from which rows are deleted.  Therefore, after deleting TRANSACTION_IDs 2,3,5,6 from a Table Block, new TRANSACTION_IDs 11,12,13,14 can be re-inserted into the *same* Block.
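
Not part of the demo above, but worth noting: that "dead" space in the index can be reclaimed manually, for example with (standard Oracle syntax):

SQL> alter index hkc_process_list_ndx coalesce;
SQL> alter index hkc_process_list_ndx rebuild;

COALESCE merges sparse adjacent leaf blocks in place, while REBUILD recreates the index at its minimal size (the ONLINE clause, on Enterprise Edition, avoids blocking DML during the rebuild).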

Categories: DBA Blogs

OOW - Focus On Support and Services for EBS

Chris Warticki - Wed, 2014-09-10 08:00
Monday, Sep 29, 2014

Conference Sessions

  • Best Practices for Upgrading to Oracle E-Business Suite 12: Customer Insights
    Damon Venger, Sr. IT Director, ERP Systems, Office Depot; Pamela Fisher Alexander, Consulting Senior Director, Oracle
    10:15 AM - 11:00 AM, Intercontinental - Grand Ballroom A (CON8157)
  • Prevention: Best Practices for Proactively Supporting Oracle E-Business Suite
    Deidre Engstrom, Sr. Director, EBS Proactive Support, Oracle
    1:30 PM - 2:15 PM, Moscone West - 2020 (CON8575)
  • Upgrading to Oracle E-Business Suite 12.1.3: Tips from ADP
    Mukarram Mohammed, DBA Manager, ADP; Ed Fleming, Director, ACS Service Management, Oracle; Sushil Motwani, Senior Principal Technical Account Manager, Oracle
    5:15 PM - 6:00 PM, Intercontinental - Grand Ballroom C (CON5061)

Tuesday, Sep 30, 2014

Conference Sessions

  • Fast-Track Big Data Implementation with the Oracle Big Data Platform
    Suraj Krishnan, Director, Applications & Middleware, Oracle; Jegannath Sundarapandian, Technical Lead, Oracle
    10:45 AM - 11:30 AM, Intercontinental - Union Square (CON7183)

Wednesday, Oct 01, 2014

Conference Sessions

  • Is Your Organization Trying to Focus on an ERP Cloud Strategy?
    Patricia Burke, Director, Oracle; Bary Dyer, Vice President, Oracle
    10:00 AM - 10:45 AM, Westin Market Street - Concordia (CON7614)
  • Compensation in the Cloud: Proven Business Case
    ARUL_SENAPATHI@AJG.COM, Director, Global Oracle HRIS; Rich Isola, Sr. Practice Director, Oracle; Kishan Kasety, Consulting Technical Manager, Oracle
    12:30 PM - 1:15 PM, Palace - Gold Ballroom (CON2709)
  • Proactive Support Best Practices: Oracle E-Business Suite Payables and Payments
    Stephen Horgan, Senior Principal Technical Support Engineer, Oracle; Andrew Lumpe, Senior Principal Support Engineer, Oracle
    2:00 PM - 2:45 PM, Moscone West - 3006 (CON8479)
  • Succession and Talent Review at Newfield Exploration
    Blane Kingsmore, HRIS Manager, Newfield Exploration; Rich Isola, Sr. Practice Director, Oracle; Louann Weaver, Practice Director, Oracle
    3:00 PM - 3:45 PM, Palace - Gold Ballroom (CON2712)

Thursday, Oct 02, 2014

Conference Sessions

  • Oracle E-Business Suite Architecture Best Practices: Tips from CBS
    John Basone, CBS; Greg Jerry, Director - Oracle Enterprise Architecture, Oracle
    12:00 PM - 12:45 PM, Marriott Marquis - Salon 4/5/6* (CON3829)
  • Latin America Transformational Success Stories
    Juan Gutierrez, GVP LAD Consulting, Oracle
    1:15 PM - 2:00 PM, Moscone West - 3004 (CON7348)
  • Best Practices for Patching and Maintaining Oracle E-Business Suite
    Jason Brincat, Senior Principal Technical Support Engineer, Oracle; Bill Burbage, Sr Principal Technical Support Engineer, Oracle
    1:15 PM - 2:00 PM, Moscone West - 3006 (CON8478)

My Oracle Support Monday Mix

Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year’s gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3 minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix
6:00 PM - 8:00 PM, ThirstyBear Brewing Company

Oracle Support Stars Bar & Mini Briefing Center

Monday, Sep 29

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908

Tuesday, Sep 30

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908

Wednesday, Oct 01

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 3:45 PM, Moscone West Exhibition Hall, 3461 and 3908

To secure a seat in a session, please use Schedule Builder to add to your Schedule.

APEX - Default Branch on Submit

Denes Kubicek - Wed, 2014-09-10 06:55
Yesterday I spent two hours debugging one of my applications and searching for an answer. Of course, the problem was caused by myself. Here is what happened:

I copied a page and deleted the region, including its buttons and items, using the Cascade to Delete Buttons option.



Then I created a form with an on-submit process to upload files into my table. Everything was working fine, except that after the process the page would show up empty with the funny

wwv_flow.accept

message.



Looking at the page, I couldn't see anything that would point to the actual problem. The process was there and it was running, and the branch was there. Only after a while did I look into the branch and see that it was conditional - firing when the deleted button was pressed.



Then I remembered that this wasn't the first time I had had this problem. I think this is a bug, and it has to do with the File Browse item. I expected the page to redirect after submit, since this feature has been there since 4.0. However, it didn't. This issue seems to be fixed in 5.0.

And one more thing to add: it does make sense to create conditional branches, but it doesn't make sense to create unconditional ones. The feature of creating unconditional branches to the actual page where the branch resides should be removed completely.

You can see it in "action" here
Categories: Development

Getting Started with Windows VDI by Andrew Fryer

Surachart Opun - Wed, 2014-09-10 05:55
Virtual desktop infrastructure (VDI) is the practice of hosting a desktop operating system within a virtual machine (VM) running on a centralized server. VDI is a variation on the client/server computing model, sometimes referred to as server-based computing.
VDI is a technology that brings many benefits:
• Efficient use of CPU and memory resources
• Reduced desktop downtime and increased availability
• Patches and upgrades performed in data center
• New users can be up and running quickly
• Data and applications reside in secure data centers
• Centralized management reduces operational expenses
Reference
Additionally, VDI can be deployed with Microsoft Windows; I suggest learning about What’s New in VDI for Windows Server 2012 R2 and 8.1.
Anyway, that is enough introduction before mentioning a book written by Andrew Fryer: Getting Started with Windows VDI. This book guides readers through building VDI with Windows Server 2012 R2 and 8.1 quickly, and each chapter is easy to follow.

What Readers Will Learn:
  • Explore the various server roles and features that provide Microsoft's VDI solution
  • Virtualize desktops and the other infrastructure servers required for VDI using server virtualization in Windows Server Hyper-V
  • Build high availability clusters for VDI with techniques such as failover clustering and load balancing
  • Provide secure VDI to remote users over the Internet
  • Use Microsoft's Deployment Toolkit and Windows Server Update Services to automate the creation and maintenance of virtual desktops
  • Carry out performance tuning and monitoring
  • Understand the complexities of VDI licensing irrespective of the VDI solution you have opted for
  • Deploy PowerShell to automate all of the above techniques (a small sketch follows this list)
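
Since the last point mentions PowerShell, here is a minimal sketch of the kind of automation involved, assuming Windows Server 2012 R2 and the standard RDS role names (the server names are hypothetical):

# Install the core Remote Desktop Services roles used by Microsoft's VDI stack
Install-WindowsFeature -Name RDS-Virtualization, RDS-Connection-Broker, RDS-Web-Access -IncludeManagementTools

# Create a VDI deployment across three servers (RemoteDesktop module)
New-RDVirtualDesktopDeployment -ConnectionBroker broker.contoso.com `
    -WebAccessServer web.contoso.com -VirtualizationHost vhost.contoso.com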

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Partner Webcast – Managing Exadata with Oracle Enterprise Manager 12c

Oracle Enterprise Manager 12c is system management software that delivers centralized monitoring, administration, and life cycle management functionality for the complete Oracle IT infrastructure,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

OBIEE SampleApp in The Cloud: Importing VirtualBox Machines to AWS EC2

Rittman Mead Consulting - Wed, 2014-09-10 01:40

Virtualisation has revolutionised how we work as developers. A decade ago, using new software would mean trying to find room on a real tin server to install it, hoping it worked, and if it didn’t, picking apart the pieces probably leaving the server in a worse state than it was to begin with. Nowadays, we can just launch a virtual machine to give a clean environment and if it doesn’t work – trash it and start again.
The sting in the tail of virtualisation is that full-blown VMs are heavy – for disk we need several GB just for a blank OS, and dozens of GB if you’re talking about a software stack such as Fusion Middleware (FMW), and the host machine needs to have the RAM and CPU to support it all too. Technologies such as Linux Containers go some way to making things lighter by abstracting out a chunk of the OS, but this isn’t something that’s reached the common desktop yet.

So whilst VMs are awesome, it’s not always practical to maintain a library of all of them on your local laptop (even 1TB drives fill up pretty quickly), nor will your laptop have the grunt to run more than one or two VMs at most. VMs like this are also local to your laptop or server – but wouldn’t it be neat if you could duplicate that VM and make a server based on it instantly available to anyone in the world with an internet connection? And that’s where The Cloud comes in, because it enables us to store as much data as we can eat (and pay for), and provision “hardware” at the click of a button for just as long as we need it, accessible from anywhere.

Here at Rittman Mead we make extensive use of Amazon Web Services (AWS) and their Elastic Computing Cloud (EC2) offering. Our website runs on it, our training servers run on it, and it scales just as we need it to. A class of 3 students is as easy to provision for as a class of 24 – no hunting around for spare servers or laptops, no hardware sat idle in a cupboard as spare capacity “just in case”.

One of the challenges that we’ve faced up until now is that all servers have had to be built from scratch in the cloud. Obviously we work with development VMs on local machines too, so wouldn’t it be nice if we could build VMs locally and then push them to the cloud? Well, now we can. Amazon offer a route to import virtual machines, and in this article I’m going to show how that works. I’ll use the superb SampleApp v406 VM that Oracle provide, because this is a great real-life example of a VM that is so useful, but many developers can find too memory-intensive to be able to run on their local machines all the time.

This tutorial is based on exporting a Linux guest VM from a Linux host server. A Windows guest probably behaves differently, but a Mac or Windows host should work fine since VirtualBox is supported on both. The specifics are based on SampleApp, but the process should be broadly the same for all VMs. 

Obtain the VM

We’re going to use SampleApp, which can be downloaded from Oracle.

  1. Download the six-part archive from http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html
  2. Verify the md5 checksums against those published on the download page:
    [oracle@asgard sampleapp406]$ ll
    total 30490752
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 01:33 SampleAppv406.zip.001
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 01:30 SampleAppv406.zip.002
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 02:03 SampleAppv406.zip.003
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 02:34 SampleAppv406.zip.004
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 02:19 SampleAppv406.zip.005
    -rw-r--r-- 1 oracle oinstall 4977591522 Sep  9 02:53 SampleAppv406.zip.006
    [oracle@asgard sampleapp406]$ md5sum *
    2b9e11f69ada5f889088dd74b5229322  SampleAppv406.zip.001
    f8a1a5ae6162b20b3e9c6c888698c071  SampleAppv406.zip.002
    68438cfea87e8d3a2e2f15ff00dadf12  SampleAppv406.zip.003
    b71d9ace4f75951198fc8197da1cfe62  SampleAppv406.zip.004
    4f1a5389c9e0addc19dce6bbc759ec20  SampleAppv406.zip.005
    2c430f87e22ff9718d5528247eff2da4  SampleAppv406.zip.006
  3. Unpack the archive using 7zip — the instructions for SampleApp are very clear that you must use 7zip, and not another archive tool such as winzip.
    [oracle@asgard sampleapp406]$ time 7za x SampleAppv406.zip.001
    7-Zip (A) [64] 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18
    p7zip Version 9.20 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,80 CPUs)
    
    Processing archive: SampleAppv406.zip.001
    
    Extracting SampleAppv406Appliance
    Extracting SampleAppv406Appliance/SampleAppv406ga-disk1.vmdk
    Extracting SampleAppv406Appliance/SampleAppv406ga.ovf
    
    Everything is Ok
    
    Folders: 1
    Files: 2
    Size: 31191990916
    Compressed: 5242880000
    
    real 1m53.685s
    user 0m16.562s
    sys 1m15.578s
  4. Because we need to change a couple of things on the VM first (see below), we’ll have to import the VM to VirtualBox so that we can boot it up and make these changes. You can import using the VirtualBox GUI, or, as I prefer, the VBoxManage command line interface. I like to time all these things (just because, numbers), so stick a time command on the front:
    time VBoxManage import --vsys 0 --eula accept SampleAppv406Appliance/SampleAppv406ga.ovf

    This took 12 minutes or so, but that was on a high-spec system, so YMMV.
    [...]
    0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
    Successfully imported the appliance.
    
    real    12m15.434s
    user    0m1.674s
    sys     0m2.807s
Preparing the VM

Importing Linux VMs to Amazon EC2 will only work if the kernel is supported, which according to an AWS blog post includes Red Hat Enterprise Linux 5.1 – 6.5. Whilst SampleApp v406 is built on Oracle Linux 6.5 (which isn’t listed by AWS as supported), we have the option of telling the VM to use a kernel that is Red Hat Enterprise Linux compatible (instead of the default Unbreakable Enterprise Kernel – UEK). There are some other pre-requisites that you need to check if you’re trying this with your own VM, including a network adaptor configured to use DHCP. The aforementioned blog post has details.
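
A quick way to check which kernel a VM is currently running is uname -r. If the version string carries a uek suffix then you're on the Unbreakable Enterprise Kernel and the import will be rejected; the version shown below is illustrative only:

$ uname -r
2.6.39-400.109.1.el6uek.x86_64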

  1. Boot the VirtualBox VM, which should land you straight in the desktop environment, logged in as the oracle user.
  2. We need to modify a file as root (superuser). Here’s how to do it graphically, or use vi if you’re a real programmer:
    1. Open a Terminal window from the toolbar at the top of the screen
    2. Enter
      sudo gedit /etc/grub.conf

      The sudo bit is important, because it tells Linux to run the command as root. (I’m on an xkcd-roll here: 1, 2)

    3. In the text editor that opens, you will see a header to the file and then a set of repeating sections beginning with title. These are the available kernels that the machine can run under. The default is 3, which is zero-based, so it’s the fourth title section. Note that the kernel version details include uek which stands for Unbreakable Enterprise Kernel – and is not going to work on EC2.
    4. Change the default to 0, so that we’ll instead boot to a Red Hat Compatible Kernel, which will work on EC2 (if you’d rather script this edit, there’s a sketch just after this list)
    5. Save the file
  3. Optional steps:
    1. Whilst you’ve got the server running, add your SSH key to the image so that you can connect to it easily once it is up on EC2. For more information about SSH keys, see my previous blog post here, and a step-by-step for doing it on SampleApp here.
    2. Disable non-SSH key logins (in /etc/ssh/sshd_config, set PasswordAuthentication no and PubkeyAuthentication yes), so that your server once on EC2 is less vulnerable to attack. Particularly important if you’re using the stock image with Admin123 as the root password.
    3. Set up screen, and OBIEE and the database as a Linux service, both covered in my article here.
  4. Shutdown the instance by entering this at a Terminal window:

    sudo shutdown -h now
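
If you'd rather script that grub.conf edit than make it by hand, a one-liner along these lines should do it. This is a sketch only: it assumes the file really does contain a default=3 line as described in step 2, so eyeball the file first.

# switch the default boot entry from 3 (UEK) to 0 (the Red Hat compatible kernel)
# -i.bak keeps a backup copy of the original grub.conf
sudo sed -i.bak 's/^default=3/default=0/' /etc/grub.conf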

Export the VirtualBox VM to Amazon EC2

Now we’re ready to really get going. The first step is to export the VirtualBox VM to a format that Amazon EC2 can work with. Whilst they don’t explicitly support VMs from VirtualBox, they do support the VMDK format – which VirtualBox can create. You can do the export from the graphical interface, or as before, from the command line:

time VBoxManage export "OBIEE SampleApp v406" --output OBIEE-SampleApp-v406.ovf

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully exported 1 machine(s).

real    56m51.426s
user    0m6.971s
sys     0m12.162s

If you compare the result of this to what we downloaded from Oracle it looks pretty similar – an OVF file and a VMDK file. The only difference is that the VMDK file is updated with the changes we made above, including the modified kernel settings which are crucial for the success of the next step.

[oracle@asgard sampleapp406]$ ls -lh
total 59G
-rw------- 1 oracle oinstall  30G Sep  9 10:55 OBIEE-SampleApp-v406-disk1.vmdk
-rw------- 1 oracle oinstall  15K Sep  9 09:58 OBIEE-SampleApp-v406.ovf

We’re ready now to get all cloudy. For this, you’ll need:

  1. An AWS account
    1. You’ll also need your AWS account’s Access Key and Secret Key
  2. AWS EC2 commandline tools installed, along with a Java Runtime Environment (JRE) 1.7 or greater:

    wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
    sudo mkdir /usr/local/ec2
    sudo unzip ec2-api-tools.zip -d /usr/local/ec2
    # You might need to fiddle with the following paths and version numbers: 
    sudo yum install -y java-1.7.0-openjdk.x86_64
    # note the quoted 'EOF': it stops $PATH and $EC2_HOME being expanded
    # here and now, rather than each time the profile is sourced
    cat >> ~/.bash_profile <<'EOF'
    export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.65.x86_64/jre"
    export EC2_HOME=/usr/local/ec2/ec2-api-tools-1.7.1.1/
    export PATH=$PATH:$EC2_HOME/bin
    EOF

  3. Set your credentials as environment variables:
    export AWS_ACCESS_KEY=xxxxxxxxxxxxxx
    export AWS_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxx
  4. Ideally a nice fat pipe to upload the VM file over, because at 30GB it is not trivial (not in 2014, anyway)
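
Before kicking off the upload it's worth sanity-checking that the tools and credentials are wired up correctly; an authentication error now is much nicer to debug than one 45 minutes into an upload. Any read-only EC2 command will do, assuming the environment variables above are set in the current shell; listing the regions is a cheap option:

ec2-describe-regions

You should get back one line per region, each beginning with REGION.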

What’s going to happen now is we use an EC2 command line tool to upload our VMDK (virtual disk) file to Amazon S3 (a storage platform), from where it gets converted into an EBS volume (Elastic Block Store, i.e. an EC2 virtual disk), and from there attached to a new EC2 instance (a “server”/”VM”).

Before we can do the upload we need an S3 “bucket” in which to put the disk image we’re uploading. You can create one from https://console.aws.amazon.com/s3/. In this example, I’ve got one called rmc-vms – but you’ll need your own.
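
If you'd rather stay on the command line than click through the console, a tool such as s3cmd can create the bucket too. A sketch, assuming s3cmd is installed and configured with the same AWS credentials:

s3cmd mb s3://rmc-vms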

Once the bucket has been created, we build the command line upload statement using ec2-import-instance:

time ec2-import-instance OBIEE-SampleApp-v406-disk1.vmdk --instance-type m3.large --format VMDK --architecture x86_64 --platform Linux --bucket rmc-vms --region eu-west-1 --owner-akid $AWS_ACCESS_KEY --owner-sak $AWS_SECRET_KEY

Points to note:

  • m3.large is the spec for the VM. You can see the available list here. In the AWS blog post it suggests only a subset will work with the import method, but I’ve not hit this limitation yet.
  • region is the AWS Region in which the EBS volume and EC2 instance will be built. I’m using eu-west-1 (Ireland), and it makes sense to use the one geographically closest to where you or your users are located. Still waiting for uk-yorks-1
  • architecture and platform relate to the type of VM you’re importing.

The upload process took just over 45 minutes for me, and that’s from a data centre with a decent upload:

[oracle@asgard sampleapp406]$ time ec2-import-instance OBIEE-SampleApp-v406-disk1.vmdk --instance-type m3.large --format VMDK --architecture x86_64 --platform Linux --bucket rmc-vms --region eu-west-1 --owner-akid $AWS_ACCESS_KEY --owner-sak $AWS_SECRET_KEY
Requesting volume size: 200 GB
TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  active  StatusMessage   Pending InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted       0       Status       active  StatusMessage   Pending : Downloaded 0
Creating new manifest at rmc-vms/d77672aa-0e0b-4555-b368-79d386842112/OBIEE-SampleApp-v406-disk1.vmdkmanifest.xml
Uploading the manifest file
Uploading 31191914496 bytes across 2975 parts
0% |--------------------------------------------------| 100%
   |==================================================|
Done
Average speed was 11.088 MBps
The disk image for import-i-fh08xcya has been uploaded to Amazon S3
where it is being converted into an EC2 instance.  You may monitor the
progress of this task by running ec2-describe-conversion-tasks.  When
the task is completed, you may use ec2-delete-disk-image to remove the
image from S3.

real    46m59.871s
user    10m31.996s
sys     3m2.560s

Once the upload has finished Amazon automatically converts the VMDK (now residing on S3) into an EBS volume, and then attaches it to a new EC2 instance (i.e. a VM). You can monitor the status of this task using ec2-describe-conversion-tasks, optionally filtered on the TaskId returned by the import command above:

ec2-describe-conversion-tasks --region eu-west-1 import-i-fh08xcya

TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  active  StatusMessage   Pending InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted       3898992128
Status  active  StatusMessage   Pending : Downloaded 31149971456

This is an ideal moment for a side note on the Linux utility watch, which simply re-issues a command for you every x seconds (2 by default). That way you can leave a window open and keep an eye on the progress of what is going to be a long-running job:

watch ec2-describe-conversion-tasks --region eu-west-1 import-i-fh08xcya

Every 2.0s: ec2-describe-conversion-tasks --region eu-west-1 import-i-fh08xcya                                                             Tue Sep  9 12:03:24 2014

TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  active  StatusMessage   Pending InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted       5848511808
Status  active  StatusMessage   Pending : Downloaded 31149971456

And whilst we’re at it, if you’re using a remote server to do this (as I am, to take advantage of the large bandwidth), you will find screen invaluable for keeping tasks running and being able to reconnect at will. You can read more about screen and watch here.

So back to our EC2 import job. To start with, the task will be Pending: (NB unlike lots of CLI tools, you read the output of this one left-to-right, rather than as columns with headings)

$ ec2-describe-conversion-tasks --region eu-west-1
TaskType        IMPORTINSTANCE  TaskId  import-i-ffvx6z86       ExpirationTime  2014-09-12T15:32:01Z    Status  active  StatusMessage   Pending InstanceID      i-b2245ef2
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   5021144064      VolumeSize      60      AvailabilityZone        eu-west-1a      ApproximateBytesConverted       4707330352      Status  active  StatusMessage   Pending : Downloaded 5010658304

After a few moments it gets underway, and you can see a Progress percentage indicator: (scroll right in the code snippet below to see)

TaskType        IMPORTINSTANCE  TaskId  import-i-fgr0djcc       ExpirationTime  2014-09-15T15:39:28Z    Status  active  StatusMessage   Progress: 53%   InstanceID      i-c7692e87
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   5582545920      VolumeId        vol-f71368f0    VolumeSize      20      AvailabilityZone        eu-west-1a      ApproximateBytesConverted       5582536640      Status  completed

Note that at this point you’ll also see an Instance in the EC2 list, but it won’t launch (no attached disk – because it’s still being imported!)

If something goes wrong you’ll see the Status as cancelled, as in this example, where the kernel in the VM was not a supported one (observe it is the UEK kernel, which Amazon doesn’t support):

TaskType        IMPORTINSTANCE  TaskId  import-i-ffvx6z86       ExpirationTime  2014-09-12T15:32:01Z    Status  cancelled       StatusMessage   ClientError: Unsupported kernel version 2.6.32-300.32.1.el5uek       InstanceID      i-b2245ef2
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   5021144064      VolumeId        vol-91b1c896    VolumeSize      60      AvailabilityZone        eu-west-1a      ApproximateBytesConverted    5021128688      Status  completed

After an hour or so, the task should complete:

TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  completed       InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeId        vol-a383f8a4    VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted       31191855472     Status  completed

At this point you can remove the VMDK from S3 (and should do, else you’ll continue to be charged for it), following the instructions for ec2-delete-disk-image.
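
From memory the command looks something like the one below, but treat it as a sketch and check the tool's built-in help for the exact flags before running it:

ec2-delete-disk-image -t import-i-fh08xcya -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY --region eu-west-1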

Booting the new server on EC2

Go to your EC2 control panel, where you should see an instance (EC2 term for “server”) in Stopped state and with no name.

Select the instance, and click Start on the Actions menu. After a few moments a Public IP will be shown in the details pane. But, we’re not home free quite yet…read on.

Firewalls

So this is where it gets a bit tricky. By default, the instance will have launched with Amazon’s Firewall (known as a Security Group) in place which – unless you have an existing AWS account and have modified the default security group’s configuration – is only open on port 22, which is for ssh traffic.

You need to head over to the Security Group configuration page, which can be accessed in several ways; the easiest is clicking on the security group name from the instance details pane:

Click on the Inbound tab and then Edit, and add “Custom TCP Rule” for the following ports:

  • 7780 (OBIEE front end)
  • 7001 (WLS Console / EM)
  • 5902 (Oracle VNC)

You can make things more secure by restricting access to the WLS admin (7001) and VNC (5902) ports to a specific IP address or range only.
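
The same EC2 command line tools used for the import can script these firewall changes instead of clicking through the console. A sketch, assuming your instance landed in the security group named default; swap 0.0.0.0/0 for your own address range on the admin ports:

# open the three ports listed above; narrow -s for 7001 and 5902 in particular
for port in 7780 7001 5902; do
  ec2-authorize default -P tcp -p $port -s 0.0.0.0/0 --region eu-west-1
done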

Whilst we’re talking about security, your server is now open to the internet and all the nefarious persons out there, so you’ll be wanting to harden your server not least by resetting all the passwords to ones which aren’t publicly documented in the SampleApp user documentation!

Once you’ve updated your Security Group, you can connect to your server! If you installed the OBIEE and database auto start scripts (and if not, why not??) you should find OBIEE running just nicely on http://[your ip]:7780/analytics – note that the port is 7780, not 9704.


If you didn’t install the script, you will need to start the services manually per the SampleApp documentation. To connect to the server you can ssh in (using Terminal, PuTTY, etc.) or use VNC (Admin123 is the password). For VNC clients try Screen Share on Macs (installed by default), or RealVNC on Windows.
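
For the ssh route the incantation is the usual one; a sketch, where the key file is whichever one you added to the image earlier, and the IP is the public one from the instance details pane:

ssh -i ~/.ssh/my-sampleapp-key oracle@<public ip>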

Caveats & Disclaimers
  • Running a server on AWS EC2 costs real money, so watch out. Once you’ve put your credit card details in, Amazon will continue to charge your card whilst there are chargeable items on your account (EBS volumes, instances – running or not – and so on). You can get an idea of the scale of charges here.
  • As mentioned above, a server on the open internet is a lot more vulnerable than one virtualised on your local machine. You will get poked and probed, usually by automated scripts looking for open ports, weak passwords, and so on. SampleApp is designed to open the toybox of a pimped-out OBIEE deployment to you; it is not “hardened”, and you risk learning the tough way about the need for hardening if you’re not careful.
Cloning

Amazon EC2 supports taking a snapshot of a server, either for backup/rollback purposes or spinning up as a clone, using an Amazon Machine Image (AMI). From the Instances page, simply select “Create an Image” to build your AMI. You can then build another instance (or ten) from this AMI as needed, exact replicas of the server as it was at the point that you created the image.
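
This too can be scripted with the EC2 tools; a sketch, using the instance ID from the import above and a name of your choosing. Note that by default the instance is rebooted while the image is taken, and --no-reboot skips that at some risk to filesystem consistency:

ec2-create-image i-b07d3bf0 --name "sampleapp-v406-baseline" --region eu-west-1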

Lather, Rinse, and Repeat

There’s a whole host of VirtualBox “appliances” out there, and some of them, such as the developer-tools-focused ones, only really make sense as local VMs. But there are plenty that would benefit from a bit of “Cloud-isation”, where they’re too big or heavy to keep on your laptop all the time, but are handy to be able to spin up at will. A prime example of this for me is the EBS Vision demo database that we use for our BI Apps training. Oracle used to provide a pre-built Amazon image (known as an AMI) of this, but has since withdrawn it. However, Oracle do publish Oracle VM VirtualBox templates for EBS 12.1.3 and 12.2.3 (related blog), so from this, with a bit of leg-work and a big upload pipe, it’s a simple matter to brew your own AWS version, ready to run whenever you need it.

Categories: BI & Warehousing

CIFS performance problem

Bas Klaassen - Wed, 2014-09-10 01:04
Today we encountered some performance problems at our customer site. After checking with the customer, it seemed that the CIFS shares in particular were having problems. At first CIFS was getting slower, until the shares were not even accessible anymore. Restarting CIFS on the filer did solve the problem for a few minutes, but within half an hour the problems were back again. Checking the ...
Categories: APPS Blogs

2009 honda s2000 ultimate edition for sale

Ameed Taylor - Tue, 2014-09-09 18:40
Drive the S2000 gently and you probably won't be satisfied with the buzzy powertrain and busy ride. Tuned to perform at a fast clip, the S2000 can feel stiff and jittery on open roads. Wind out the engine and push its limits in corners, though, and you're in for a totally different, grin-inducing experience; that is what the Honda S2000 is all about.

Mazda's Miata feels almost large in comparison to the S2000. The cockpit is cramped no matter how small the occupants. The high shoulders of the S2000 hem in the driver and passenger, and the steering wheel sits low even at its highest adjustment point. Unusually for Honda, the controls aren't laid out neatly (there's not a lot of dash space to do so), and the big red Start button comes across as a gimmick. There's a lot of dark plastic, too, in the name of saving weight.

The 2009 Honda S2000 is one of the least practical mass-production cars on the planet. There's practically no interior or trunk storage, the cockpit's more cramped than the coach seats on a Boeing 757, and it's priced above $30,000. It is a classic roadster sports car with rear-wheel drive, a ragtop to open on sunny days, a six-speed manual transmission, and a rev-happy four-cylinder engine.
2009 red honda convertible s2000
A year ago Honda introduced the S2000 CR, the club-racer version of the standard S2000. The CR gets a full-body aerodynamic kit, high-performance Bridgestone tires, firmer suspension settings, a thicker anti-roll bar, and new wheels. A lightweight aluminum hardtop that cuts weight by around 90 pounds replaces the soft-top mechanism. Inside, the CR gets different fabric seats with yellow stitching, a new aluminum shifter knob, and carbon-fiber-look trim panels.

Standard equipment on the 2009 Honda S2000 includes electronic stability control and anti-lock brakes; however, side airbags, a feature now found on almost all new vehicles, aren't available.

Although the 2009 Honda S2000 has a dated design, the base edition stands out for its spectacular mix of style and performance, even without the impressive additions on the CR.

Cars.com reports other exterior highlights include stylish "high-intensity-discharge headlamps and 17-inch alloy wheels" that come standard on the 2009 Honda S2000. Edmunds lodges the most prominent criticism of the exterior of the 2009 Honda S2000, noting that while the new aerodynamic pieces on the CR "reduce high-speed aerodynamic lift by about 70 percent," they also "reduce the car's overall visual appeal by, oh, 79 percent." Reviews read through ebizsig.blogspot.com show that the exterior styling of the 2009 Honda S2000 is a big success, and Kelley Blue Book says the Honda S2000 "strikes a very un-Honda-like, slightly wicked poise" that can "resemble an angry cobra about to strike."
honda s2000 fiche technique 2009
Kelley Blue Book notes that "CR models include an aerodynamic body kit," together with "lift-reducing front and rear spoilers and a removable aluminum hard top in place of the traditional cloth" top on the standard Honda S2000.
According to the reviewers at Edmunds, the "2009 Honda S2000 is a compact two-seat roadster that's offered in two trims: standard and CR." Both trims share the same general profile, which Cars.com calls a "wedge-shaped profile that stands apart from other roadsters."

ConsumerGuide approves of the interior layout on the 2009 Honda S2000, claiming that the "S2000 has a snug cockpit, so everything is close at hand," and while the "digital bar-graph tachometer and digital speedometer are not the sports-car norm," they're "easy to read." Edmunds chimes in, noting that "just about all the controls you'll ever need are set up within a finger's extension of the steering wheel." One of the cooler interior features to find its way into a production car is the "new Peak-Power Indicator" on the 2009 Honda S2000 CR, a feature that Cars.com says will flash "a green light when peak power is reached." Kelley Blue Book gushes that the 2009 Honda S2000's "interior is full of pleasant surprises," including a "giant red start button on the dash" and "the long center console [that] sits up high, affording you the perfect perch on which to rest your arm."
2009 honda s2000 performance specs
The 2009 Honda S2000 enjoys better handling thanks to a quicker steering ratio and new tires, and the CR variant is a track-ready contender that can hold its own against more expensive European and American competition.

The EPA estimates that the 2009 Honda S2000, whether in standard or CR form, will get 18 mpg in the city and 25 on the highway. Most cars as powerful as the 2009 Honda S2000 pay a big penalty at the gas pump, but the small engine combined with lightweight construction on the Honda S2000 yields a relatively frugal performance machine.

Reviews read through ebizsig.blogspot.com show that the engine is happiest when running flat-out. Cars.com notes that "once it reaches 5,000 rpm or so, the S2000 lunges ahead like a rocket," and Edmunds adds that "piloting the 2009 Honda S2000 takes some getting used to, since peak power is delivered at nearly 8,000 rpm." ConsumerGuide reviewers love the engine and find the Honda S2000 "offers a surprising supply of usable power across a wide rpm range, mixed with ultrahigh-revving excitement." Although two different versions of the 2009 Honda S2000 are available, Edmunds reports that the only engine offered is a "2.2-liter four-cylinder that churns out 237 hp at a lofty 7,800 rpm and 162 pound-feet of torque at 6,800 rpm." Honda has tuned the engine on the Honda S2000 almost to the breaking point, with Car and Driver commenting that "the S2000's 2.2-liter four is basically maxed out."
modified honda s2000 turbo 2009 picture
Reviews read through ebizsig.blogspot.com also compliment the S2000's transmission for its smooth shifts and short throws. Kelley Blue Book claims that the engine and transmission combination makes for "startlingly quick performance," while the chassis adds "outstanding nimbleness" to the 2009 Honda S2000 package. Cars.com states that the four-cylinder engine on the Honda S2000 "mates with a six-speed manual transmission" that ConsumerGuide says delivers "manageable clutch action" and a "slick, quick-throw gearbox."

As good as the engine/transmission combination is, handling remains a hallmark of the 2009 S2000. Cars.com holds nothing back in praising the "razor-sharp steering, disciplined handling and athletic cornering ability" of the 2009 Honda S2000. Kelley Blue Book reviewers rave about the "nearly flat cornering behavior and extremely crisp response that allows" the 2009 Honda S2000 "to negotiate the corners with sure-footed tenacity." The Club Racer is even more impressive, with Car and Driver reporting it "is simply harder and sharper, with less body roll and tire scrubbing and more corner composure and stability under braking." Unfortunately, the price for all that performance is poor ride quality, and ConsumerGuide points out that "nearly every small bump and tar strip registers through the seats." On the positive side, ConsumerGuide also comments that "braking is swift and easily modulated" whether you are driving on the street or the track.
Categories: DBA Blogs
