Feed aggregator

Create GoldenGate 12.2 Manager

Michael Dinh - Sun, 2016-05-22 14:14

I typically don’t like to see WARNING if I can help it.

GoldenGate 12c has security features to prevent unauthorized access to Manager.

Be careful: an incorrect IPADDR or PROG value will prevent the pump extract from delivering to the target server.
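
For illustration, a more restrictive rule than the blanket IPADDR * entries shown in the parameter file below might pin collector (PROG SERVER) access to the pump's source host; the address here is hypothetical:

-- hypothetical source host running the pump extract
ACCESSRULE, PROG SERVER, IPADDR 192.168.56.10, ALLOW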

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ tail -100 ggserr.log
2016-05-22 12:25:07  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
2016-05-22 12:25:07  WARNING OGG-01877  Oracle GoldenGate Manager for Oracle, mgr.prm:  Missing explicit accessrule for server collector.
2016-05-22 12:25:07  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #7).
2016-05-22 12:25:07  INFO    OGG-00983  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager started (port 7901).
2016-05-22 12:25:09  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2016-05-22 12:25:46  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2016-05-22 12:25:51  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): stop mgr.
2016-05-22 12:25:51  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host [127.0.0.1]:39551 (STOP).
2016-05-22 12:25:51  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #7).
2016-05-22 12:25:51  WARNING OGG-00938  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager is stopping at user request.
2016-05-22 12:26:00  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
2016-05-22 12:26:00  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #2).
2016-05-22 12:26:00  INFO    OGG-00983  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager started (port 7901).

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ cat dirprm/mgr.prm
PORT 7901
DYNAMICPORTLIST 15100-15120
ACCESSRULE, PROG server, IPADDR *, ALLOW
ACCESSRULE, PROG *, IPADDR *, ALLOW
USERIDALIAS ggs_admin
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
-- AUTOSTART ER *
-- AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60
CHECKMINUTES 5
LAGCRITICALMINUTES 15
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$

Create GoldenGate 12.2 Wallet

Michael Dinh - Sun, 2016-05-22 13:44

So what’s different from this post versus other posts? I share my mistakes with you.

Did you know there was a DEFAULT domain? If you didn't, neither did I; I only found out by using

info credentialstore

alter credentialstore add user ggs_admin alias ggs_admin domain admin
USERIDALIAS ggs_admin DOMAIN admin

alter credentialstore add user ggs_admin alias ggs_admin
USERIDALIAS ggs_admin

oracle@arrow:thor:/u01/app/12.2.0.1/ggs02
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (arrow.localdomain) 1> create wallet

Created wallet at location 'dirwlt'.

Opened wallet at location 'dirwlt'.

GGSCI (arrow.localdomain) 2> add credentialstore

Credential store created in ./dircrd/.

GGSCI (arrow.localdomain) 3> alter credentialstore add user ggs_admin alias ggs_admin domain admin
Password:

Credential store in ./dircrd/ altered.

GGSCI (arrow.localdomain) 4> info credentialstore

Reading from ./dircrd/:

No information found in default domain OracleGoldenGate.

Other domains:

admin

To view other domains, use INFO CREDENTIALSTORE DOMAIN <domain>

GGSCI (arrow.localdomain) 5> info credentialstore domain admin

Reading from ./dircrd/:

Domain: admin

Alias: ggs_admin
Userid: ggs_admin

GGSCI (arrow.localdomain) 6> exit


oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow.localdomain) 1> alter credentialstore add user ggs_admin alias ggs_admin
Password:

Credential store in ./dircrd/ altered.

GGSCI (arrow.localdomain) 2> info credentialstore

Reading from ./dircrd/:

Default domain: OracleGoldenGate

  Alias: ggs_admin
  Userid: ggs_admin

Other domains:

  admin

To view other domains, use INFO CREDENTIALSTORE DOMAIN <domain>

GGSCI (arrow.localdomain) 3> exit
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$
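
Once the alias exists, a process parameter file can reference it instead of a clear-text password. A minimal extract sketch (the group name, trail and table are illustrative, not taken from this environment):

EXTRACT ehawk
USERIDALIAS ggs_admin DOMAIN admin
EXTTRAIL ./dirdat/ea
TABLE scott.*;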

 


TRUNCATEing a Table makes an UNUSABLE Index VALID again

Hemant K Chitale - Sun, 2016-05-22 09:54
Here's something I learned from Jonathan Lewis sometime ago.

If you set an Index to be UNUSABLE but later issue a TRUNCATE TABLE, the Index becomes VALID again --- i.e. the Index gets updated with rows subsequently inserted.

SQL> connect hemant/hemant
Connected.
SQL> drop table target_data purge;

Table dropped.

SQL> create table target_data as select * from source_data where 1=2;

Table created.

SQL> create index target_data_ndx_1
2 on target_data(owner, object_type, object_name);

Index created.

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> col segment_name format a30
SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 49
TARGET_DATA_NDX_1 INDEX 19

SQL>
SQL> col index_name format a30
SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 VALID

SQL>


So, I have a VALID Index on my Table.

I now make it UNUSABLE and add rows to it.

SQL> alter index target_Data_ndx_1 unusable;

Index altered.

SQL> select status
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1';

STATUS
--------
UNUSABLE

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 UNUSABLE

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 104

SQL>


Oracle actually drops the Index segment (so you don't see it in USER_SEGMENTS) when it is set to UNUSABLE, although the Index definition is still present.  The Index doesn't "grow" as the Segment doesn't exist.
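
A quick way to confirm this, assuming 11.2 or later where the data dictionary exposes the SEGMENT_CREATED column, is a query along these lines; it should report 'NO' while the Index Segment is gone:

select index_name, status, segment_created
from   user_indexes
where  table_name = 'TARGET_DATA';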

Let me TRUNCATE the table.

SQL> truncate table target_data;

Table truncated.

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE .0625
TARGET_DATA_NDX_1 INDEX .0625

SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 VALID

SQL>


Immediately after the TRUNCATE TABLE, the Index Segment is instantiated and the Index becomes VALID again.  So inserting rows will update the Index.  My last explicit command against the Index was ALTER INDEX ... UNUSABLE, but that is clearly no longer its current state !

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 49
TARGET_DATA_NDX_1 INDEX 19

SQL>


So, repopulating the Table has expanded the Index again.
.
.
.


Categories: DBA Blogs

Video : Indexing JSON Data in Oracle Database 12c

Tim Hall - Sun, 2016-05-22 03:25

Following on from last week’s post, today’s video is about indexing JSON data in Oracle Database 12c.

If videos aren’t your thing, you might want to read these articles, which the videos are based on.

The cameo in this video comes courtesy of Bertrand Drouvot, who was a silent extra in the previous video too. 

duplicate to a future date

Laurent Schneider - Sat, 2016-05-21 07:48

If you work with large databases, you often wait way too long for the clones. Typically you want to duplicate a 10TB database as of the production timestamp 9am, so you start at 9am and then you wait for hours.

Is it possible to start the clone, let's say, at midnight, and set UNTIL TIME to 9am?

No! You’ll get


RMAN-06617: UNTIL TIME (2016-05-21 09:00:00) is ahead of last NEXT TIME in archived logs (2016-05-20 23:58:52)

But… you could start to restore the datafiles at midnight.


sqlplus sys/***@db02 as sysdba <<EOF
  alter system set db_name='DB01' scope=spfile;
  alter system set db_unique_name='DB02' scope=spfile;
  startup force nomount
EOF

rman target sys/***@db01 auxiliary sys/***@db02 <<EOF
   restore clone primary controlfile;
   alter clone database mount;

run {
   set newname for datafile  1 to
 "/db02/system01.dbf";
   set newname for datafile  2 to
 "/db02/sysaux01.dbf";
   set newname for datafile  3 to
 "/db02/undotbs1_02.dbf";
   set newname for datafile  4 to
 "/db02/users01.dbf";
   restore clone database
   ;
}
EOF

This is exactly what RMAN does when you issue a duplicate. You could use the supported RESTORE command instead of the unsupported RESTORE CLONE command, but then it gets a bit more complex, as you need to find out the location of your backups and so on.

At 9am, you issue your duplicate, and you’ll see


skipping datafile 1; already restored to file /db02/system01.dbf
skipping datafile 2; already restored to file /db02/sysaux01.dbf
skipping datafile 3; already restored to file /db02/undotbs1_02.dbf
skipping datafile 4; already restored to file /db02/users01.dbf
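
For reference, the 9am duplicate itself might look something like the sketch below; the connection strings, target name and UNTIL TIME format are illustrative and must be adapted to your environment:

rman target sys/***@db01 auxiliary sys/***@db02 <<EOF
  duplicate target database to DB02
    until time "to_date('2016-05-21 09:00:00','YYYY-MM-DD HH24:MI:SS')";
EOF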

You just saved nine hours 🙂

“What do you mean there’s line breaks in the address?” said SQLLDR

RDBMS Insight - Fri, 2016-05-20 18:55

I had a large-ish CSV to load and a problem: line breaks inside some of the delimited fields.

Like these two records:

one, two, "three beans", four
five, six, "seven
beans", "eight wonderful beans"

SQL Loader simply won’t handle this, as plenty of sad forum posts attest. The file needs pre-processing and here is a little python script to do it, adapted from Jmoreland91’s solution on Stack Overflow.

import sys, csv, os
 
def hrtstrip (inputfile,outputfile,newtext):
    print("Input file " + inputfile)
    print("Output file " + outputfile)
    with open(inputfile, "r") as input:
       with open(outputfile, "w") as output:
          w = csv.writer(output, delimiter=',', quotechar='"',
                         quoting=csv.QUOTE_NONNUMERIC, lineterminator='\n')
          for record in csv.reader(input):
             w.writerow(tuple(s.replace("\n", newtext) for s in record))
    print("All done")
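
The script only defines the function, so you still have to call it yourself; a hypothetical invocation (the file names and replacement text are examples, not from the original post) would be:

# replace embedded newlines with a single space before loading with SQL*Loader
hrtstrip("addresses_raw.csv", "addresses_clean.csv", " ")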

Thanks to Jmoreland91 for this. If you use it, give him an updoot.

edit – Jason Bucata (@tech31842) tweeted me another StackOverflow with a number of scripts in assorted languages: http://stackoverflow.com/questions/33994244/how-to-remove-newlines-inside-csv-cells-using-regex-terminal-tools

Categories: DBA Blogs

Application Development Platform (Java Cloud, Application Container Cloud, Developer Cloud) New Release 16.2.1

Oracle Cloud has already moved to the latest 16.2.1 release of Application Development platform with Java Cloud Service,  Application Container Cloud Service, and ...

Categories: DBA Blogs

The Digital Hotel Key

Oracle AppsLab - Fri, 2016-05-20 15:04

Generally, I’m not in favor of consolidating important stuff onto my phone, e.g. credit cards, etc. because if I lose my phone, I’ll lose all that stuff too.

However, I’ve been waiting to try out a digital hotel key, i.e. using my phone to unlock my hotel room. Only a few hotels and hotel chains have this technology in place, and recently, I finally stayed at one that does, the Hilton San Jose.

Much to my surprise, and Noel’s (@noelportugal), the digital key doesn’t use NFC. We’d assumed it would, given NFC is fairly common in newer hotels.

Nope, it uses Bluetooth, and when you get close to your room or any door you have access to unlock, e.g. the fitness center, the key enables.

Then, touch to unlock, just like it says, and within a second or so, the door is unlocked. It’s not instantaneous, like using the key, which uses NFC, but still pretty cool.

Ironically, the spare, physical key they gave me for “just in case” scenarios failed to work. I made the mistake of leaving my phone in the room to charge, taking the spare key while I ran downstairs to get some food, and the physical key didn’t work.

Anyway, the feature worked as expected, which is always a win. Those plastic keys won’t disappear anytime soon, and if you lose your phone while you’re using the digital hotel key, you’re extra hosed.

Still, I liked it and will definitely use it again whenever it’s available because it made me feel like future man and stuff.

Find the comments.

Understanding The Database Options – AutoClose

Pythian Group - Fri, 2016-05-20 13:03

In this blog post we’ll cover how AutoClose works, and why it’s recommended to disable this property.

When the SQL Server service starts, it locates the master system database and requests exclusive locks on its data and log files.

After that, SQL Server reads the sys.master_files system view to find all the data files (.mdf, .ndf) and transaction log files (.ldf) of every database on the instance, and requests exclusive locks on those files as well in order to initialize each database.

The first way the AutoClose property can hurt performance involves the acquisition of this exclusive lock on the data and log files.

If the property is off, this lock is held from service startup until shutdown. If the property is enabled, however, the lock is released as soon as there is no more activity in the database, and the data and log files become available to any other process.

At first this may seem very interesting, because we could manipulate the data and log files and perform some administrative tasks, such as a backup.

Now imagine that, during such a backup, an application needs to access the database. What would happen?

SQL Server would return an error stating that it could not acquire an exclusive lock on the files, and the database initialization would fail.

Another big performance problem resulting from the AutoClose property is related to the use of the two main memory areas: the Buffer Cache and the Plan Cache.

Whenever a query is executed, SQL Server generates an execution plan that is kept in an area of memory called the Plan Cache. This area stores the generated execution plans so that they can be reused if the query is executed again.

After the execution plan is generated, its operators are executed and the data pages read by the query are stored in an area of memory called the Buffer Cache. This area keeps data pages in memory so that later requests don't have to go back to the disk subsystem, optimizing subsequent I/O requests.

When the AutoClose property is enabled and there are no more connections to the database, all of its data pages and execution plans are removed from memory, causing a big performance drop the next time the database is accessed.

Here is a small demonstration to make this behavior clear.

First, enable the AutoClose property on the AdventureWorks2012 database, as in the script below:

USE master
GO
ALTER DATABASE [AdventureWorks2012] SET AUTO_CLOSE ON WITH NO_WAIT
GO

Then let’s perform some queries in the database AdventureWorks2012, as script below:

USE AdventureWorks2012
GO
SELECT * FROM Person.person
GO
SELECT * FROM Sales.salesorderheader

 

After executing the queries, we can use the DMV sys.dm_os_buffer_descriptors to analyze the number of data pages that have been allocated in memory for the AdventureWorks2012 database, as illustrated in Figure 1:

Figure 1 – Data pages that have been allocated in memory

With the DMVs sys.dm_exec_cached_plans and sys.dm_exec_sql_text we can check the execution plans that were stored in memory for the executed queries, as illustrated in Figure 2.

Figure 2 – Execution plans stored in memory

When all connections to the AdventureWorks2012 database are closed, all of these memory areas are cleared, as the queries below and Figure 3 show:

--Amount of data pages in memory
SELECT
    COUNT(*) AS TotalPages,
    DB_NAME(database_id) AS DBname
FROM
    sys.dm_os_buffer_descriptors
GROUP BY
    DB_NAME(database_id)
ORDER BY
    1 DESC

--Amount of in-memory execution plans
SELECT
    COUNT(*) AS TotalPlanos
FROM
    sys.dm_exec_cached_plans
CROSS APPLY
    sys.dm_exec_sql_text(plan_handle)
WHERE
    [dbid] = 7 AND objtype = 'Adhoc'

Figure 3 – Memory usage after closing all connections

This demonstration makes it simple to conclude that the AutoClose property should always be disabled, given the performance problems it can bring to a high-performance database.
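
For reference, turning the property back off mirrors the statement used to enable it earlier:

USE master
GO
ALTER DATABASE [AdventureWorks2012] SET AUTO_CLOSE OFF WITH NO_WAIT
GO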

Categories: DBA Blogs

An Effective Approach to Migrate Dynamic Thrift Data to CQL: Part 2

Pythian Group - Fri, 2016-05-20 12:22

Note that this post is Part 2 of a 3-part blog series. If you haven’t read Part 1 of this series, please do so before continuing. Part 1 gives some background knowledge of the problem we’re trying to solve, as well as some fundamental concepts used in the following discussion. The chapter number sequencing also follows that from Part 1. (note: Part 3 can be found here)

4. Static and Dynamic Cassandra Table Definition

In Thrift, Cassandra tables can be defined either statically or dynamically. When a table is defined dynamically, an application can add new columns on the fly and the column definition for each storage row doesn't necessarily need to be the same. Although this is a little more flexible, it can be problematic as well, because the dynamic column definition information is only available in the application and is invisible to the outside world.

In CQL, however, tables are always statically defined, which means that any column and its type information has to be defined in advance before it can be used. Each row in a CQL table has exactly the same columns and type definitions.

In CQL, the dynamism of a Thrift table definition can be achieved through clustering columns and/or more advanced column types like collections and user-defined types (UDTs).

4.1 Static Table Definition

In Thrift, a statically defined table has column_metadata information in the table definition statement, as following:

create column family avg_grade
    with key_validation_class = Int32Type
     and comparator = UTF8Type
     and column_metadata = [
       {column_name: class_id, validation_class: Int32Type},
       {column_name: grade, validation_class: FloatType}
     ]

A strictly equivalent CQL table definition is like this (note the “WITH COMPACT STORAGE” table property):

create table avg_grade (
    key int primary key,
    class_id int,
    grade float
) with compact storage

A statically defined table created in either the Thrift utility (cassandra-cli) or the CQL utility (cqlsh) can be accessed from the other with no problem. One difference between the Thrift and CQL definitions is that in the CQL definition the row key has a name, but in the Thrift definition it doesn't. In this case, when accessing a table defined in Thrift, CQL uses a default name ("key") for the row key.

4.2 Dynamic Table Definition

In Thrift, a dynamically defined table does NOT have column_metadata information in the table definition statement. Typically, time-series applications adopt dynamic table definitions. For example, for a sensor monitoring application, we may use the sensor id as the storage row key, and for each sensor we want to record the different event values detected by the sensor within a period of time. An example table definition is as follows:

create column family sensor_data
   with key_validation_class = Int32Type
    and comparator = TimeUUIDType
    and default_validation_class = DecimalType;

Suppose that for this table, 2 events are recorded for sensor 1 and 1 event is recorded for sensor 2. The output from the cassandra-cli utility looks like this:

-------------------
RowKey: 1
=> (name=1171d7e0-14d2-11e6-858b-5f3b22f4f11c, value=21.5, timestamp=1462680314974000)
=> (name=23371940-14d2-11e6-858b-5f3b22f4f11c, value=32.1, timestamp=1462680344792000)
-------------------
RowKey: 2
=> (name=7537fcf0-14d2-11e6-858b-5f3b22f4f11c, value=11.0, timestamp=1462680482367000)

The output above shows that the columns for each row are dynamically generated by the application and can differ between rows. In CQL, a strictly equivalent table definition and output for the above dynamic Thrift table is as below:

CREATE TABLE sensor_data (
    key int,
    column1 timeuuid,
    value decimal,
    PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE
key  | column1                              | value
-----+--------------------------------------+-------------------------
   1 | 1171d7e0-14d2-11e6-858b-5f3b22f4f11c | 0E-1077248000
   1 | 23371940-14d2-11e6-858b-5f3b22f4f11c | -8.58993459E-1077939396
   2 | 7537fcf0-14d2-11e6-858b-5f3b22f4f11c | 0E-1076232192

Since the columns are generated on the fly by the application, CQL doesn't know the column names in advance, so it uses the default column name "column1" (and also the default key name "key") in its definition. Functionally, this can be transformed into an equivalent, more descriptive definition by using the "ALTER TABLE" CQL command to rename the columns (e.g. "key" to "sensor_id", "column1" to "event_time"), as below; the rename statements themselves are sketched after the definition:

CREATE TABLE sensor_data (
    sensor_id int,
    event_time timeuuid,
    value decimal,
    PRIMARY KEY (sensor_id, event_time)
) WITH COMPACT STORAGE
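
The rename mentioned above could be done with statements along these lines (a sketch assuming the default Thrift-derived names shown earlier; in CQL, RENAME generally applies only to primary key columns, which both of these are):

ALTER TABLE sensor_data RENAME key TO sensor_id;
ALTER TABLE sensor_data RENAME column1 TO event_time;
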
4.3 Mixed Table Definition

In Thrift, a table can also be in a mixed mode, which means that when the table is created, part of its column information is defined in column_metadata, just like a static table, but at runtime a Thrift-based application can still add more columns on the fly.

The table below, blogs, is such an example.  This table has one statically defined column, "author". There are also 3 more columns (tags:category, tags:rating, and tags:recommend) defined on the fly for RowKey 1.

create column family blogs
    with key_validation_class = Int32Type
     and comparator = UTF8Type
     and column_metadata = [
       {column_name: author, validation_class: UTF8Type}
     ]
-------------------
RowKey: 1
=> (name=author, value=Donald, timestamp=1462720696717000)
=> (name=tags:category, value=music, timestamp=1462720526053000)
=> (name=tags:rating, value=A, timestamp=1462720564590000)
=> (name=tags:recommend, value=Yes, timestamp=1462720640817000)
-------------------
RowKey: 2
=> (name=author, value=John, timestamp=1462720690911000)

When examining the table in CQL, in both the table schema and the data output we can only see the statically defined columns. The dynamically added columns are NOT displayed.

CREATE TABLE blogs (
    key int PRIMARY KEY,
    author text
) WITH COMPACT STORAGE
key  | author
-----+--------
1    | Donald
2    | John
Categories: DBA Blogs

UNPIVOT Examples

Jeff Moss - Fri, 2016-05-20 07:47

LittleArduinoProjects#203 Homopolar Motor

Paul Gallagher - Fri, 2016-05-20 06:32
What happens when electric fields cut across magnetic fields? A force is generated, and the homopolar motor is the classic demonstration.

As always, all notes, schematics and code are in the Little Electronics & Arduino Projects repo on GitHub.


Oracle DBA and SQL Celebration Day June 7th, 2016

Marco Gralike - Fri, 2016-05-20 05:34
On Tuesday, June 7th, 2016 the Dutch Oracle Usergroup (OGh) will organize the Oracle DBA…

Creating custom Fusion Application User Interfaces using Oracle JET

Angelo Santagata - Fri, 2016-05-20 05:16
Introduction

JET is Oracle's new mobile toolkit specifically written for developers to help them build client side applications using JavaScript. Oracle Fusion Applications implementers are often given the requirement to create mobile, or desktop browser, based custom screens for Fusion Applications. There are many options available to the developer, for example Oracle ADF (Java based) and Oracle JET (JavaScript based). This blog article gives the reader a tutorial-style document on how to build a hybrid application using data from Oracle Sales Cloud. It is worth highlighting that although this tutorial uses Sales Cloud, the technique below is equally applicable to HCM Cloud, or any other Oracle SaaS cloud product which exposes a REST API.

Pre-Requisites

It is assumed that you've already read the getting started guide on the Oracle Jet website and installed all the pre-requisites. In addition if you are to create a mobile application then you will also need to install the mobile SDKs from either Apple (XCode) or Android (Android SDK).

 

You must have an Apple Mac to be able to install the Apple iOS developer kit (XCode); it is not possible to run XCode on a Windows PC.

 Dealing with SaaS Security

Before building the application itself we need to start executing the REST calls and getting our data, and security is going to be the first hurdle we need to cross. Most Sales Cloud installations allow "Basic Authentication" to their APIs, so in REST this involves creating an HTTP header called "Authorization" with the value "Basic <your username:password>", with the <username:password> section encoded as Base64. An alternative approach, used when embedding the application within Oracle SaaS, is to use a generated JWT token. This token is generated by Oracle SaaS using either Groovy or expression language. When embedding the application in Oracle SaaS you have the option of passing parameters; the JWT token would be one of these parameters and can subsequently be used instead of the <username:password>. When using a JWT token the Authorization string changes slightly so that instead of "Basic" it becomes "Bearer":

 

Usage                   Header Name      Header Value
Basic Authentication    Authorization    Basic <your username:password base64 encoded>
JWT Authentication      Authorization    Bearer <JWT Token>

 

Groovy Script in SalesCloud to generate a JWT Token

def thirdpartyapplicationurl = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("My3rdPartyApplication" )
def crmkey= (new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken())
def url = thirdpartyapplicationurl + "?jwt=" + crmkey
return (url)

Expression Language in Fusion SaaS (HCM, Sales, ERP etc) to generate a JWT Token

#{EndPointProvider.externalEndpointByModuleShortName['My3rdPartApplication']}?jwt=#{applCoreSecuredToken.trustToken}
Getting the data out of Fusion Applications using the REST API

When retrieving  data from Sales Cloud we need to make sure we get the right data, not too much and not too little. Oracle Sales Cloud, like many other Oracle SaaS products, now supports the REST API for inbound and outbound data access. Oracle HCM also has a REST API but at the time of writing this article, the API is in controlled availability.

Looking at the documentation hosted on the Oracle Help Center: http://docs.oracle.com/cloud/latest/salescs_gs/FAAPS/

The REST call to get all Sales Cloud Opportunities looks like this :

https://yourCRMServer/salesApi/resources/latest/opportunities

If you execute the above REST call you will notice that the resulting payload is large, some would say huge. There are good reasons for this: firstly, the Sales Cloud Opportunity object contains a large number of fields; secondly, the result contains not only data but also metadata; and finally, the request above is a select-all query. The metadata includes links to child collections, links to Lists of Values, what tabs are visible in Sales Cloud, custom objects, flexfields etc. Additionally, the query we just executed is the equivalent of a select * from table, i.e. it brings back everything, so we'll also need to fix that.

 

Example snippet of a Sales Cloud Opportunity REST response showing custom fields, tabs visible, child collections etc.

"Opportunity_NewQuote_14047462719341_Layout6": "https://mybigm.bigmachines.com/sso/saml_request.jsp?RelayState=/commerce/buyside/document.jsp?process=quickstart_commerce_process_bmClone_4%26formaction=create%26_partnerOpportunityId=3000000xxx44105%26_partnerIdentifier=fusion%26_partnerAccountId=100000001941037",
  "Opportunity_NewQuote_14047462719341_Layout6_Layout7": "https://mybigMmachine.bigmachines.com/sso/saml_request.jsp?RelayState=/commerce/buyside/document.jsp?process=quickstart_commerce_process_bmClone_4%26formaction=create%26_partnerOpportunityId=300000060xxxx5%26_partnerIdentifier=fusion%26_partnerAccountId=100000001941037",
  "ExtnFuseOpportunityEditLayout7Expr": "false",
  "ExtnFuseOpportunityEditLayout6Expr": "false",
  "ExtnFuseOpportunityCreateLayout3Expr": "false",
  "Opportunity_NewQuote_14047462719341_Layout8": "https://mybigm-demo.bigmachines.com/sso/saml_request.jsp?RelayState=/commerce/buyside/document.jsp?process=quickstart_commerce_process_bmClone_4%26formaction=create%26_partnerOpportunityId=300000060744105%26_partnerIdentifier=fusion%26_partnerAccountId=100000001941037",
  "ExtnFuseOpportunityEditLayout8Expr": "false",
  "CreateProject_c": null,
  "Opportunity_DocumentsCloud_14399346021091": "https://mydoccloud.documents.us2.oraclecloud.com/documents/embed/link/LF6F00719BA6xxxxxx8FBEFEC24286/folder/FE3D00BBxxxxxxxxxxEC24286/lyt=grid",
  "Opportunity_DocsCloud_14552023624601": "https://mydocscserver.domain.com:7002/SalesCloudDocCloudServlet/doccloud?objectnumber=2169&objecttype=OPPORTUNITY&jwt=eyJhxxxxxy1pqzv2JK0DX-xxxvAn5r9aQixtpxhNBNG9AljMLfOsxlLiCgE5L0bAI",
  "links": [
    {
      "rel": "self",
      "href": "https://mycrmserver-crm.oracledemos.com:443/salesApi/resources/11.1.10/opportunities/2169",
      "name": "opportunities",
      "kind": "item",
      "properties": {
        "changeIndicator": "ACED0005737200136A6176612E7574696C2E41727261794C6973747881D21D99C7619D03000149000473697A65787000000002770400000010737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B020000787200106A6176612E6C616E672E4F626A65637400000000000000000000007870000000017371007E00020000000178"
      }
    },
    {
      "rel": "canonical",
      "href": "https://mycrmserver-crm.oracledemos.com:443/salesApi/resources/11.1.10/opportunities/2169",
      "name": "opportunities",
      "kind": "item"
    },
    {
      "rel": "lov",
      "href": "https://mycrmserver-crm.oracledemos.com:443/salesApi/resources/11.1.10/opportunities/2169/lov/SalesStageLOV",
      "name": "SalesStageLOV",
      "kind": "collection"
    },

Thankfully we can tell the REST API that we :

  • Only want to see the data, achieved by adding onlyData=true parameter
  • Only want to see the following fields OpportunityNumber,Name,CustomerName (TargetPartyName), achieved by adding a fields=<fieldName,fieldname> parameter
  • Only want to see a max of 10 rows, achieved by adding the limit=<value> parameter
  • Only want to see open opportunities, achieved by adding the q= parameter with a query string, in our case StatusCode=OPEN

If we want to get the data in pages/blocks we can use the offset parameter. The offset parameter tells the REST service to get the data starting "from" this offset. Using offset and limit together we can effectively page through the data returned by the Oracle Fusion Applications REST service.
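
For example, keeping the limit at 10 and stepping the offset pages through the result set (host and other parameters as above):

.../opportunities?onlyData=true&limit=10&offset=0     (rows 1-10)
.../opportunities?onlyData=true&limit=10&offset=10    (rows 11-20)
.../opportunities?onlyData=true&limit=10&offset=20    (rows 21-30)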

Our final REST request URL would look like :

https://myCRMServer.oracledemos.com/salesApi/resources/latest/opportunities?onlyData=true&fields=OptyNumber,Name,Revenue,TargetPartyName,StatusCode&q=StatusCode=OPEN&offset=0&limit=10

The Oracle Fusion Applications REST API is documented in the relevant Oracle Fusion Applications Documentation, e.g. for Sales Cloud, http://docs.oracle.com/cloud/latest/salescs_gs/FAAPS/ but it is also worth noting that the Oracle Fusion Applications REST Services are simply an implementation of the Oracle ADF Business Components REST Services, these are very well documented here  https://docs.oracle.com/middleware/1221/adf/develop/GUID-8F85F6FA-1A13-4111-BBDB-1195445CB630.htm#ADFFD53992

Our final tuned JSON result from the REST service will look something like this (truncated) :

{
  "items": [
    {
      "Name": "Custom Sentinel Power Server @ Eagle",
      "OptyNumber": "147790",
      "StatusCode": "OPEN",
      "TargetPartyName": "Eagle Software Inc",
      "Revenue": 104000
    },
    {
      "Name": "Ultra Servers @ Beutelschies & Company",
      "OptyNumber": "150790",
      "StatusCode": "OPEN",
      "TargetPartyName": "Beutelschies & Company",
      "Revenue": 175000
    },
    {
      "Name": "Diablo Technologies 1012",
      "OptyNumber": "176800",
      "StatusCode": "OPEN",
      "TargetPartyName": "Diablo Technologies",
      "Revenue": 23650
    }
  ]
}
Creating the Hybrid Application

Now that we have our data source defined we can start to build the application. We want this application to be available on a mobile device and therefore we will create a "Mobile Hybrid" application using Oracle JET, using the navDrawer template.

yo oraclejet:hybrid OSCOptyList --template=navDrawer --platforms=android

Once the yeoman script has built your application, you can test the (basic) application using the following two commands.

grunt build --platform=android 
grunt serve --platform=android --web=true

The second grunt serve command has a web=true parameter at the end; this tells the script that we're going to be testing this in our browser and not on the device itself. When this is run you should see a basic shell [empty] application in your browser window.


Building Our JavaScript UI

Now that we have our data source defined we can get on to the task of building the JET user interface. Previously you executed the yo oraclejet:hybrid command, which created a hybrid application using a template. Opening the resulting project in an IDE, like NetBeans, we can see that the project template has created a collection of files, one of which is "dashboard.html" (marked 1 in the image); edit this file in your editor.

dashboard.html

 

Within the file, delete everything and replace it with this snippet of HTML code:

<div class="oj-hybrid-padding">
    <div class="oj-flex">
        <div class="oj-flex-item">
            <button id= "prevButton" 
                    data-bind="click: previousPage, 
                       ojComponent: { component: 'ojButton', label: 'Previous' }">
            </button>
            <button id= "nextButton"
                    data-bind="click: nextPage, 
                       ojComponent: { component: 'ojButton', label: 'Next' }">
            </button>
        </div>
    </div>
    <div class="oj-flex-item">    
        <div class="oj-panel oj-panel-alt1 oj-margin">
            <table id="table" summary="Opportunity List" aria-label="Opportunity List"
                   data-bind="ojComponent: {component: 'ojTable', 
                                data: opportunityDataSource, 
                                columnsDefault: {sortable: 'none'}, 
                                columns: [{headerText: 'Opty Number', 
                                           field: 'OptyNumber'},
                                          {headerText: 'Name', 
                                           field: 'Name'},
                                          {headerText: 'Revenue', 
                                           field: 'Revenue'},
                                          {headerText: 'Customer Name', 
                                           field: 'TargetPartyName'},
                                          {headerText: 'Status Code', 
                                           field: 'StatusCode'}
           ]}">
            </table>
        </div>    
    </div>
</div>

The above piece of HTML adds a JET table to the page; for prettiness we've wrapped the table in a decorative panel and added next and previous buttons. The table definition tells Oracle JET that the data is coming from a JavaScript object called "opportunityDataSource"; it also defines the columns, the column header text, and that the columns are not sortable. The button definitions reference two functions in our JavaScript (to follow) which will paginate the data.

Building The logic

We can now move on to the JavaScript side of things, that is, the part that gets the data from Sales Cloud and makes it available to the table object in the HTML file. For this simplistic example we'll get the data directly from Sales Cloud and display it in the table, with no caching and nothing fancy like collection models for pagination.

Edit the dashboard.js file, marked as 2 in the above image. This file is a RequireJS AMD (Asynchronous Module Definition) module and is pre-populated to support the dashboard.html page.

Within this file, cut-n-paste the following JavaScript snippet.

define(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojtable', 'ojs/ojbutton'],
        function (oj, ko, $) {
            function DashboardViewModel() {
                var self = this;
                var offset = 0;
                var limit = 10;
                var pageSize = 10;
                var nextButtonActive = ko.observable(true);
                var prevButtonActive = ko.observable(true);
                //
                self.optyList = ko.observableArray([{Name: "Fetching data"}]);
                console.log('Data=' + self.optyList);
                self.opportunityDataSource = new oj.ArrayTableDataSource(self.optyList, {idAttribute: 'Name'});
                self.refresh = function () {
                    console.log("fetching data");
                    var hostname = "https://yourCRMServer.domain.com";
                    var queryString = "/salesApi/resources/latest/opportunities?onlyData=true&fields=OptyNumber,Name,Revenue,TargetPartyName,StatusCode&q=StatusCode=OPEN&limit=10&offset=" + offset;
                    console.log(queryString);
                    $.ajax(hostname + queryString,
                            {
                                method: "GET",
                                dataType: "json",
                                headers: {"Authorization": "Basic " + btoa("username:password")},
                                // Alternative headers if using a JWT token:
                                // headers: {"Authorization": "Bearer " + jwttoken},
                                success: function (data)
                                {
                                    self.optyList(data.items);
                                    console.log('Data returned ' + JSON.stringify(data.items));
                                    console.log("Rows Returned"+self.optyList().length);
                                    // Enable / Disable the next/prev button based on results of query
                                    if (self.optyList().length < limit)
                                    {
                                        $('#nextButton').attr("disabled", true);
                                    } else
                                    {
                                        $('#nextButton').attr("disabled", false);
                                    }
                                    if (offset === 0)
                                        $('#prevButton').attr("disabled", true);
                                },
                                error: function (jqXHR, textStatus, errorThrown)
                                {
                                    console.log(textStatus, errorThrown);
                                }
                            }
                    );
                };
                // Handlers for buttons
                self.nextPage = function ()
                {

                    offset = offset + pageSize;
                    console.log("off set=" + offset);
                    self.refresh();
                };
                self.previousPage = function ()
                {
                    offset = offset - pageSize;
                    if (offset < 0)
                        offset = 0;
                    self.refresh();
                };
                // Initial Refresh
                self.refresh();
            }
            
            return new DashboardViewModel;
        }
);
Let's examine the code

Line 1: Here we've modified the standard define so that it includes an 'ojs/ojtable' reference. This tells RequireJS, which the JET toolkit uses, that this piece of JavaScript uses a JET table object
Line 8 & 9 : These lines maintain variables to indicate if the button should be enabled or not
Line 11: Here we create a variable called optyList; importantly, it is created as a knockout observableArray.
Line 13: Here we create another variable called "opportunityDataSource", which is the variable the HTML page will reference. The main difference here is that this variable is of type oj.ArrayTableDataSource and that the primary key is OptyNumber
Lines 14-47 :  Here we define a function called "refresh". When this JavaScript function is called we execute a REST call back to Sales Cloud using jQuery's ajax call. This call retrieves the data and then populates the optyList knockout data source with the data from the REST call. Note that we don't assign the results to the optyList variable directly; we purposely pass a child array called "items". If you execute the REST call we previously discussed, you'll note that the data is actually stored in an array called items
Line 23 : This line is defining the headers, specifically in this case we're defining a header called "Authorization" , with the username & password formatted as "username:password" and then the base64 encoded.
Line 24-25  :These lines define an alternative header which would be appropriate if a JWT token was being used. This token would be passed in as a parameter rather than being hardcoded
Lines 31-40 : These query the results of the query and determine if the next and previous buttons should be enabled or not using jQuery to toggle the disabled attribute
Lines 50-63 : These manage the next/previous button events
Finally on line 65 we execute the refresh() method when the module is initiated.

Running the example on your mobile

To run the example on your mobile device, execute the following commands

grunt build --platform=android 
grunt serve --platform=android

or if you want to test on a device

grunt serve --platform=android -destination=[device or emulator name]

If all is well you should see a table of data populated from Oracle Sales Cloud

 

For more information on building JavaScript applications with the Oracle JET toolkit, make sure to check out our other blog articles on JET here, the Oracle JET website here, and the excellent Oracle JET YouTube channel here

Running the example on the browser and CORS

If you try to run the example in your browser you'll find it probably won't work. If you look at the browser console (Ctrl+Shift+I on most browsers) you'll probably see that the error was something like "XMLHttpRequest cannot load..." etc.


This is because the code has violated "Cross Origin Scripting" rules. In a nutshell, a JavaScript application cannot access a resource that was not served up by the same server the application itself was served from. In my case the application was served up by NetBeans on http://localhost:8090, whereas the REST service from Sales Cloud is on a different server. Thankfully there is a solution called "CORS". CORS stands for Cross Origin Resource Sharing and is a standard for solving this problem; for more information on CORS see the Wikipedia article, or other articles on the internet.

Configuring CORS in Fusion Applications

For our application to work in a web browser we need to enable CORS in Fusion Applications. We do this with the following steps:

  1. Log into Fusion Applications (Sales Cloud, HCM etc.) using a user who has access to "Setup and Maintenance"
  2. Access the Setup and Maintenance screens
  3. Search for Manage Administrator Profile Values and then navigate to that task
  4. Search for the "Allowed Domains" profile name (case sensitive!!)
  5. Within this profile name you see a profile option called "site"; this profile option has a profile value
  6. Within the profile value add the hostname, and port number, of the application hosting your JavaScript application. If you want to allow "ALL" domains set this value to "*" (a single asterisk)
     WARNING : Ensure you understand the security implication of allowing ALL domains using the asterisk notation!
  7. Save and Close and then retry running your JET Application in your browser.
CORS Settings in Setup and Maintenance

If all is good when you run the application on your browser, or mobile device, you'll now see the application running correctly.

Running JET Application

 

Final Note on Security

To keep this example simple, the security username/password was hard-coded in the mobile application, which is not suitable for a real-world application. For a real application you would create a configuration screen, or use system preferences, to collect and store the username, password and the Sales Cloud server URL, which would then be used in the application.

If the JET Application is to be embedded inside a Fusion Applications Page then you will want to use JWT Token authentication. Modify the example so that the JWT token is passed into the application URL as a parameter and then use that in the JavaScript (lines 24-25) accordingly.

For more information on JWT Tokens in Fusion Applications see these blog entries (Link 1, Link 2) and of course the documentation

Conclusion

As we've seen above, it's quite straightforward to create mobile, and browser, applications using the Oracle JET framework. The above example was quite simple and only queried data; a real application would also have some write/delete/update operations, and therefore you would want to start looking at the JET Common Model and Collection Framework (DocLink) instead. Additionally, in the above example we queried data directly from a single Sales Cloud instance and did no processing on it. It's very likely that a single mobile application will need to get its data from multiple data sources and will require some backend services to pre-process and probably post-process the data, in essence providing an API. We call this backend an "MBaaS", i.e. Mobile Backend As A Service; Oracle provides an MBaaS in its PaaS suite of products, called "Mobile Cloud Service".

In a future article we will explore how to use Oracle Mobile Cloud Service (Oracle MCS) to query Sales Cloud and Service Cloud and provide an API to the client, which would use the more advanced technique of the JET Common Model/Collection framework.

 

 

Database in Microsoft Azure

Pat Shuff - Fri, 2016-05-20 01:07
Today we are going to look at what it takes to install Oracle Database Enterprise Edition 12c in Microsoft Azure. We previously looked at deploying Application Express in Azure. The steps to deploy Enterprise Edition are almost the same. We start with the same process by logging into the portal, clicking on New, searching for Oracle, and looking for the enterprise edition of the database.

In this example we are going to select Enterprise Edition 12c.

The two links at the bottom link to the licensing and privacy statements on the Oracle website. Note that the license is not included for this edition of the database and you need to adhere to the licensing restrictions of a perpetual license for a cloud deployment. If we refer back to our calculations for a perpetual license in AWS, amortizing the database license over four years brings this cost to $3,720/month for a four core server as recommended by Microsoft. Note that we can go with a smaller core count and smaller memory count, unlike with Amazon. AWS restricts us to a minimum core count for the Oracle database, but Azure allows you to go below the suggested minimums to a system that is unusable. It is impossible to run the database on a single core with 1 GB of RAM, but the option is presented to you. From the previous screen, we click Create to start the deployment. We can only deploy into a Classic Virtual Machine instance.

The first things that we need to define are the server name, username to log in as, and password or ssh keys for the username. We can also define a new storage group or pull from an existing storage group. For our test either works.

When we look at the shapes suggested by Microsoft, a D12 Standard shape (4 cores and 28 GB) is the smallest configuration. This comes in at $290/month or roughly $10/day. This is a little more than we want to pay for a simple test system. We can get by with 2 cores and 3.75 GB for a simple experiment. We can do this at $89/month or roughly $3/day with an A2 Standard shape. We select the shape and click Select.

On the next screen we select the storage profile. The first option is Standard or Premium disk. If we select Premium SSD our shape gets resized to D2 Standard at a much higher per-month charge. This gives us higher IOPs to storage, which might or might not be required for our deployment. If we default back to Standard to get the lower shape cost, we have the option of locally replicated data, replication between data centers, or read access in a second geo; the price goes from $2.40/100 GB/month to $4.80 to $6.10. We will go for the locally replicated data to minimize cost. We can define a new domain name for this account or accept the default. We can also define a virtual network for this instance. We can select the subnet routing as well as dynamic or static ip address assignment. We are going to accept the defaults for the network.

We do need to open port 1521 by adding an endpoint to this instance. If we scroll down on the network screen we can add a port by adding an endpoint. We might or might not want to open up this port. When we do this it opens up the port to the world. We can tunnel through ssh to access port 1521 but for demonstration purposes we are going to open up this port to the world and potentially look at white listing or ip address restricting access to this instance. We might also want to open port 1158 to see the enterprise manager console, port 80 for application express which is also available in enterprise edition of the database.
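
The ssh tunnel alternative mentioned above might look something like this sketch (the host name and user are illustrative):

ssh -L 1521:localhost:1521 oracle@your-azure-vm.cloudapp.net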

We do have the option of monitoring extensions to look at how things are performing. We are going to skip this option for our experiment but it is interesting to note that you do have additional options for monitoring.

We are not going to explore the diagnostics storage or availability sets because they really don't apply to the database. They are more concerned with the operating system and do not extend into the database. At this point we are ready to launch the instance, so we click Ok. We do get one final review before we provision the instance with the database installed.

When we click Ok we get a message that the instance is deploying. We can look at more detail by clicking on the bell icon at the top and drilling down into the deployment detail.

It is important to note that the database binaries are installed but the database is not configured. There is no listener running. The ORACLE_SID has not been set. We need to run dbca to create a database instance.

Other tutorials on installing an Oracle Database on Azure can be found at

To create a database at this point we need to run the dbca command. When I first tried to execute this command I got a strange error in that the system asked for a password then cleared the screen. This is a known issue relating to line wrap and XTERM configurations. It can be fixed by going into the putty settings and turning off line wrap.

If we look at the command line needed to create a database with dbca we notice that we first need -silent to disable the system from using a default X-Window screen to walk you through the installation. We do not have the X-Window system enabled or the ports configured so we need to install the database from the command line. This is done with the -silent option. The second option is -createDatabase. This tells dbca to create a new database. We also need to define a template to use as the foundation. Fortunately we have pre-defined templates in the /u01/app/oracle/product/12.1.0/dbhome_1/assistants/dbca/templates directory. We will be using the General_Purpose.dbc template. We could use the Data_Warehouse.dbc or create a new one with the New_Database.dbt template. We also need to define the ORACLE_SID and characterset with the -gdbName, -sid, and -characterSet parameters. We finally wrap up the command options with -responseFile set to NO_VALUE. The entire command looks like

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname orcl -sid orcl -responseFile NO_VALUE -characterSet AL32UTF8 -memoryPercentage 30 -emConfiguration LOCAL
This will create a database with ORACLE_SID set to orcl. We add a couple of other parameters to configure enterprise manager to be local rather than a central enterprise manager agent, and to limit the memory that we will use to 30% of the memory on the system.

The database creation agent will configure the database. This step will take 10-15 minutes to get to 100%. Some tutorials on how to use dbca in the silent mode can be found at

There are really no videos on youtube showing an install. In our example we should have included the -pdbName option to create an initial pluggable database as part of our database installation. Once we see the 100%, the database is complete. We then need to set our ORACLE_SID, ORACLE_HOME, PATH, and start the listener so that we can connect to the database. This is done with the commands


oraenv
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
lsnrctl start

From here we can look at the header information to verify that we installed a 12c Enterprise Edition and look at the location of the data files with the following commands


select * from v$version;
select con_id, name from v$datafile order by 1;

We can connect with SQL Developer because we opened up port 1521.

In summary, we can deploy Oracle Database 12c into the Microsoft Azure cloud. We get a partial install when we provision the database from the Marketplace. We still need to go through the dbca configuration as well as spinning up the listener and opening up the right ports for the database. The solution is not PaaS but database on IaaS. We cannot size up the database with a single command. We do not get patching or automated backup; in fact we have not even set up backup at this point. This is similar to the Amazon AWS installation in EC2 but falls short of the database as a service delivered as PaaS in the Oracle Public Cloud. Pricing has the same considerations as the Database on AWS EC2 discussion we had yesterday, with the only difference being the price for the compute and storage instance. We did not need to look at the online calculator because Microsoft does a very good job of presenting pricing options when you are configuring the instance. Again, we are not trying to say that one implementation is better or worse than the other, but to provide information so that you can decide your tradeoffs when selecting one cloud vendor over another.

Links for 2016-05-19 [del.icio.us]

Categories: DBA Blogs

Restore database schema from full expdp backup

Learn DB Concepts with me... - Thu, 2016-05-19 15:01

Import schema from full db expdp backup:

In some situations you might want to restore a single schema from an entire expdp backup. In this example I want to explain how to import a single schema from a full DB expdp backup.





Let's back up the full database using expdp:

F:\>expdp atoorpu directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb.log full=y compression=all parallel=8

Export: Release 11.2.0.4.0 - Production on Tue May 17 14:55:21 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

UDI-28002: operation generated ORACLE error 28002
ORA-28002: the password will expire within 7 days

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "ATOORPU"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Starting "ATOORPU"."SYS_EXPORT_SCHEMA_01":  atoorpu/******** directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb.log full=y compression=all parallel=8

.
.
.
.
.
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
. . exported "SCOTT"."DEPT"                             4.976 KB       4 rows
. . exported "SCOTT"."EMP"                              5.625 KB      14 rows
. . exported "SCOTT"."SALGRADE"                         4.890 KB       5 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
.
.
.
.
.
Master table "ATOORPU"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ATOORPU.SYS_EXPORT_FULL_01 is:
  C:\DPUMP\FULLDB_01.DMP
  C:\DPUMP\FULLDB_02.DMP
  C:\DPUMP\FULLDB_03.DMP
  C:\DPUMP\FULLDB_04.DMP
  C:\DPUMP\FULLDB_05.DMP
  C:\DPUMP\FULLDB_06.DMP
  C:\DPUMP\FULLDB_07.DMP
  C:\DPUMP\FULLDB_08.DMP
Job "ATOORPU"."SYS_EXPORT_FULL_01" successfully completed at Tue May 17 14:27:13 2016 elapsed 0 00:05:36

Restore schema into a temp SCOTT1 schema:


Now that I have completed my full DB backup, I want to restore just the SCOTT schema from the full backup into a temp schema called SCOTT1. For this I need to pass two parameters:
1. schemas = the schema you want to import
2. remap_schema = remaps that schema to the temp SCOTT1 schema


F:\>impdp atoorpu directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb_imp.log schemas=SCOTT remap_schema=SCOTT:SCOTT1

Import: Release 11.2.0.4.0 - Production on Tue May 17 14:57:21 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

UDI-28002: operation generated ORACLE error 28002
ORA-28002: the password will expire within 7 days

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "ATOORPU"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "ATOORPU"."SYS_IMPORT_SCHEMA_01":  atoorpu/******** directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb_imp.log schemas=SCOTT remap_schema=SCOTT:SCOTT1
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "SCOTT1"."DEPT"                             4.976 KB       4 rows
. . imported "SCOTT1"."EMP"                              5.625 KB      14 rows
. . imported "SCOTT1"."SALGRADE"                         4.890 KB       5 rows
. . imported "SCOTT1"."BONUS"                                0 KB       0 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Job "ATOORPU"."SYS_IMPORT_SCHEMA_01" successfully completed at Tue May 17 14:57:34 2016 elapsed 0 00:00:08


Restore SCOTT into the SCOTT schema and replace existing tables:

If you want to restore it into the same SCOTT schema and replace the existing tables, use this:


impdp atoorpu directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb_imp.log schemas=SCOTT TABLE_EXISTS_ACTION=REPLACE


If you want to restore it into the same SCOTT schema but append (add) rows to the existing tables, use this:
impdp atoorpu directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb_imp.log schemas=SCOTT TABLE_EXISTS_ACTION=APPEND
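For reference, TABLE_EXISTS_ACTION accepts SKIP (the default), APPEND, TRUNCATE, and REPLACE. For example, to keep the existing table definitions but reload the rows, the same command with TRUNCATE would look like this:

impdp atoorpu directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb_imp.log schemas=SCOTT TABLE_EXISTS_ACTION=TRUNCATE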
Categories: DBA Blogs

Messed-Up App of the Day: Tables of Numbers

Cary Millsap - Thu, 2016-05-19 13:51
Quick, which database is the biggest space consumer on this system?
Database                  Total Size   Total Storage
-------------------- --------------- ---------------
SAD99PS                    635.53 GB         1.24 TB
ANGLL                        9.15 TB         18.3 TB
FRI_W1                       2.14 TB         4.29 TB
DEMO                         6.62 TB        13.24 TB
H111D16                      7.81 TB        15.63 TB
HAANT                         1.1 TB          2.2 TB
FSU                          7.41 TB        14.81 TB
BYNANK                       2.69 TB         5.38 TB
HDMI7                      237.68 GB       476.12 GB
SXXZPP                     598.49 GB         1.17 TB
TPAA                         1.71 TB         3.43 TB
MAISTERS                   823.96 GB         1.61 TB
p17gv_data01.dbf            800.0 GB         1.56 TB
It’s harder than it looks.

Did you come up with ANGLL? If you didn’t, then you should look again. If you did, then what steps did you have to execute to find the answer?

I’m guessing you did something like I did:
  1. Skim the entire list. Notice that HDMI7 has a really big value in the third column.
  2. Read the column headings. Parse the difference in meaning between “size” and “storage.” Realize that the “storage” column is where the answer to a question about space consumption will lie.
  3. Skim the “Total Storage” column again and notice that the wide “476.12” number I found previously has a GB label beside it, while all the other labels are TB.
  4. Skim the table again to make sure there’s no PB in there.
  5. Do a little arithmetic in my head to realize that a TB is 1000× bigger than a GB, so 476.12 is probably not the biggest number after all, in spite of how big it looked.
  6. Re-skim the “Total Storage” column looking for big TB numbers.
  7. The biggest-looking TB number is 15.63 on the H111D16 row.
  8. Notice the trap on the ANGLL row that there are only three significant digits showing in the “18.3” figure, which looks physically the same size as the three-digit figures “1.24” and “4.29” directly above and below it, but realize that 18.3 (which should have been rendered “18.30”) is an order of magnitude larger.
  9. Skim the column again to make sure I’m not missing another such number.
  10. The answer is ANGLL.
That’s a lot of work. Every reader who uses this table to answer that question has to do it.

Rendering the table differently makes your readers’ (plural!) job much easier:
Database          Size (TB)  Storage (TB)
---------------- --------- ------------
SAD99PS                .64         1.24
ANGLL                 9.15        18.30
FRI_W1                2.14         4.29
DEMO                  6.62        13.24
H111D16               7.81        15.63
HAANT                 1.10         2.20
FSU                   7.41        14.81
BYNANK                2.69         5.38
HDMI7                  .24          .48
SXXZPP                 .60         1.17
TPAA                  1.71         3.43
MAISTERS               .82         1.61
p17gv_data01.dbf       .80         1.56
This table obeys an important design principle:

    The amount of ink it takes to render each number is proportional to its relative magnitude.

I fixed two problems: (i) now all the units are consistent (I have guaranteed this feature by adding a unit label to each header and deleting all labels from the rows); and (ii) I'm showing the same number of significant digits for each number. Now you don't have to do arithmetic in your head, and you can see more easily that the answer is ANGLL, at 18.30 TB.

Let’s go one step further and finish the deal. If you really want to make it as easy as possible for readers to understand your space consumption problem, then you should sort the data, too:
Database          Size (TB)  Storage (TB)
---------------- --------- ------------
ANGLL                 9.15        18.30
H111D16               7.81        15.63
FSU                   7.41        14.81
DEMO                  6.62        13.24
BYNANK                2.69         5.38
FRI_W1                2.14         4.29
TPAA                  1.71         3.43
HAANT                 1.10         2.20
MAISTERS               .82         1.61
p17gv_data01.dbf       .80         1.56
SAD99PS                .64         1.24
SXXZPP                 .60         1.17
HDMI7                  .24          .48
Now your answer comes at a glance. Think back to the comprehension steps that I described above. With the table here, you only need:
  1. Notice that the table is sorted in descending numerical order.
  2. Comprehend the column headings.
  3. The answer is ANGLL.
As a reader, you have executed far less code path in your brain to completely comprehend the data that the author wants you to understand.
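If the report comes out of a query in the first place, the cheapest place to apply both fixes is in the SQL itself: convert everything to one unit, fix the number of decimals, and sort. A minimal sketch, assuming a hypothetical table db_space(database_name, size_bytes, storage_bytes); the names are illustrative, not taken from the app above:

-- db_space is a hypothetical source table; 990.99 fixes two decimal places
select database_name,
       to_char(size_bytes    / power(1024, 4), '990.99') as size_tb,
       to_char(storage_bytes / power(1024, 4), '990.99') as storage_tb
from   db_space
order  by storage_bytes desc;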

Good design is a topic of consideration. And even conservation. If spending 10 extra minutes formatting your data better saves 1,000 readers 2 minutes each, then you’ve saved the world 1,990 minutes of wasted effort.

But good design is also a very practical matter for you personally, too. If you want your audience to understand your work, then make your information easier for them to consume—whether you’re writing email, proposals, reports, infographics, slides, or software. It’s part of the pathway to being more persuasive.

restore single table from expdp full backup

Learn DB Concepts with me... - Thu, 2016-05-19 12:24
Let's take a full backup:


F:\>expdp atoorpu directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb.log full=y compression=all parallel=8


Export: Release 11.2.0.4.0 - Production on Tue May 17 14:55:21 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

UDI-28002: operation generated ORACLE error 28002
ORA-28002: the password will expire within 7 days

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "ATOORPU"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Starting "ATOORPU"."SYS_EXPORT_SCHEMA_01":  atoorpu/******** directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb.log full=y compression=all parallel=8
.
.
.
.
.
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
. . exported "SCOTT1"."DEPT"                             4.976 KB       4 rows
. . exported "SCOTT1"."EMP"                              5.625 KB      14 rows
. . exported "SCOTT1"."SALGRADE"                         4.890 KB       5 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
.
.
.
.
.
Master table "ATOORPU"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ATOORPU.SYS_EXPORT_FULL_01 is:
  C:\DPUMP\FULLDB_01.DMP
  C:\DPUMP\FULLDB_02.DMP
  C:\DPUMP\FULLDB_03.DMP
  C:\DPUMP\FULLDB_04.DMP
  C:\DPUMP\FULLDB_05.DMP
  C:\DPUMP\FULLDB_06.DMP
  C:\DPUMP\FULLDB_07.DMP
  C:\DPUMP\FULLDB_08.DMP
Job "ATOORPU"."SYS_EXPORT_FULL_01" successfully completed at Tue May 17 14:27:13 2016 elapsed 0 00:05:36


****************************************************************************************************************

FROM THIS FULL BACKUP I WOULD LIKE TO RESTORE THE TABLE SCOTT.BONUS INTO THE SAME SCHEMA.

****************************************************************************************************************


F:\>impdp atoorpu directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb_imp.log tables=SCOTT.BONUS


Import: Release 11.2.0.4.0 - Production on Tue May 17 16:04:51 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

UDI-28002: operation generated ORACLE error 28002
ORA-28002: the password will expire within 7 days

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "ATOORPU"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ATOORPU"."SYS_IMPORT_TABLE_01":  atoorpu/******** directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb_imp.log tables=SCOTT.BONUS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "SCOTT"."BONUS"                                 0 KB       0 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Job "ATOORPU"."SYS_IMPORT_TABLE_01" successfully completed at Tue May 17 16:05:05 2016 elapsed 0 00:00:08


****************************************************************************************************************

FROM THIS FULL BACKUP I WOULD LIKE TO RESTORE THE TABLE SCOTT.BONUS FROM SCHEMA SCOTT INTO ANOTHER SCHEMA, SCOTT1.

****************************************************************************************************************


F:\>impdp atoorpu directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb_imp.log tables=SCOTT.BONUS REMAP_SCHEMA=scott:scott1


Import: Release 11.2.0.4.0 - Production on Tue May 17 16:20:31 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

UDI-28002: operation generated ORACLE error 28002
ORA-28002: the password will expire within 7 days

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "ATOORPU"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ATOORPU"."SYS_IMPORT_TABLE_01":  atoorpu/******** directory=dpump dumpfile=fulldb_%U.dmp logfile=fulldb_imp.log tables=SCOTT.BONUS REMAP_SCHEMA=scott:scott1
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "SCOTT1"."BONUS"                                0 KB       0 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Job "ATOORPU"."SYS_IMPORT_TABLE_01" successfully completed at Tue May 17 16:20:45 2016 elapsed 0 00:00:08
Categories: DBA Blogs