
Feed aggregator

I am Oracle ACE

Oracle in Action - Thu, 2016-03-31 23:49


It gives me immense pleasure to share the news that I have been honored with the prestigious Oracle ACE award. I am grateful to the Oracle ACE Program for accepting my nomination. I would like to thank Murali Vallath Sir, who nominated me for this award. I am also thankful to my family members, without whose support and motivation this would not have been possible. Thanks a lot to all the readers of my blog, whose comments and suggestions helped me to learn and share whatever little knowledge I have.

I will do my best to participate in the Oracle ACE program.

 

 




The post I am Oracle ACE appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Catch Me at Collaborate in Las Vegas

David Haimes - Thu, 2016-03-31 23:43

The OAUG Collaborate conference kicks off in Las Vegas on Sunday, April 10th. This conference is organized by Oracle user groups and is packed with content from customers and partners, and Oracle has some sessions too. If you follow this blog you will know I am a big fan of the conference: I learn a lot from our customers and try to give back as much as I can by sharing information and answering questions. The Twitter conversations are already starting; follow #C16LV now, and certainly during conference week, to see what people are talking about.

I will be presenting a session on Accounting Hub Reporting Cloud Service for Oracle E-Business Suite

1:00 PM–2:00 PM Apr 12, 2016 – South Pacific I

Register now for that one; I will leave plenty of time for questions because this cloud service is generating a lot of buzz.

I will also be at the OAUG GL SIG Meeting 

3:00 PM–4:00 PM Apr 13, 2016 – South Seas J

I’m also planning to attend Meet the Experts: Oracle E-Business Suite Financials 

2:15 PM–3:15 PM Apr 12, 2016 – Breakers G

I will help take questions; you can decide whether that qualifies me as an expert or not.

I’ll also be attending Oracle E-Business Suite and Oracle Cloud Solutions: Update, Strategy and Roadmap
Nadia Bendjedou, Sr. Director, Product Strategy, Oracle
Monday, April 11, 2:00 – 3:00 PM – Banyan A

I am combing through the sessions to identify others that I want to attend and learn from. I will share those later, so watch this space.


Categories: APPS Blogs

The Oracle ACE Program : My 10 Year Anniversary

Tim Hall - Thu, 2016-03-31 23:02

On 1st April 2006 I got an email telling me I was an Oracle ACE, so either this is the most drawn-out April Fools’ Day joke ever, or I’ve been in the Oracle ACE Program for 10 years. Wow!

The numbers look something like this.

  • Nearly 21 years working with Oracle technology. (August 1995)
  • Nearly 16 years doing my website. (Original name: 03 July 2000 or Current name: 31 August 2001)
  • Nearly 11 years blogging. (15 June 2005)
  • 10 years on the Oracle ACE Program. (01 April 2006)
  • Nearly 10 months making videos. (6 July 2015)

It’s quite scary when you look back at it.

5 Phases for Migrating to a Cloud Platform

Pythian Group - Thu, 2016-03-31 13:11

Businesses today are increasingly looking to migrate to the cloud to realize lower costs and increase software velocity. They are now asking themselves “when” they should migrate rather than “whether” they should, and with so many vendors and solutions in the market, it can be difficult to take the first steps in creating a cloud strategy.

In our latest on-demand webinar, Chris Presley, Solution Architect at Pythian, and Jim Bowyer, Solution Architect at Azure-Microsoft Canada, discuss a five phase framework for cloud transformations, and the benefits of migrating to the cloud with Microsoft Azure.

The five phase framework helps businesses understand the journey to successfully migrate current applications to a cloud platform. Here is a snapshot of the five phases:

 

1. Assessment: Analysis and Planning

A majority of the time investment should be upfront in assessment and preparation because it sets the stage for the actual development and migration, resulting in faster projects, lower costs, and less risk.

In this phase, businesses want to begin understanding the performance and user characteristics of their applications, and any other additional information that will be important during the transformation, such as regulatory, compliance, and legal requirements.

 

2. Preparation: POC, Validation and Final Road Map

The preparation phase is meant to help understand what the rest of the migration is going to look like.

While beneficial in any project, proof of concepts (POCs) are increasingly simple to create and are a great strength when leveraging the cloud. POCs are used to show some functionality and advantage early so you can get everyone – especially business owners – excited about the migration.

 

3. Build: Construct Infrastructure

Once the expectations around the final migration road map are developed, the infrastructure can be built. Jim notes that it is important to begin thinking about automation during this phase, and Chris agrees, in particular on developing an automated test bed to help smooth out the migration.

 

4. Migration: Execute Transformation

The migration activity for cloud environments is very short. By this stage, if the planning and preparation has been done properly, “flicking the light switch” to the new environment should be seamless and feel like the easiest part.

Chris talks about creating both detailed success and rollback criteria and how they are both crucial for success in the migration phase. Jim mentions that Microsoft Azure provides a variety of tools to help make rollbacks easier and safer.

 

5. Optimization: IaaS Enhancements

Continually transforming and enhancing after the migration is complete is important for increasing software velocity, which is why businesses migrate to the cloud in the first place. While a piece of functionality may not be available today, it may be available tomorrow.

By going back to iterate and take advantage of new functionalities, businesses are able to squeeze out more improvements and create opportunities for differentiation.

 

Learn More

To learn about these five cloud transformation phases in more depth, and how to leverage the cloud with Microsoft Azure, download our free on-demand webinar.


Categories: DBA Blogs

Last Chance! 27 Technical Sessions Delivered by Oracle and Community Experts

OTN TechBlog - Thu, 2016-03-31 11:08

Register today for the last Virtual Technology Summit of this spring - April 5, 2016 - 9:30 - 13:30 BST / 10:30 - 14:30 CEST.

Get insights, hands-on experience, and expert views on a wild mix of topics across Database, Java Development, Middleware and Systems.
We are looking forward to meeting you at the Oracle Technology Network Virtual Technology Summit next Tuesday morning (BST) for a dense mix of technology sessions. View the full agenda and all session abstracts here.

Register today!


Working with MCS Offline? Need some hints and tips?

Angelo Santagata - Thu, 2016-03-31 10:40

Mobile Cloud Service
Another collection of videos on MCS was highlighted to me today, covering how to do offline data sync. Awesome stuff. One note, however: the videos are iOS-based today, but I’m being told that the Android versions are en route... These videos provide an end-to-end discussion of how to work with these capabilities, not just the APIs themselves. Very nice... Also remember to read the docs, which cover both Android and iOS.

Log file parallel write wait graph

Bobby Durrett's DBA Blog - Thu, 2016-03-31 09:50

I got a chance to use my onewait Python-based graph to help with a performance problem. I’m looking at slow write times from the log writer on Thursday mornings. Here is the graph, with the database name erased:

[Graph: log file parallel write wait times]

We are still trying to track down the source of the problem but there seems to be a backup on another system that runs at times that correspond to the spike in log file parallel write wait times. The nice thing about this graph is that it shows you activity on the top and average wait time on the bottom so you can see if the increased wait time corresponds to a spike in activity. In this case there does not seem to be any increase in activity on the problematic database.  But that makes sense if the real problem is contention by a backup on another system.

Anyway, my Python graphs are far from perfect but still helpful in this case.

Bobby

Categories: DBA Blogs

GoldenGate 12.2 Big Data Adapters: part 3 – Kafka

Pythian Group - Thu, 2016-03-31 09:39

This post continues my review of the GoldenGate Big Data adapters, which started with reviews of the HDFS and Flume adapters. Here is a list of all posts in the series:

  1. GoldenGate 12.2 Big Data Adapters: part 1 – HDFS
  2. GoldenGate 12.2 Big Data Adapters: part 2 – Flume
  3. GoldenGate 12.2 Big Data Adapters: part 3 – Kafka

In this article I will try the Kafka adapter and see how it works. First, I think it may be worth reminding readers what Kafka is. Kafka is a streaming publish-subscribe system. One can ask how it is different from Flume; that is a question I asked myself when I first heard about Kafka. I think one of the best comparisons between Flume and Kafka has been made by Gwen Shapira & Jeff Holoman in the blog post Apache Kafka for Beginners. In essence, Kafka is a general-purpose system where most of the control and consumer functionality relies on your own purpose-built consumer programs, whereas in Flume you have pre-created sources and sinks, and can use interceptors for changing data. So, in Kafka you get on the destination exactly what you put in on the source. Kafka and Flume can work together pretty well, and in this article I am going to use them both.
Let’s recall what we have in our configuration. We have an Oracle database running as a source, with Oracle GoldenGate for Oracle capturing changes for one schema in that database. We have OGG 12.2 and an integrated extract on the source. The replication goes directly to trail files on the destination side, where we have OGG for Big Data installed on a Linux box. You can get more details about the installation on source and target from the first post in the series. I’ve kept the configuration as simple as possible, dedicating most attention to the Big Data adapter functionality, which is after all the main point of the article.

Having installed OGG for Big Data, we need to set up the Kafka adapter. As with the other adapters, we copy the sample configuration files from the $OGG_HOME/AdapterExamples/big-data directory.

bash$ cp $OGG_HOME/AdapterExamples/big-data/kafka/* $OGG_HOME/dirdat/

We need to adjust our kafka.props file to define the Kafka topics for data and schema changes (the TopicName and SchemaTopicName parameters), and the gg.classpath for the Kafka and Avro Java classes. I left the rest of the parameters at their defaults, including the format for the changes, which was defined as “avro_op” in the example.

[oracle@sandbox oggbd]$ cat dirprm/kafka.props

gg.handlerlist = kafkahandler
gg.handler.kafkahandler.type = kafka
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
gg.handler.kafkahandler.TopicName =oggtopic
gg.handler.kafkahandler.format =avro_op
gg.handler.kafkahandler.SchemaTopicName=mySchemaTopic
gg.handler.kafkahandler.BlockingSend =false
gg.handler.kafkahandler.includeTokens=false

gg.handler.kafkahandler.mode =tx
#gg.handler.kafkahandler.maxGroupSize =100, 1Mb
#gg.handler.kafkahandler.minGroupSize =50, 500Kb


goldengate.userexit.timestamp=utc
goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE

gg.log=log4j
gg.log.level=INFO

gg.report.time=30sec

gg.classpath=dirprm/:/u01/kafka/libs/*:/usr/lib/avro/*:

javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=ggjava/ggjava.jar

[oracle@sandbox oggbd]$

The next file we have to correct is custom_kafka_producer.properties, which contains information about our running Kafka server and defines some additional parameters, like compression. I left all the parameters unchanged except “bootstrap.servers”, where I put the address of my Kafka service.

[oracle@sandbox oggbd]$ cat dirprm/custom_kafka_producer.properties
bootstrap.servers=sandbox:9092
acks=1
compression.type=gzip
reconnect.backoff.ms=1000

value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
# 100KB per partition
batch.size=102400
linger.ms=10000
[oracle@sandbox oggbd]$

If we plan an initial load through Kafka, we can use something like the following parameter file, which I prepared for a passive replicat:

[oracle@sandbox oggbd]$ cat dirprm/irkafka.prm
-- Trail file for this example is located in "dirdat" directory
-- Command to run passive REPLICAT
-- ./replicat paramfile dirprm/irkafka.prm reportfile dirrpt/irkafka.rpt
SPECIALRUN
END RUNTIME
EXTFILE /u01/oggbd/dirdat/initld
--
TARGETDB LIBFILE libggjava.so SET property=dirprm/kafka.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP ggtest.*, TARGET bdtest.*;
[oracle@sandbox oggbd]$

Before starting any replicat we need to prepare our system to receive the data. Since Kafka itself is a pure streaming system, it cannot pass files to HDFS without another program or connector. In this first case we will be using Kafka to pass data to Flume, and Flume will use its sink to write to HDFS. Please be aware that you need ZooKeeper to manage topics for Kafka. I am not going to discuss setting up ZooKeeper in this article; just assume that we already have it up and running on port 2181.
I used Kafka version 0.9.0.1, downloaded from http://kafka.apache.org/downloads.html. After downloading the archive I unpacked it, slightly corrected the configuration, and started it in standalone mode.

[root@sandbox u01]# wget http://apache.parentingamerica.com/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz
--2016-03-15 15:22:09--  http://apache.parentingamerica.com/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz
Resolving apache.parentingamerica.com... 70.38.15.129
Connecting to apache.parentingamerica.com|70.38.15.129|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 35650542 (34M) [application/x-gzip]
Saving to: `kafka_2.11-0.9.0.1.tgz'

100%[=========================================================================================================================================>] 35,650,542  2.95M/s   in 16s

2016-03-15 15:22:26 (2.10 MB/s) - `kafka_2.11-0.9.0.1.tgz' saved [35650542/35650542]

[root@sandbox u01]# tar xfz kafka_2.11-0.9.0.1.tgz

[root@sandbox u01]# ln -s kafka_2.11-0.9.0.1 kafka

[root@sandbox u01]# cd kafka

[root@sandbox kafka]# vi config/server.properties
[root@sandbox kafka]# grep -v '^$\|^\s*\#' config/server.properties
broker.id=0
listeners=PLAINTEXT://:9092
num.network.threads=3

num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
delete.topic.enable=true
[root@sandbox kafka]#
[root@sandbox kafka]# nohup bin/kafka-server-start.sh config/server.properties > /var/log/kafka/server.log &
[1] 30669
[root@sandbox kafka]# nohup: ignoring input and redirecting stderr to stdout

Now we need to prepare our two topics for the data received from GoldenGate. As you remember, we defined the topic “oggtopic” for our data flow using the gg.handler.kafkahandler.TopicName parameter in our kafka.props file, and the topic “mySchemaTopic” for schema changes. So, let’s create the data topic using Kafka’s supplied scripts:

[root@sandbox kafka]# bin/kafka-topics.sh --zookeeper sandbox:2181 --create --topic oggtopic --partitions 1 --replication-factor 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/kafka_2.11-0.9.0.1/libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Created topic "oggtopic".
[root@sandbox kafka]# bin/kafka-topics.sh --zookeeper sandbox:2181 --list
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/kafka_2.11-0.9.0.1/libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
oggtopic
[root@sandbox kafka]#

As a matter of fact, all the necessary topics will also be created automatically when you start your GoldenGate replicat. You only need to create a topic explicitly if you want to use custom parameters for it, and you also have the option to alter the topic later on to adjust its configuration parameters.
Here is a description of our topics, where one was created manually and the second one was created automatically by the replicat process:

[root@sandbox kafka]# bin/kafka-topics.sh --zookeeper sandbox:2181 --describe --topic oggtopic
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/kafka_2.11-0.9.0.1/libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Topic:oggtopic	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: oggtopic	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
[root@sandbox kafka]# bin/kafka-topics.sh --zookeeper sandbox:2181 --describe --topic mySchemaTopic
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/kafka_2.11-0.9.0.1/libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Topic:mySchemaTopic	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: mySchemaTopic	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
[root@sandbox kafka]#

In our configuration we have only one server and the simplest possible Kafka setup; in a real business case it can be way more complex. Our replicat is going to post data changes to oggtopic, and all schema changes and definitions to mySchemaTopic. We’ve already mentioned that we are going to use Flume to write to HDFS. I’ve prepared Flume with two sources and two sinks to write the changes to the /user/oracle/ggflume HDFS directory. We also had the option to split data and schema changes into different directories if we wished. Here is my configuration for Flume:

[root@sandbox ~]# cat /etc/flume-ng/conf/flume.conf
# Name/aliases for the components on this agent
agent.sources = ogg1 ogg2
agent.sinks = hdfs1 hdfs2
agent.channels = ch1 ch2

#Kafka source
agent.sources.ogg1.type = org.apache.flume.source.kafka.KafkaSource
agent.sources.ogg1.zookeeperConnect = localhost:2181
agent.sources.ogg1.topic = oggtopic
agent.sources.ogg1.groupId = flume
agent.sources.ogg1.kafka.consumer.timeout.ms = 100

agent.sources.ogg2.type = org.apache.flume.source.kafka.KafkaSource
agent.sources.ogg2.zookeeperConnect = localhost:2181
agent.sources.ogg2.topic = mySchemaTopic
agent.sources.ogg2.groupId = flume
agent.sources.ogg2.kafka.consumer.timeout.ms = 100

# Describe the sink
agent.sinks.hdfs1.type = hdfs
agent.sinks.hdfs1.hdfs.path = hdfs://sandbox/user/oracle/ggflume
agent.sinks.hdfs2.type = hdfs
agent.sinks.hdfs2.hdfs.path = hdfs://sandbox/user/oracle/ggflume
#agent.sinks.hdfs1.type = logger

# Use a channel which buffers events in memory
agent.channels.ch1.type = memory
agent.channels.ch1.capacity = 1001
agent.channels.ch1.transactionCapacity = 1000
agent.channels.ch2.type = memory
agent.channels.ch2.capacity = 1001
agent.channels.ch2.transactionCapacity = 1000

# Bind the source and sink to the channel
agent.sources.ogg1.channels = ch1
agent.sources.ogg2.channels = ch2
agent.sinks.hdfs1.channel = ch1
agent.sinks.hdfs2.channel = ch2

As you can see, we have a separate source for each of our Kafka topics, and two sinks pointing to the same HDFS location. The data is going to be written in Avro format.
All preparations are complete: the Kafka server is running with the two topics, and Flume is ready to write data to HDFS. Our HDFS directory is still empty.

[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume/
[oracle@sandbox oggbd]$

Let’s run the passive replicat with our initial data load trail file:

[oracle@sandbox oggbd]$ cd /u01/oggbd
[oracle@sandbox oggbd]$ ./replicat paramfile dirprm/irkafka.prm reportfile dirrpt/irkafka.rpt
[oracle@sandbox oggbd]$

Now we can have a look at the results. We got three files on HDFS: the first two describe the structure of TEST_TAB_1 and TEST_TAB_2 respectively, and the third file contains the data changes, or rather the initial data for those tables. You can see that the schema definitions were put in separate files, while the data changes were posted together in one file.

[oracle@sandbox ~]$ hadoop fs -ls /user/oracle/ggflume/
Found 3 items
-rw-r--r--   1 flume oracle       1833 2016-03-23 12:14 /user/oracle/ggflume/FlumeData.1458749691685
-rw-r--r--   1 flume oracle       1473 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691686
-rw-r--r--   1 flume oracle        981 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691718
[oracle@sandbox ~]$

[oracle@sandbox ~]$ hadoop fs -cat  /user/oracle/ggflume/FlumeData.1458749691685
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable?????k?\??????S?A?%?{
  "type" : "record",
  "name" : "TEST_TAB_1",
  "namespace" : "BDTEST",
  "fields" : [ {
    "name" : "table",
    "type" : "string"
.........................


[oracle@sandbox ~]$ hadoop fs -cat  /user/oracle/ggflume/FlumeData.1458749691686
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable?*
?e????xS?A?%N{
  "type" : "record",
  "name" : "TEST_TAB_2",
  "namespace" : "BDTEST",
  "fields" : [ {
    "name" : "table",
    "type" : "string"
  }, {


...............................

[oracle@sandbox ~]$hadoop fs -cat  /user/oracle/ggflume/FlumeData.1458749691718
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable??????c?C n??S?A?b"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-23T12:14:35.373000(00000000-10000002012
PK_ID1371O62FX&2014-01-24:19:09:20RJ68QYM5&2014-01-22:12:14:30"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-23T12:14:35.405000(00000000-10000002155
PK_ID2371O62FX&2014-01-24:19:09:20HW82LI73&2014-05-11:05:23:23"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-23T12:14:35.405001(00000000-10000002298
PK_ID3RXZT5VUN&2013-09-04:23:32:56RJ68QYM5&2014-01-22:12:14:30"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-23T12:14:35.405002(00000000-10000002441
PK_ID4RXZT5VUN&2013-09-04:23:32:56HW82LI73&2014-05-11:05:23:23"BDTEST.TEST_TAB_2I42016-02-16 19:17:40.76289942016-03-23T12:14:35.408000(00000000-10000002926
PK_IDRND_STR_1ACC_DATE7IJWQRO7T&2013-07-07:08:13:52[oracle@sandbox ~]$

Now we need to create our ongoing replication. Our extract was set up the same way as described in the first post of the series. It is up and running, passing changes to the ./dirdat directory on the replicat side.

GGSCI (sandbox.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     GGEXT       00:00:09      00:00:03


[oracle@sandbox oggbd]$ ls -l dirdat/
total 240
-rw-r-----. 1 oracle oinstall   3028 Feb 16 14:17 initld
-rw-r-----. 1 oracle oinstall 190395 Mar 14 13:00 or000041
-rw-r-----. 1 oracle oinstall   1794 Mar 15 12:02 or000042
-rw-r-----. 1 oracle oinstall  43222 Mar 17 11:53 or000043
[oracle@sandbox oggbd]$

I’ve prepared a parameter file for the Kafka replicat:

[oracle@sandbox oggbd]$ cat dirprm/rkafka.prm
REPLICAT rkafka
-- Trail file for this example is located in "AdapterExamples/trail" directory
-- Command to add REPLICAT
-- add replicat rkafka, exttrail dirdat/or, begin now
TARGETDB LIBFILE libggjava.so SET property=dirprm/kafka.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP GGTEST.*, TARGET BDTEST.*;

[oracle@sandbox oggbd]$

We only need to add and start our rkafka replicat in the Big Data GoldenGate instance.

GGSCI (sandbox.localdomain) 1> add replicat rkafka, exttrail dirdat/or, begin now
REPLICAT added.


GGSCI (sandbox.localdomain) 2> start replicat rkafka

Sending START request to MANAGER ...
REPLICAT RKAFKA starting


GGSCI (sandbox.localdomain) 3> info rkafka

REPLICAT   RKAFKA    Last Started 2016-03-24 11:53   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Process ID           21041
Log Read Checkpoint  File dirdat/or000000000
                     2016-03-24 11:53:17.388078  RBA 0

You may remember that we don’t have a dirdat/or000000000 file in our dirdat directory, so our replicat has to be slightly corrected to work with the proper trail files. I am altering the sequence for my replicat to reflect the actual sequence number of my last trail file.

GGSCI (sandbox.localdomain) 10> stop replicat rkafka

Sending STOP request to REPLICAT RKAFKA ...
Request processed.


GGSCI (sandbox.localdomain) 11> alter replicat rkafka EXTSEQNO 43

2016-03-24 12:03:27  INFO    OGG-06594  Replicat RKAFKA has been altered through GGSCI. Even the start up position might be updated, duplicate suppression remains active in next startup. To override duplicate suppression, start RKAFKA with NOFILTERDUPTRANSACTIONS option.

REPLICAT altered.


GGSCI (sandbox.localdomain) 12> start replicat rkafka

Sending START request to MANAGER ...
REPLICAT RKAFKA starting


GGSCI (sandbox.localdomain) 13> info rkafka

REPLICAT   RKAFKA    Last Started 2016-03-24 12:03   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:12 ago)
Process ID           21412
Log Read Checkpoint  File dirdat/or000000043
                     First Record  RBA 0


GGSCI (sandbox.localdomain) 14>

Let’s change some data:

orclbd> select * from test_tab_2;

           PK_ID RND_STR_1  ACC_DATE
---------------- ---------- ---------------------------
               7 IJWQRO7T   07/07/13 08:13:52


orclbd> insert into test_tab_2 values (8,'TEST_INS1',sysdate);

1 row inserted.

orclbd> commit;

Commit complete.

orclbd>
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume/
Found 5 items
-rw-r--r--   1 flume oracle       1833 2016-03-23 12:14 /user/oracle/ggflume/FlumeData.1458749691685
-rw-r--r--   1 flume oracle       1473 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691686
-rw-r--r--   1 flume oracle        981 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691718
-rw-r--r--   1 flume oracle        278 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268086
-rw-r--r--   1 flume oracle       1473 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268130
[oracle@sandbox oggbd]$

[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458836268086
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable?Q???n?y?1?R#S?j???"BDTEST.TEST_TAB_2I42016-03-24 16:17:29.00033642016-03-24T12:17:31.733000(00000000430000043889
PK_IDRND_STR_1ACC_DATE8TEST_INS1&2016-03-24:12:17:26[oracle@sandbox oggbd]$
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458836268130
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable?6F!?Z?-?ZA8r^S?j?oN{
  "type" : "record",
  "name" : "TEST_TAB_2",
  "namespace" : "BDTEST",

We got our schema definition file and a file with data changes.

orclbd> update test_tab_2 set RND_STR_1='TEST_UPD1' where pk_id=8;

1 row updated.

orclbd> commit;

Commit complete.

orclbd>

[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume/
Found 6 items
-rw-r--r--   1 flume oracle       1833 2016-03-23 12:14 /user/oracle/ggflume/FlumeData.1458749691685
-rw-r--r--   1 flume oracle       1473 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691686
-rw-r--r--   1 flume oracle        981 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691718
-rw-r--r--   1 flume oracle        278 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268086
-rw-r--r--   1 flume oracle       1473 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268130
-rw-r--r--   1 flume oracle        316 2016-03-24 12:28 /user/oracle/ggflume/FlumeData.1458836877420
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458836877420
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable]??u????????qS?t,??"BDTEST.TEST_TAB_2U42016-03-24 16:27:39.00035642016-03-24T12:27:42.177000(00000000430000044052
PK_IDRND_STR_1ACC_DATE8TEST_INS1&2016-03-24:12:17:268TEST_UPD1&2016-03-24:12:17:26[oracle@sandbox oggbd]$

You can see that we only got a file with data changes, since no DDL changes were made. The transactions will be grouped into files according to our Flume parameters, as we discussed in the previous blog post.

You can also see both the old value for the updated record and the new one. Using that information we can reconstruct the changes, but we need to apply some logic to decode them.
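As a very rough sketch of that logic (my own illustration, not something shipped with GoldenGate), the messages on oggtopic can be read back with the Kafka 0.9 consumer API and decoded with Avro. I am assuming here that the table's Avro schema - for example the JSON visible in the FlumeData schema files above - has been saved locally as test_tab_2.avsc, and that each Kafka message carries a single Avro-serialized operation; the class name and group id are placeholders.

import java.io.File;
import java.util.Collections;
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OggTopicReader {
    public static void main(String[] args) throws Exception {
        // Avro schema for BDTEST.TEST_TAB_2, saved from mySchemaTopic (or from the schema file on HDFS).
        Schema schema = new Schema.Parser().parse(new File("test_tab_2.avsc"));
        GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);

        Properties props = new Properties();
        props.put("bootstrap.servers", "sandbox:9092");
        props.put("group.id", "ogg-reader");          // placeholder consumer group
        props.put("auto.offset.reset", "earliest");   // read the topic from the beginning
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("oggtopic"));
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                for (ConsumerRecord<byte[], byte[]> rec : records) {
                    // One Avro-serialized operation per message (assumption for this sketch).
                    BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(rec.value(), null);
                    GenericRecord op = reader.read(null, decoder);
                    System.out.println(op);   // prints the operation record as JSON
                }
            }
        }
    }
}

Printing the GenericRecord shows the operation type together with the before and after images (the exact field names depend on the generated schema), which is enough to reconstruct each change.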

For a delete operation we get the operation flag “F” and the values for the deleted record. Again, there is no schema definition file, since no schema changes were made.

Let’s try some DDL.

orclbd> truncate table test_tab_2;

Table TEST_TAB_2 truncated.

orclbd>
GGSCI (sandbox.localdomain) 4> info rkafka

REPLICAT   RKAFKA    Last Started 2016-03-24 12:10   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Process ID           21803
Log Read Checkpoint  File dirdat/or000043
                     2016-03-24 12:40:05.000303  RBA 45760


GGSCI (sandbox.localdomain) 5>

No new files on HDFS.

orclbd> insert into test_tab_2 select * from test_tab_3;

1 row inserted.

orclbd> commit;

Commit complete.

orclbd>
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume/
Found 8 items
-rw-r--r--   1 flume oracle       1833 2016-03-23 12:14 /user/oracle/ggflume/FlumeData.1458749691685
-rw-r--r--   1 flume oracle       1473 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691686
-rw-r--r--   1 flume oracle        981 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691718
-rw-r--r--   1 flume oracle        278 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268086
-rw-r--r--   1 flume oracle       1473 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268130
-rw-r--r--   1 flume oracle        316 2016-03-24 12:28 /user/oracle/ggflume/FlumeData.1458836877420
-rw-r--r--   1 flume oracle        278 2016-03-24 12:35 /user/oracle/ggflume/FlumeData.1458837310570
-rw-r--r--   1 flume oracle        277 2016-03-24 12:42 /user/oracle/ggflume/FlumeData.1458837743709
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458837743709
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable*?2??????>iS??\??"BDTEST.TEST_TAB_2I42016-03-24 16:42:04.00020042016-03-24T12:42:06.774000(00000000430000045760
PK_IDRND_STR_1ACC_DATE7IJWQRO7T&2013-07-07:08:13:52[oracle@sandbox oggbd]$

Again, we got only a file with data changes. I tried to compare the file we got for the previous insert with the one for the insert after the truncate, but couldn’t find a difference except in the binary part of the Avro file. It will require additional investigation, and maybe clarification from Oracle. In the current state it looks like it would be easy to miss a truncate command for a table on the destination side.

Let us alter the table and add a column to it.

orclbd> alter table test_tab_2 add test_col varchar2(10);
Table TEST_TAB_2 altered.

orclbd>

We do not get any new files with the new table definition until we perform some DML on the table. Both files (the new schema definition and the data changes) appear after we insert, delete or update any rows.

orclbd> insert into test_tab_2 values (8,'TEST_INS1',sysdate,'TEST_ALTER');

1 row inserted.

orclbd> commit;

Commit complete.

orclbd>
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume/
Found 10 items
...................................................
-rw-r--r--   1 flume oracle       1654 2016-03-24 12:56 /user/oracle/ggflume/FlumeData.1458838582020
-rw-r--r--   1 flume oracle        300 2016-03-24 12:56 /user/oracle/ggflume/FlumeData.1458838584891
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458838582020
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable-??ip??/?w?S??/{
  "type" : "record",
  "name" : "TEST_TAB_2",
  "namespace" : "BDTEST",
................
        "name" : "TEST_COL",
        "type" : [ "null", "string" ],
        "default" : null
.................

[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458838584891
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritabletr?V?_$???:2??S??/w?"BDTEST.TEST_TAB_2I42016-03-24 16:56:04.00026042016-03-24T12:56:08.370000(00000000430000047682
PK_IDRND_STR_1ACC_DATETEST_COL8TEST_INS1&2016-03-24:12:56:01TEST_ALTER

I used JMeter to generate some load, and the setup could easily, with almost no delays, replicate 225 transactions per second (30% inserts, 80% updates). It was not a test of Kafka or Flume, which could sustain far more load, but rather of the combination of GoldenGate with the Big Data infrastructure. It was stable, without any errors. I do understand that the current test is very far from any potential production workflow, which may include Oracle Database (or any other RDBMS) + GoldenGate + Kafka + Storm + …, and maybe the final data format will be completely different. So far the adapters are looking good and doing the job. In the next post I will look at the HBase adapter. Stay tuned.

Categories: DBA Blogs

Log Buffer #467: A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2016-03-31 08:40

This Log Buffer Edition brings some of the top blog posts from Oracle, SQL Server and MySQL.

Oracle:

An Exadata quarter rack has two database servers and three storage cells. In a typical setup, such a system would have three ASM disk groups, say DATA, RECO and DBFS_DG. Usually the disk group DATA would be high redundancy and the other two disk groups would be normal redundancy.

Best practice for calling web services from Oracle Process Cloud Service

2 Min Tech Tips at Oracle OpenWorld: Are You Ready for Your Close-Up?

Are your SQL Plus scripts going to ‘ell ?

New ways of input still on the verge of the enterprise

SQL Server:

Why Every SQL Server Installation Should Be a Cluster

When AUTO_UPDATE_STATISTICS Doesn’t Happen

Fixing Maintenance Plan Error code 0x534

SQL Server Table Smells

Some companies have been slow to acquire big data applications. They discovered that modern hardware platforms and database management systems were more than adequate for most of their business analytics needs.

MySQL:

Galera Cluster and Docker Swarm

MariaDB 10.1.13 and Connector/J 1.3.7 now available

Why an App-Centric View Isn’t Enough

How to Install and Configure MySQL Cluster on CentOS 7

Invalid datetime when converting to timestamp

Categories: DBA Blogs

New Oracle Cloud Offering – Indexing as a Service (IDXaaS) (I Pity The Fool)

Richard Foote - Thu, 2016-03-31 07:09
This of course is an April Fools joke. Sorry !! A very exciting announcement !! I’ve recently been promoted within Oracle Corporation to lead their brand new Oracle Cloud offering “Indexing as a Service” (IDXaaS) team, based on my previous work and experience in the indexing space. Yes, I’m both thrilled and excited to be […]
Categories: DBA Blogs

ChitChat: The Importance of BI Integrations

Rittman Mead Consulting - Thu, 2016-03-31 04:00

A user’s workflow shouldn’t change to accommodate a new tool. A new tool should fill a gap in the current workflow and help streamline the user’s process. An application without a clearly defined scope eventually overlaps with existing solutions, creating confusion and distress among users. It takes both time and effort to clarify the appropriate situations to use the application, reconcile different use cases and approaches, and resolve incorrect uses. We designed ChitChat with appropriate scopes in mind, implementing key integrations, to fit seamlessly into existing workflows.

What exactly do we mean by “scope?”

Let’s look at an example with JIRA. JIRA owns the complete ticketing process, meaning tickets are stored and maintained by the tool. Using a competing ticket solution, such as Trello, for the same purpose within the organization will cause havoc among users. However, JIRA tickets are still extremely useful outside of the JIRA application. They can be linked to and displayed inside other applications, but they are still maintained by JIRA itself.

If you can recognize that the ticketing management should be handled solely by JIRA, but exposure of those tickets outside of the tool is also important, then you understand the correct scope of the application. The scope of the application does not determine where the context of an application is useful. It only describes what section of a workflow the application has absolute control over. The question isn’t “Where should we be able to view the information?” The question is “Where should the content be maintained?”

ChitChat respects the appropriate scopes of neighboring applications and allows the flexibility to continue maintaining the scopes of these applications. With integrations to Atlassian JIRA and Confluence and Salesforce Chatter, the information you need is available where you need it, without infringing on your existing workflow.

Examples of Integrations

Let’s look at some examples. As we use a BI dashboard, we stumble upon an issue. Using ChitChat, the issue can be identified and a conversation can be made about temporarily working around the problem. However, the IT team uses JIRA to accept issues and resolves them as appropriate. We obviously want the IT team to know of this issue, so we must create a ticket in JIRA as well. Rather than going to JIRA and creating a ticket manually, we can simply export the initial annotation to JIRA. The workflow remains generally identical, but now requires less time and effort. And this comes with the added benefit of the ticket pointing directly to the location of the issue on the dashboard.

In another instance, let’s say our dashboard has some confusing calculations on it, some of which are not immediately recognizable. The formulas used, and the reasons to use such formulas, are available in Atlassian Confluence for us to view. However, not all users have a Confluence account, and even fewer have access to the document. We could copy and paste the calculations as a document using ChitChat, but now we have two separate instances of the same information. If the calculations are changed, we must ensure both locations are accurate. Alternatively, ChitChat can sync directly with Confluence and pull a page into the application. The page guarantees accuracy by consistently pulling new updates from Confluence, as well as pushing updates to Confluence if the content is changed in ChitChat.

These approaches allow the JIRA ticket and Confluence document to be maintained in the appropriate location, while also being available in a useful context. ChitChat does not encroach on the purposes of other applications. ChitChat offers integrations that seamlessly enhance your workflow without making it convoluted. Our tool is designed specifically to fill the missing pieces in your BI workflow, allowing for a seamless transition between analysis and communication.

To learn more about ChitChat’s many commentary features, or to request a demo, click here.

The post ChitChat: The Importance of BI Integrations appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Save MapR Streams messages into MapR DB JSON

Tugdual Grall - Thu, 2016-03-31 01:06
Read this article on my new blog. In this article you will learn how to create a MapR Streams consumer that saves all the messages into a MapR-DB JSON table. Install and run the sample MapR Streams application: the steps to install and run the applications are the same as the ones defined in the following article: MapR Streams application. Once you have the default producer and ...

Oracle Security and Electronics

Pete Finnigan - Wed, 2016-03-30 13:50

How do Oracle security and electronics mix together? Well, I started my working life in 1979 as an apprentice electrician in a factory here in York, England, where I live. The factory designed and built trains for the national....[Read More]

Posted by Pete On 09/07/15 At 11:24 AM

Categories: Security Blogs

New Conference Speaking Dates Added

Pete Finnigan - Wed, 2016-03-30 13:50

In the last few years I have not done as many conference speaking dates as I used to. This is simply because when offered they usually clashed with pre-booked work. I spoke for the UKOUG in Dublin last year and....[Read More]

Posted by Pete On 06/07/15 At 09:40 AM

Categories: Security Blogs

Gluent Demo Video Launch

Tanel Poder - Wed, 2016-03-30 12:58

Although we are still in stealth mode (kind of), due to the overwhelming number of requests for information, we decided to publish a video about what we do :)

It’s a short 5-minute video, just click on the image below or go straight to http://gluent.com:

Gluent Demo video

And this, by the way, is just the beginning.

Gluent is getting close to 20 people now, with distributed teams in the US and UK – and we are still hiring!

 

 

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)


SXSWi Recap

Oracle AppsLab - Wed, 2016-03-30 08:05

Austin: a beautiful city with a river crossing downtown, a music scene, a young population, cycling, brisket, and the home of SXSW, a big multicultural conference for all tastes: Film, Interactive and Music.


This was my first time attending the conference, but Noel (@noelportugal) is a year-to-year attendee. It’s well known that this conference is not only a springboard for small companies and startups to show the whole world what they are cooking up, but also big exposure for new services, products, trends, you name it; that’s why we are very interested in this kind of conference, which is very aligned with our team’s spirit.

VR everywhere.

I mean it.

Since Google I/O 2014, I’ve been following the steps toward VR and AR. At that time, they released Google Cardboard, inexpensive goggles for visualizing VR content, and Project Tango for AR. Yes, I know you can argue VR has been around for quite a long time, but I believe they exposed the right development tools and a cheap way to develop and consume the technology, so a lot of people got engaged. However, others remained very skeptical about the use cases.


But now, after two years, guess what? VR is on everyone’s lips, and SXSW was no exception.


I have to say, I’m very impressed by how many companies have adopted this technology so fast. Of course, we all saw this wave coming with announcements of products like Oculus Rift, HTC Vive, Noon VR, Microsoft HoloLens and so on. As an emerging technology team, we were already prepared to be hit by the wave.

I still can’t get used to seeing people with a headset over their eyes and headphones on, 100% isolated from reality. I tried most of the VR demos presented and my brain/body is still not prepared for many VR experiences; I had a headache and felt weird after so many demos.

I could also see people with red marks all around their faces from wearing the headsets all day. Even so, this helped me analyze and conclude that pretty much all the demos follow the same use case: advertising and promoting products.

It’s really interesting that retail and product companies are investing in this technology to get more buyers and to better convey how it feels to hold their product. This can be applied, for example, to automobiles, houses, travel agencies, etc. The funny thing is that this technology is sometimes combined with motion for a complete experience.

Note: don’t ever try a selfie while wearing a VR headset; it’s almost impossible.

Change ADF BC Data Update Locking with FOR UPDATE WAIT

Andrejus Baranovski - Wed, 2016-03-30 07:43
Each time data is changed and updated through ADF BC, before posting changes to the DB, a SQL query with FOR UPDATE NOWAIT is generated and executed. If another process has locked the row to be updated, or another user is updating it at the same moment, an error is generated and the update is stopped. There might be use cases when you would like to wait for a certain period of time until the row is unlocked, and then commit the row changes. This is especially true if a 3rd-party (PL/SQL) process is updating rows and you have defined a change indicator attribute in ADF BC (see my previous post - ADF BC Version Number and Change Indicator to Track Changed Rows).

We can change the default behavior: instead of requesting an immediate lock, we can wait for a period of time. If the lock becomes available during this period, the session acquires it; if the row remains locked, an error is returned. Instead of the default FOR UPDATE NOWAIT, we can generate FOR UPDATE WAIT (with a time period in seconds).

To override the default behavior, we need to provide a custom SQLBuilder class. This can be registered in the Application Module configuration through the jbo.SQLBuilder property:
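As a rough guide (worth verifying against your JDeveloper version), this typically means setting jbo.SQLBuilder to Custom and pointing the companion jbo.SQLBuilderClass property at the fully qualified name of the builder class, for example the hypothetical model.extension.WaitLockSQLBuilder sketched below.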


The class must extend OracleSQLBuilderImpl and override the getSqlVariantLockTrailer() method. My sample application is implemented to return FOR UPDATE WAIT 30:
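A minimal sketch of what such a class might look like (the package and class name are hypothetical, and the method is assumed to return the lock clause as a plain String, so check the exact signature in your version of OracleSQLBuilderImpl):

package model.extension;

import oracle.jbo.server.OracleSQLBuilderImpl;

// Custom SQL builder that makes ADF BC wait up to 30 seconds for a row lock
// instead of failing immediately with FOR UPDATE NOWAIT.
public class WaitLockSQLBuilder extends OracleSQLBuilderImpl {

    // Public no-arg constructor so the framework can instantiate the builder.
    public WaitLockSQLBuilder() {
        super();
    }

    // Lock trailer appended to the locking SELECT generated by ADF BC.
    @Override
    public String getSqlVariantLockTrailer() {
        return "FOR UPDATE WAIT 30";
    }
}

The value 30 could just as well be read from a configuration property if different wait times are needed per environment.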


We can do a test. We can simulate a PL/SQL lock by executing a SQL query with FOR UPDATE from SQL Developer:
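Any uncommitted SELECT ... FOR UPDATE against the target row will do here; for example (with a purely hypothetical table and key), select * from employees where employee_id = 101 for update, left without a commit, holds the row lock until that session commits or rolls back.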


Try to update the same row from ADF BC with the default behavior and an error will be returned - "Failed to lock the record, another user holds the lock". This is expected, because the row remains locked:


With the overridden SQL builder, FOR UPDATE WAIT 30 is generated. It waits up to 30 seconds; as soon as the lock is removed, the current session acquires the lock and the row data is updated:


Download sample application - LockUpdateApp.zip.

Logstash and Oracle Database

Kris Rice - Wed, 2016-03-30 06:48
For anyone that doesn't know what Logstash is, head over to http://logstash.net/ and watch the video on the home page. Robin Moffatt has done a great blog post on the full ELK stack with OBIEE. This blog post is a first cut at an input filter for Logstash to gather metrics, logs, or anything that can be expressed in SQL. A huge caution that this is a 0.01 attempt and will get better as

Auto DDL: delete obsolete columns from table

Darwin IT - Wed, 2016-03-30 05:55
A quick one. In the past I used to generate DDL based on queries like the following, but I find myself re-inventing them again and again. So, to have them saved for my offspring, here's one for deleting obsolete columns as generated when importing an Excel sheet in SQL Developer:


declare
  l_schema_name varchar2(30) := 'MY_SCHEMA';
  l_table_name  varchar2(30) := 'A_TABLE';
  cursor c_cols is
    select column_name
    from   all_tab_columns col
    where  col.table_name = l_table_name
    and    col.owner = l_schema_name
    and    col.column_name like 'COLUMN%';
begin
  for r_cols in c_cols loop
    execute immediate 'alter table '||l_schema_name||'.'||l_table_name||' drop column '||r_cols.column_name;
  end loop;
end;
/

And here's one to generate a check constraint on all IND_* columns of a table:

declare
  l_schema_name         varchar2(30) := 'MY_SCHEMA';
  l_table_name          varchar2(30) := 'A_TABLE';
  l_constraint_name_pfx varchar2(30) := 'XXX_ALIAS_CHK';
  l_idx                 pls_integer := 1;
  cursor c_cols is
    select column_name
    from   all_tab_columns col
    where  col.table_name = l_table_name
    and    col.owner = l_schema_name
    and    col.column_name like 'IND_%';
begin
  for r_col in c_cols loop
    execute immediate 'ALTER TABLE '||l_schema_name||'.'||l_table_name||' ADD CONSTRAINT '||l_constraint_name_pfx||l_idx||' CHECK ('||r_col.column_name||' in (''J'',''N'')) ENABLE';
    l_idx := l_idx + 1;
  end loop;
end;
/

New OTN Article – OBIEE Performance Analytics: Analysing the Impact of Suboptimal Design

Rittman Mead Consulting - Wed, 2016-03-30 02:09

I’m pleased to have recently had my first article published on the Oracle Technology Network (OTN). You can read it in its full splendour and glory(!) over there, but I thought I’d give a bit of background to it and the tools demonstrated within.

OBIEE Performance Analytics Dashboards

One of the things that we frequently help our clients with is reviewing and optimising the performance of their OBIEE systems. As part of this we’ve built up a wealth of experience in the kind of suboptimal design patterns that can cause performance issues, as well as how to go about identifying them empirically. Getting a full stack view on OBIEE performance behaviour is key to demonstrating where an issue lies, prior to being able to resolve it and proving it fixed, and for this we use the Rittman Mead OBIEE Performance Analytics Dashboards.

OBIEE Performance Analytics

A common performance issue that we see is analyses and/or RPDs built in such a way that the BI Server inadvertently returns many gigabytes of data from the database and in doing so often has to dump out to disk whilst processing it. This can create large NQS_tmp files, impacting the disk space available (sometimes critically), and the disk I/O subsystem. This is the basis of the OTN article that I wrote, and you can read the full article on OTN to find out more about how this can be a problem and how to go about resolving it.

OBIEE implementations that cause heavy use of temporary files on disk by the BI Server can result in performance problems. Until recently, this was really difficult to track in OBIEE because of the transitory nature of the files: by the time the problem had been observed (for example, disk full messages), the query responsible had moved on and the temporary files had been deleted. At Rittman Mead we have developed lightweight diagnostic tools that collect, amongst other things, the amount of temporary disk space used by each of the OBIEE components.


This can then be displayed as part of our Performance Analytics Dashboards, and analysed alongside other performance data on the system such as which queries were running, disk I/O rates, and more:

OBIEE Temp Disk Usage

Because the Performance Analytics Dashboards are built in a modular fashion, it is easy to customise them to suit specific analysis requirements. In this next example you can see performance data from Oracle being analysed by OBIEE dashboard page in order to identify the cause of poorly-performing reports:

OBIEE Database Performance Analysis

We’ve put online a set of videos here demonstrating the Performance Analytics Dashboards, and explaining in each case how they can help you quickly and accurately diagnose OBIEE performance problems.

You can read more about our Performance Analytics offering here, or get in touch to find out more!

The post New OTN Article – OBIEE Performance Analytics: Analysing the Impact of Suboptimal Design appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing