
Feed aggregator

Building a MariaDB Galera Cluster with Docker

Pythian Group - Thu, 2014-08-28 08:13

There’s been a lot of talk lately about Docker for running processes in isolated userspace (or the cloud, for that matter). Virtualization is a great way to compartmentalise applications and processes; however, the overhead of virtualization isn’t always worth it – in fact, without directly attached storage, IO degradation can seriously impact performance. The solution? Perhaps Docker, with its easy-to-use CLI and its lightweight implementation of cgroups and kernel namespaces.

Without further ado, I present a step-by-step guide on how to build a MariaDB 5.5 Galera Cluster on Ubuntu 14.04. The same guide can probably be applied to MariaDB versions 10+; however, I’ve stuck with 5.5 since the latest version of MariaDB Galera Cluster is still in beta.

So we start off by modifying the “ufw” firewall policy to accept forwarded packets, and then perform a “ufw” service restart for good measure:

root@workstation:~# vi /etc/default/ufw

DEFAULT_FORWARD_POLICY="ACCEPT"

root@workstation:~# service ufw restart
ufw stop/waiting
ufw start/running

I’m assuming you already have Docker installed – it is available as a package in the Ubuntu repositories and also in the Docker repositories (see http://docs.docker.com/installation/ubuntulinux/). You’ll also need LXC installed (“apt-get install lxc” should suffice) in order to attach to the Linux Containers / Docker containers.
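For reference, installing both on Ubuntu 14.04 can be as simple as the following (a minimal sketch using the stock Trusty packages; the “docker.io” package name is the one shipped in the Ubuntu repositories – the Docker-maintained repository ships the engine under a different package name):

root@workstation:~# apt-get -q -y install docker.io
root@workstation:~# apt-get -q -y install lxc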

The next step is pulling the Docker Ubuntu repository in order to customize an image for our purposes:

root@workstation:~# docker pull ubuntu
Pulling repository ubuntu
c4ff7513909d: Pulling dependent layers 
3db9c44f4520: Download complete 
c5881f11ded9: Download complete 
c4ff7513909d: Download complete 
463ff6be4238: Download complete 
822a01ae9a15: Download complete 
75204fdb260b: Download complete 
511136ea3c5a: Download complete 
bac448df371d: Download complete 
dfaad36d8984: Download complete 
5796a7edb16b: Download complete 
1c9383292a8f: Download complete 
6cfa4d1f33fb: Download complete 
f127542f0b61: Download complete 
af82eb377801: Download complete 
93c381d2c255: Download complete 
3af9d794ad07: Download complete 
a5208e800234: Download complete 
9fccf650672f: Download complete 
fae16849ebe2: Download complete 
b7c6da90134e: Download complete 
1186c90e2e28: Download complete 
0f4aac48388f: Download complete 
47dd6d11a49f: Download complete 
f6a1afb93adb: Download complete 
209ea56fda6d: Download complete 
f33dbb8bc20e: Download complete 
92ac38e49c3e: Download complete 
9942dd43ff21: Download complete 
aa822e26d727: Download complete 
d92c3c92fa73: Download complete 
31db3b10873e: Download complete 
0ea0d582fd90: Download complete 
cc58e55aa5a5: Download complete

After the download is complete, we can check the Ubuntu images available for customization with the following command:

root@workstation:~# docker images
 REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
 ubuntu              14.04.1             c4ff7513909d        12 days ago         225.4 MB
 ubuntu              trusty              c4ff7513909d        12 days ago         225.4 MB
 ubuntu              14.04               c4ff7513909d        12 days ago         225.4 MB
 ubuntu              latest              c4ff7513909d        12 days ago         225.4 MB
 ubuntu              utopic              75204fdb260b        12 days ago         230.1 MB
 ubuntu              14.10               75204fdb260b        12 days ago         230.1 MB
 ubuntu              precise             822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.04               822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.04.5             822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.10               c5881f11ded9        9 weeks ago         172.2 MB
 ubuntu              quantal             c5881f11ded9        9 weeks ago         172.2 MB
 ubuntu              13.04               463ff6be4238        9 weeks ago         169.4 MB
 ubuntu              raring              463ff6be4238        9 weeks ago         169.4 MB
 ubuntu              13.10               195eb90b5349        9 weeks ago         184.7 MB
 ubuntu              saucy               195eb90b5349        9 weeks ago         184.7 MB
 ubuntu              lucid               3db9c44f4520        4 months ago        183 MB
 ubuntu              10.04               3db9c44f4520        4 months ago        183 MB

Now that we’ve downloaded our images, let’s create a custom Dockerfile for our MariaDB / Galera image. I’ve added a brief description for each line of the file:

root@workstation:~# vi Dockerfile
 # MariaDB Galera 5.5.39 / Ubuntu 14.04 64bit
 FROM ubuntu:14.04
 MAINTAINER Pythian Nikolaos Vyzas <vyzas@pythian.com>

 RUN echo "deb http://archive.ubuntu.com/ubuntu trusty main universe" > /etc/apt/sources.list # add the universe repo
 RUN apt-get -q -y update # update apt
 RUN apt-get -q -y install software-properties-common # install software-properties-common for key management
 RUN apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db # add the key for Mariadb Ubuntu repos
 RUN add-apt-repository 'deb http://ftp.cc.uoc.gr/mirrors/mariadb/repo/5.5/ubuntu trusty main' # add the MariaDB repository for 5.5
 RUN apt-get -q -y update # update apt again
 RUN echo mariadb-galera-server-5.5 mysql-server/root_password password root | debconf-set-selections # configure the default root password during installation
 RUN echo mariadb-galera-server-5.5 mysql-server/root_password_again password root | debconf-set-selections # confirm the password (as in the usual installation)
 RUN LC_ALL=en_US.utf8 DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::='--force-confnew' -qqy install mariadb-galera-server galera mariadb-client # install the necessary packages
 ADD ./my.cnf /etc/mysql/my.cnf # upload the locally created my.cnf (obviously this can go into the default MariaDB path)
 RUN service mysql restart # startup the service - this will fail since the nodes haven't been configured on first boot
 EXPOSE 3306 4444 4567 4568 # open the ports required to connect to MySQL and for Galera SST / IST operations

We’ll also need our base configuration for MariaDB. I’ve included the base configuration variables for Galera – obviously there are more, but these are good enough for starting up the service:

root@workstation:~# vi my.cnf
 [mysqld]
 wsrep_provider=/usr/lib/galera/libgalera_smm.so
 wsrep_cluster_address=gcomm://
 wsrep_sst_method=rsync
 wsrep_cluster_name=galera_cluster
 binlog_format=ROW
 default_storage_engine=InnoDB
 innodb_autoinc_lock_mode=2
 innodb_locks_unsafe_for_binlog=1

So far so good: we have Docker installed, and our Dockerfile as well as our “my.cnf” file are ready to go. Now it’s time to build our Docker image, check that the image exists, and start up three separate containers, one for each of our Galera nodes:

root@workstation:~# docker build -t ubuntu_trusty/mariadb-galera .
root@workstation:~# docker images |grep mariadb-galera
 ubuntu_trusty/mariadb-galera   latest              afff3aaa9dfb        About a minute ago   412.5 MB
docker run --name mariadb1 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
docker run --name mariadb2 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
docker run --name mariadb3 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash

We’ve started up our containers; now let’s verify that they are in fact up and retrieve the information we need to connect. We’ll need two pieces of information, the IP address and the full container ID, which can be retrieved using a combination of the “docker ps” and “docker inspect” commands:

root@workstation:~# docker ps
 CONTAINER ID        IMAGE                                 COMMAND             CREATED             STATUS              PORTS                                    NAMES
 b51e74933ece        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb3
 03109c7018c0        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb2
 1db2a9a520f8        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb1
root@workstation:~# docker ps |cut -d' ' -f1 |grep -v CONTAINER | xargs docker inspect |egrep '"ID"|IPAddress'
 "ID": "b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0",
 "IPAddress": "172.17.0.3",
 "ID": "03109c7018c03ddd8448746437346f080a976a74c3fc3d15f0191799ba5aae74",
 "IPAddress": "172.17.0.4",
 "ID": "1db2a9a520f85d2cef6e5b387fa7912890ab69fc0918796c1fae9c1dd050078f",
 "IPAddress": "172.17.0.2",

Time to use lxc-attach to connect to our containers using the full container ID, add the mounts to “/etc/mtab” to keep them MariaDB friendly, and customize the “gcomm://” address as we would for a usual Galera configuration (the container ID is generated when the instance fires up, so make sure to use your own in the following commands):

root@workstation:~# lxc-attach --name b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0
 root@b51e74933ece:~# cat /proc/mounts > /etc/mtab
 root@b51e74933ece:~# service mysql restart
 * Starting MariaDB database mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.

root@b51e74933ece:~# vi /etc/mysql/my.cnf
 #wsrep_cluster_address=gcomm://
 wsrep_cluster_address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4

root@b51e74933ece:~# exit
 exit

root@workstation:~# lxc-attach --name 03109c7018c03ddd8448746437346f080a976a74c3fc3d15f0191799ba5aae74
 root@03109c7018c0:~# cat /proc/mounts > /etc/mtab
 root@03109c7018c0:~# vi /etc/mysql/my.cnf
 #wsrep_cluster_address=gcomm://
 wsrep_cluster_address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4
 root@03109c7018c0:~# service mysql start
 * Starting MariaDB database server mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.
 root@03109c7018c0:~# mysql -uroot -proot
 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 Your MariaDB connection id is 30
 Server version: 5.5.39-MariaDB-1~trusty-wsrep mariadb.org binary distribution, wsrep_25.10.r4014

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_cluster%';
 +--------------------------+--------------------------------------+
 | Variable_name            | Value                                |
 +--------------------------+--------------------------------------+
 | wsrep_cluster_conf_id    | 2                                    |
 | wsrep_cluster_size       | 2                                    |
 | wsrep_cluster_state_uuid | 42bc375b-2bc0-11e4-851c-1a7627c0624c |
 | wsrep_cluster_status     | Primary                              |
 +--------------------------+--------------------------------------+
 4 rows in set (0.00 sec)

MariaDB [(none)]> exit
 Bye
 root@03109c7018c0:~# exit
 exit

root@workstation:~# lxc-attach --name 1db2a9a520f85d2cef6e5b387fa7912890ab69fc0918796c1fae9c1dd050078f
 root@1db2a9a520f8:~# cat /proc/mounts > /etc/mtab
 root@1db2a9a520f8:~# vi /etc/mysql/my.cnf
 root@1db2a9a520f8:~# service mysql start
 * Starting MariaDB database server mysqld                                                                                                                                                     [ OK ]
 root@1db2a9a520f8:~# mysql -uroot -proot
 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 Your MariaDB connection id is 34
 Server version: 5.5.39-MariaDB-1~trusty-wsrep mariadb.org binary distribution, wsrep_25.10.r4014

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_cluster%';
 +--------------------------+--------------------------------------+
 | Variable_name            | Value                                |
 +--------------------------+--------------------------------------+
 | wsrep_cluster_conf_id    | 3                                    |
 | wsrep_cluster_size       | 3                                    |
 | wsrep_cluster_state_uuid | 42bc375b-2bc0-11e4-851c-1a7627c0624c |
 | wsrep_cluster_status     | Primary                              |
 +--------------------------+--------------------------------------+
 4 rows in set (0.00 sec)

MariaDB [(none)]> exit
 Bye
 root@1db2a9a520f8:~# exit
 exit

Now be honest… Wasn’t that easier than creating multiple virtual machines and configuring the OS for each?

Enjoy your new MariaDB Galera Cluster and happy Dockering!

Categories: DBA Blogs

How to Configure an Azure Point-to-Site VPN – Part 3

Pythian Group - Thu, 2014-08-28 07:58

This blog post is the last of the series, and it will demonstrate how to configure a Point-to-Site VPN step by step. In my first blog post, I demonstrated how to configure a virtual network and a dynamic routing gateway. This was followed by another post about how to deal with the certificate. Today we will learn how to configure the VPN client.

CONFIGURE THE VPN CLIENT
1. In the Management Portal, navigate to the virtual network page; in the “quick glance” section you have the links to download the VPN package.

Choose the one appropriate to your architecture (x86 or x64).


2. After the download completes successfully, copy the file to your servers and execute the setup.

3. Click Yes when it asks if you want to install the VPN, and let it run.

4. After successful installation, it will be visible in your network connections.

5. In Windows 2012 you can click the network icon in the notification area (close to the clock), and it will show the right-side bar with all the network connections. You can connect from there.
The other option is to right-click the connection in the “Network Connections” window (previous step) and click “Connect / Disconnect”.

6. A window will be shown; click Connect.


7. Now check the box next to “Do not show this message again for this Connection” and click “Continue”.

If everything is ok, the connection will succeed.


8. To confirm that you are connected, execute the command “ipconfig /all” on the command line; you should see an entry for the VPN with an IP assigned.


9. After a while, you will also be able to see the connection in your vNet dashboard. As you can see in the image, you have data in/out in the vNet.


After this last part, you are done with the point-to-site VPN configuration. You can test the connectivity by executing the “ping” command and also by using the “telnet” client to check whether a specific port is open and reachable.
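For example, from the connected client the checks can be as simple as this (a quick sketch; the 10.0.0.4 address and port 1433 are placeholders for one of your Azure VMs and the service you want to reach):

C:\> ipconfig /all
C:\> ping 10.0.0.4
C:\> telnet 10.0.0.4 1433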

The point-to-site VPN is recommended if you want to connect users/devices to your Azure infrastructure, for a few different reasons. If you need to connect all or part of your on-premises infrastructure, the way to go is to configure a Site-to-Site VPN. Stay tuned for a blog post on how that works.

Thank you for reading!

Categories: DBA Blogs

Monitoring the Filesystem for READONLY mounts using Metric Extension in OEM12c

Arun Bavera - Thu, 2014-08-28 07:29

Our client has faced the mounted filesystem going into READONLY status many times.

We created this User Defined Metric, now called a Metric Extension, to monitor for it and send an alert.

#!/bin/sh

# Output one row per mounted filesystem: SlNo|MountPoint|MountStatus
# nl numbers each /etc/mtab line ($1); $3 is the mount point and $5 holds the
# mount options, whose first two characters are either "rw" or "ro".
#echo "SlNo MountPoint MountStatus"
nl  /etc/mtab |/bin/awk '{print $1"|" $3"|"substr($5,1,2)}'
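Run on its own, the script prints one pipe-delimited row per mount, and the Metric Extension alert threshold would then typically be placed on the MountStatus column. The output below is a hypothetical example (the script is saved as check_ro_mounts.sh purely for illustration); a read-only mount shows up as "ro":

root@host:~# ./check_ro_mounts.sh
1|/|rw
2|/proc|rw
3|/boot|rw
4|/u01|ro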

 


Credentials

Host Credentials: Uses Monitoring Credentials of Target.

 

To test this, you have to create a Named Credential set like the one below, and then set the username and password for this set from Security -> Monitoring Credentials:

emcli create_credential_set -set_name=SOA_ORABPEL_STAGE -target_type=oracle_database -auth_target_type=oracle_database -supported_cred_types=DBCreds -monitoring -description='SOA ORABPEL DB Credentials'
Categories: Development

Missing Named Credentials in OEM 12c

Arun Bavera - Thu, 2014-08-28 06:56

We are seeing that the list sometimes doesn’t show all the named credentials.

We have yet to see whether this resolves the issue, but the OMS needs to be restarted …

emctl set property -name oracle.sysman.emdrep.creds.region.maxcreds -value 500

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.

SYSMAN password:

Property oracle.sysman.emdrep.creds.region.maxcreds has been set to value 500 for all Management Servers

OMS restart is required to reflect the new property value

Ref:

EM 12c: Missing Named Credentials in the Enterprise Manager 12c Cloud Control Jobs Drop Down List (Doc ID 1493690.1)

Categories: Development

PRECOMPUTE_SUBQUERY hint

XTended Oracle SQL - Wed, 2014-08-27 16:01

I’ve just found out that we can specify a query block for PRECOMPUTE_SUBQUERY: /*+ precompute_subquery(@sel$2) */
So we can now use it with SQL profiles, SPM baselines and SQL patches.

SQL> select/*+ precompute_subquery(@sel$2) */ * from dual where dummy in (select chr(level) from dual connect by level<=100);

D
-
X

SQL> @last

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  c437vsqj7c4jy, child number 0
-------------------------------------
select/*+ precompute_subquery(@sel$2) */ * from dual where dummy in
(select chr(level) from dual connect by level<=100)

Plan hash value: 272002086

---------------------------------------------------------------------------
| Id  | Operation         | Name | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |        |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| DUAL |      1 |     2 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / DUAL@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("DUMMY"='' OR "DUMMY"='' OR "DUMMY"='♥' OR "DUMMY"='♦'
              OR "DUMMY"='♣' OR "DUMMY"='♠' OR "DUMMY"='' OR "DUMMY"=' OR
              "DUMMY"=' ' OR "DUMMY"=' ' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"=' '
              OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='►' OR "DUMMY"='◄' OR
              "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"=''
              OR "DUMMY"='' OR "DUMMY"='↑' OR "DUMMY"='↓' OR "DUMMY"='' OR
              "DUMMY"=' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"=''
              OR "DUMMY"=' ' OR "DUMMY"='!' OR "DUMMY"='"' OR "DUMMY"='#' OR
              "DUMMY"='$' OR "DUMMY"='%' OR "DUMMY"='&' OR "DUMMY"='''' OR
              "DUMMY"='(' OR "DUMMY"=')' OR "DUMMY"='*' OR "DUMMY"='+' OR "DUMMY"=','
              OR "DUMMY"='-' OR "DUMMY"='.' OR "DUMMY"='/' OR "DUMMY"='0' OR
              "DUMMY"='1' OR "DUMMY"='2' OR "DUMMY"='3' OR "DUMMY"='4' OR "DUMMY"='5'
              OR "DUMMY"='6' OR "DUMMY"='7' OR "DUMMY"='8' OR "DUMMY"='9' OR
              "DUMMY"=':' OR "DUMMY"=';' OR "DUMMY"='<' OR "DUMMY"='=' OR "DUMMY"='>'
              OR "DUMMY"='?' OR "DUMMY"='@' OR "DUMMY"='A' OR "DUMMY"='B' OR
              "DUMMY"='C' OR "DUMMY"='D' OR "DUMMY"='E' OR "DUMMY"='F' OR "DUMMY"='G'
              OR "DUMMY"='H' OR "DUMMY"='I' OR "DUMMY"='J' OR "DUMMY"='K' OR
              "DUMMY"='L' OR "DUMMY"='M' OR "DUMMY"='N' OR "DUMMY"='O' OR "DUMMY"='P'
              OR "DUMMY"='Q' OR "DUMMY"='R' OR "DUMMY"='S' OR "DUMMY"='T' OR
              "DUMMY"='U' OR "DUMMY"='V' OR "DUMMY"='W' OR "DUMMY"='X' OR "DUMMY"='Y'
              OR "DUMMY"='Z' OR "DUMMY"='[' OR "DUMMY"='\' OR "DUMMY"=']' OR
              "DUMMY"='^' OR "DUMMY"='_' OR "DUMMY"='`' OR "DUMMY"='a' OR "DUMMY"='b'
              OR "DUMMY"='c' OR "DUMMY"='d'))

PS. I’m not sure, but as far as I remember, when I tested it on 10.2 it didn’t work with a specified query block.
And I have never seen such usage.

Categories: Development

Partner Webcast – Oracle Internet of Things Platform: Java 8 connecting the world

The Internet of Things Revolution is gaining speed. There are more and more devices, data and connections, thus more and more complexity to handle. But in the first place it brings complete...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Subscription Notifier Version 4.0 Enables WebCenter Users to Create Custom Content Email Notifications

Fishbowl Solutions’ Subscription Notifier has been used by many of our customers for years to manage business content stored in Oracle WebCenter Content. Subscription Notifier automatically sends email notifications based on scheduled queries. Fishbowl released version 4.0 of the product last week, and it includes several significant updates.

Now, users of Subscription Notifier can:

  • Attach native or web-viewable files to notification emails
  • Send individual notification emails for each content item
  • Configure hourly notification schedules
  • Run subscription side effects without sending emails

In addition to the latest updates, the product also offers a host of other features that enable WebCenter users to keep track of their high-value content.

You begin by naming the subscription and specifying whether emails should be sent for items matching the query. The scheduler lets you specify exactly when you want email notifications to go out (note the hourly option, new with version 4.0).

 

SubNoti general settings

The email settings specify who you want to send emails to and how they should appear to recipients. The new “Attach Content” feature gives you the option of sending web-viewable or native files, which provides a way for recipients who don’t use Oracle WebCenter to still see important files. Using the query builder is very simple and determines what content items are included in the subscription. Advanced users also have the option to write more complex queries using SQL.

SubNoti email

The Current Subscription Notifications page gives a summary of all subscriptions. In Version 4.0, simple changes such as enabling, disabling, or deleting subscriptions can be done here.

SubNoti current subscription notifications

Subscription Notifier is a very useful tool for any organization that needs to keep tabs on a large amount of business content. It is part of Fishbowl’s Administration Suite, which also includes Advanced User Security Mapping, Workflow Solution Set, and Enterprise BatchLoader. This set of products works together to simplify the most common administrative tasks in Oracle WebCenter Content.

To learn more about Subscription Notifier, visit Fishbowl’s website or read the press release announcing Version 4.0.

The post Subscription Notifier Version 4.0 Enables WebCenter Users to Create Custom Content Email Notifications appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

force_match => TRUE option of DBMS_SQLTUNE.IMPORT_SQL_PROFILE

Bobby Durrett's DBA Blog - Wed, 2014-08-27 14:13

Yesterday and today I’ve read or heard two people mention the force_match => TRUE parameter value for DBMS_SQLTUNE.IMPORT_SQL_PROFILE and how it forces a profile to work on all SQL statements that are the same except for their literal values.  So, I ran a quick test using the coe_xfr_sql_profile.sql utility that comes with the SQLT scripts that are available for download on Oracle’s support site.

I’ve mentioned in earlier posts how we use coe_xfr_sql_profile.sql to force plans on particular SQL statements using the sql_id of the SQL statement and the plan_hash_value of the plan:

July 2013 post

October 2013 post

March 2014 post

May 2014 post

Yesterday I read this post by David Kurtz where he mentions force_match: post

Today I heard Karen Morton mention force_match in her webinar which should soon be posted here: url

So, after the webinar completed I built a test case to see how the force_match=>TRUE option works.  I created a test table and ran a query with a literal in the where clause and got its plan showing its sql_id and plan_hash_value:

ORCL:SYSTEM>create table test as select * from dba_tables;
ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYS';

SUM(BLOCKS)
-----------
      34633

ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  10g08ytt2m5mu, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYS'

Plan hash value: 1950795681

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)| 
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |   992 | 29760 |    29   (0)|
----------------------------------------------------------------

Then I ran coe_xfr_sql_profile.sql to create a profile that forces the plan on the given sql_id:

SQL> @coe_xfr_sql_profile.sql 10g08ytt2m5mu 1950795681

Then, using vi I edited the output of coe_xfr_sql_profile.sql:

vi coe_xfr_sql_profile_10g08ytt2m5mu_1950795681.sql

I searched for force_match and changed the line to read like this:

force_match => TRUE

instead of

force_match => FALSE
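For context, the call that the generated script wraps boils down to something like the following (a rough, hand-typed sketch of a DBMS_SQLTUNE.IMPORT_SQL_PROFILE call rather than the actual coe_xfr_sql_profile.sql code; the hint list, profile name and statement text are placeholders):

declare
  h sys.sqlprof_attr;
begin
  -- placeholder outline hint; the real script carries the full outline for the chosen plan
  h := sys.sqlprof_attr('FULL(@"SEL$1" "TEST"@"SEL$1")');
  dbms_sqltune.import_sql_profile(
    sql_text    => 'SELECT sum(blocks) from test where owner=''SYS''',
    profile     => h,
    name        => 'coe_example_profile',   -- placeholder name
    replace     => TRUE,
    force_match => TRUE);                   -- TRUE: profile matches statements that differ only in literals
end;
/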

There are comments in the script explaining the meaning of these two values but I don’t want to plagiarize the script by including them here.  Next I ran the edited script:

sqlplus system/password < coe_xfr_sql_profile_10g08ytt2m5mu_1950795681.sql

Then I ran a test showing that not only would the original query with the where clause literal ‘SYS’ use the profile, but the same query with a different literal ‘SYSTEM’ would use the created profile as well.

ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYS';

SUM(BLOCKS)
-----------
      34633

ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  10g08ytt2m5mu, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYS'

Plan hash value: 1950795681

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |    81 |  2430 |    29   (0)|
----------------------------------------------------------------

Note
-----
  - SQL profile coe_10g08ytt2m5mu_1950795681 used for this statement

ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYSTEM';

SUM(BLOCKS)
-----------
        520

ORCL:SYSTEM>
ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  806ncj0a5fgus, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYSTEM'

Plan hash value: 1950795681

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |    81 |  2430 |    29   (0)|
----------------------------------------------------------------

Note
-----
  - SQL profile coe_10g08ytt2m5mu_1950795681 used for this statement

Note that a different sql_id = 806ncj0a5fgus represents the second statement but the same plan_hash_value = 1950795681.  Also note that the SQL profile has the same name in both plans = coe_10g08ytt2m5mu_1950795681.

Now that I’m aware of the force_match=>TRUE option of DBMS_SQLTUNE.IMPORT_SQL_PROFILE, I can use SQL profiles to force plans on queries that have different literal values but are otherwise identical. This opens up a whole new set of problems that can be resolved without modifying the existing code, which can really help in a performance firefight.

– Bobby

 

 

 

Categories: DBA Blogs

My Speaking Schedule for Oracle Open World 2014

Galo Balda's Blog - Wed, 2014-08-27 12:22

A quick post to let you know about the two presentations that I’ll be doing at Oracle Open World 2014.

Session ID:         UGF4482
Session Title:     “Getting Started with SQL Pattern Matching in Oracle Database 12c”
Venue / Room:  Moscone South – 301
Date and Time:  9/28/14, 13:30 – 14:15

Session ID:          CON4493
Session Title:      “Regular Expressions in Oracle Database 101”
Venue / Room:   Moscone South – 303
Date and Time:   10/2/14, 13:15 – 14:00

As usual, you might have to check before the session to make sure the room has not changed.

I hope to see you there.


Filed under: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL Tagged: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL
Categories: DBA Blogs

In-memory Consistency

Jonathan Lewis - Wed, 2014-08-27 12:00

A comment on one of my early blogs about the 12c in-memory database option asked how Oracle would deal with read-consistency. I came up with a couple of comments outlining the sort of thing I would look for in a solution, and this note is an outline of how I started to tackle the question – with a couple of the subsequent observations. The data is (nearly) the same as the data I generated for my previous article on the in-memory database (and I’m running 12.1.0.2, of course):


create table t1 nologging
as
select  *
from    all_objects
where   rownum <= 50000
;

insert /*+ append */ into t1 select * from t1;
commit;

insert /*+ append */ into t1 select * from t1;
commit;

insert /*+ append */ into t1 select * from t1;
commit;

begin
        dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1');
end;
/

alter table t1
        inmemory priority high memcompress for query low
        inmemory memcompress for query high (object_type)
;

In this case I’ve made the inmemory priority high and I haven’t set any column to “no inmemory” although I have made one column different from the rest (v$im_column_level doesn’t get populated unless there is some variation across columns). I have to say I couldn’t get very consistent behaviour in terms of when the data finally got into memory with this table creation – possibly something to do with using “alter table” rather than “create table” – but a second “alter table t1 inmemory;” seemed to do the trick if Oracle was playing hard to get.
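One way to check whether the segment has actually been populated is to query v$im_segments (a simple sketch – these are just the columns I tend to look at):

select segment_name, populate_status, bytes_not_populated
from   v$im_segments;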

Once I’d checked that the table was in memory I collected performance figures from v$mystat and v$session_event for the following query:


select
        /* Test Run */
        last_ddl_time
from
        t1
where   t1.created > trunc(sysdate)
and     t1.object_type = 'TABLE'
and     t1.subobject_name is not null
;

Once I was satisfied that the in-memory option was working correctly, I went through the following steps:

  • Session 1: set transaction read only;
  • Session 1: run the query and collect performance figures
  • Session 2: do several small, committed updates, modifying a total of 30 or 40 random rows (a sketch of this step and the next follows the list)
  • Session 2: Flush the buffer cache – so that we can see future block acquisition
  • Session 1: re-run the query and collect performance figures – compare and contrast
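The session 2 steps were along these lines (a sketch rather than the exact statements I used; the predicate and the number of repetitions are illustrative):

update t1 set object_name = lower(object_name) where object_id = 42 and rownum = 1;
commit;
-- ... a few dozen similar single-row updates, each committed separately ...
alter system flush buffer_cache;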

The effect of the “set transaction read only;” was to force the session to do some extra work in the second execution of the query to make the data read-consistent back to the start of the “transaction”. The results were as follows (don’t forget that some of the numbers will relate to the action of collecting the performance figures):


First execution
===============
Name                                                                     Value
----                                                                     -----
Requests to/from client                                                      4
opened cursors cumulative                                                    5
user calls                                                                   6
recursive calls                                                              3
session logical reads                                                    6,680
DB time                                                                      1
non-idle wait count                                                          4
consistent gets                                                              3
consistent gets from cache                                                   3
consistent gets pin                                                          3
consistent gets pin (fastpath)                                               3
logical read bytes from cache                                           24,576
calls to kcmgcs                                                              7
calls to get snapshot scn: kcmgss                                            1
table scans (long tables)                                                    1
table scans (IM)                                                             1
IM scan CUs memcompress for query low                                        1
session logical reads - IM                                               6,677
IM scan bytes in-memory                                              5,155,309
IM scan bytes uncompressed                                          45,896,824
IM scan CUs columns theoretical max                                         18
IM scan rows                                                           399,984
IM scan rows optimized                                                 399,984
IM scan CUs split pieces                                                     1
IM scan CUs predicates received                                              3
IM scan CUs predicates applied                                               3
IM scan CUs predicates optimized                                             1
IM scan CUs pruned                                                           1
IM scan segments minmax eligible                                             1
session cursor cache hits                                                    5
workarea executions - optimal                                                1
parse count (total)                                                          4
execute count                                                                5
bytes sent via SQL*Net to client                                         1,150

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
SQL*Net message to client                             9           0           0.00        .000           0
SQL*Net message from client                           9           0           0.44        .049       8,408

Second Execution
================
Name                                                                     Value
----                                                                     -----
Requests to/from client                                                      4
opened cursors cumulative                                                    5
user calls                                                                   6
recursive calls                                                              3
session logical reads                                                    6,728
DB time                                                                      1
non-idle wait count                                                         35
enqueue requests                                                             2
enqueue releases                                                             2
physical read total IO requests                                             29
physical read total multi block requests                                    24
physical read total bytes                                            6,987,776
cell physical IO interconnect bytes                                  6,987,776
consistent gets                                                             92
consistent gets from cache                                                  92
consistent gets pin                                                         44
consistent gets pin (fastpath)                                               5
consistent gets examination                                                 48
logical read bytes from cache                                          753,664
physical reads                                                             853
physical reads cache                                                       853
physical read IO requests                                                   29
physical read bytes                                                  6,987,776
consistent changes                                                          48
free buffer requested                                                      894
CR blocks created                                                           41
physical reads cache prefetch                                              824
physical reads prefetch warmup                                             713
shared hash latch upgrades - no wait                                        43
calls to kcmgcs                                                              7
calls to get snapshot scn: kcmgss                                            1
file io wait time                                                        3,861
data blocks consistent reads - undo records applied                         48
rollbacks only - consistent read gets                                       41
table scans (long tables)                                                    1
table scans (IM)                                                             1
table scan rows gotten                                                   2,803
table scan blocks gotten                                                    41
IM scan CUs memcompress for query low                                        1
session logical reads - IM                                               6,636
IM scan bytes in-memory                                              5,155,309
IM scan bytes uncompressed                                          45,896,824
IM scan CUs columns theoretical max                                         18
IM scan rows                                                           399,984
IM scan rows optimized                                                 399,984
IM scan rows cache                                                          48
IM scan blocks cache                                                        41
IM scan CUs split pieces                                                     1
IM scan CUs predicates received                                              3
IM scan CUs predicates applied                                               3
IM scan CUs predicates optimized                                             1
IM scan CUs pruned                                                           1
IM scan segments minmax eligible                                             1
session cursor cache hits                                                    5
workarea executions - optimal                                                1
parse count (total)                                                          4
execute count                                                                5
bytes sent via SQL*Net to client                                         1,150
bytes received via SQL*Net from client                                   1,772
SQL*Net roundtrips to/from client                                            4

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
Disk file operations I/O                              2           0           0.01        .003           0
db file sequential read                               5           0           0.01        .001           0
db file scattered read                               24           0           0.38        .016           0
SQL*Net message to client                            10           0           0.01        .001           0
SQL*Net message from client                          10           0           0.76        .076       8,408

There are quite a lot of stats which probably aren’t interesting – and there’s one detail that is important but doesn’t appear (at least not clearly), and that’s the fact that the table in question had about 6,800 blocks below its highwater mark.

So, what do the stats tell us? The most obvious change, of course, is that we had to do some physical reads to get a result set: 24 multiblock reads and 5 single block reads (the latter from the undo tablespace). This is echoed in the session stats as 853 “physical reads cache” from 29 “physical read IO requests”. We can then see the specific read-consistency work (in two ways – with a third close approximation):

consistent changes                                                          48
CR blocks created                                                           41

data blocks consistent reads - undo records applied                         48
rollbacks only - consistent read gets                                       41

IM scan rows cache                                                          48
IM scan blocks cache                                                        41

We applied 48 undo change vectors to fix up 41 blocks to the correct point in time and used them to read 48 rows – the last pair of figures won’t necessarily match the first two pairs, but they do give us a measure of how much data we had to acquire from the cache when trying to do an in-memory scan.

The number 41 actually appears a couple more times: it’s “table scan blocks gotten” (which might seem a little odd since we got far more than 41 blocks by multiblock reads – but we only really wanted 41), and it’s also the change (downwards) in “session logical reads – IM”. Even when Oracle does a pure in-memory query it calculates the number of blocks it would have been reading and reports that number as “session logical reads” and “session logical reads – IM” – so there’s another way to get confused about buffer visits and another statistic to cross-check when you’re trying to work out how to calculate “the buffer hit ratio” ;)

After the first read the scattered reads all seemed to be 32 blocks of “intermittent” tablescan – perhaps this is a measure of the number of blocks that are compressed into a single in-memory chunk (for query low), but perhaps it’s a side effect of the “physical reads prefetch warmup” that Oracle may do when the cache has a lot of empty space. I’ll leave it as an exercise to the reader to refine the test (or think of a different test) to determine whether it’s the former or latter; it’s quite important to find this out because if Oracle is tracking change at the “in-memory chunk” rather than at the block level then a small amount of high-precision change to an in-memory table could result in a relatively large amount of “redundant” I/O as a long-running query tried to stay read-consistent.


Dress Code 2.0: Wearable Tech Meetup at the OTN Lounge at Oracle OpenWorld 2014

Usable Apps - Wed, 2014-08-27 09:38

What? Dress Code 2.0: Wearable Tech Meetup at the OTN Lounge at Oracle OpenWorld 2014

When? Tuesday, 30-September-2014, 4-6 PM

Partners! Customers! Java geeks! Developers everywhere! Lend me your (er, wearable tech) ears!

Get your best wearables technology gear on and come hang out with the Oracle Applications User Experience team and friends at the OTN Lounge Wearables Technology Meetup at Oracle OpenWorld 2014.

Oracle Apps UX and OTN augmenting and automating work with innovation and the cloud
  • See live demos of Oracle ideation and proof of concept wearable technology—smart watches, heads-up displays, sensors, and other devices and UIs—all integrated with the Oracle Java Cloud.
  • Try our wearable gadgets for size, and chat with the team about using OTN resources to design and build your own solutions.
  • Show us your own wearables and discuss the finer points of use cases, APIs, integrations, UX design, and fashion and style considerations for wearable tech development, and lots more!

Inexpensive yet tasteful gifts for attendees sporting wearable tech, while supplies last!

Note: A 2014 Oracle OpenWorld or JavaOne conference badge is required for admittance to the OTN Lounge. 

More?

Hands-On Programming with R by Garrett Grolemund

Surachart Opun - Wed, 2014-08-27 02:42
R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS.
The R language is useful for becoming a data scientist, as well as a computer scientist. I’d like to mention a book about data science with R: Hands-On Programming with R: Write Your Own Functions and Simulations, by Garrett Grolemund. It was written to show how to solve the logistical problems of data science, and how to write your own functions and simulations with R. In the book, readers learn through practical data analysis projects (Weighted Dice, Playing Cards, Slot Machine) and gain a deeper understanding of R. In addition, Appendixes A-E will help you install/update R and R packages, as well as load data and debug R code.
Garrett Grolemund maintains shiny.rstudio.com, the development center for the Shiny R package.
Free Sampler. Written by: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

The Social Spotlight is Going to Shine on #OOW14

Linda Fishman Hoyle - Tue, 2014-08-26 15:21

A Guest Post by Mike Stiles, Senior Content Manager for Oracle Social Cloud (pictured left)

Want to see an example of “busy” and “everywhere”? Then keep an eye on the Oracle Social Cloud team as they head into this year’s Oracle OpenWorld. Famous for their motto of “surely we can tackle even more,” Oracle’s top socializers will be all over Moscone, from the Social Intelligence Center in CX Central to 16+ social track sessions to live demos to comprehensive social coverage. Oracle Social Cloud will be trumpeting the social business imperative with live, interactive displays and inspiring speakers from Oracle, General Motors, Chevrolet, FleishmanHillard, Nestle, Polaris, Banc of California, CMP.LY and more.

Catch as many of these highlights as you can. You know how we social people love for people to follow us:

  • Social Intelligence Center: Swing by the Oracle SRM “Social Intelligence Center” in CX Central in Moscone West. We don’t know if it will literally make you smarter, but it is a real world demonstration of how the Oracle Social Cloud’s Social Relationship Management (SRM) platform serves up big data visualizations. Specifically, we’ll be watching the web and social chatter around #OOW14 using advanced analytics and deeper listening. You can see the new graphical representations of social data and global activity, get some great ideas for establishing a Social Intelligence Center at your brand, or see firsthand how the Oracle SRM platform is a mean, modernizing, social management, streamlining machine. And don’t forget to tweet about what you see.
  • “Financial Services: Why Social Media Makes Cents” with Tom Chernaik of CMP.LY; Kevin Salas, Banc of California; and Angela Wells of Oracle Social. Monday, Sept. 29 @ 1:45p.m. [CON8561]
  • “A Sky-High Overview: Oracle Social Cloud” with Meg Bear, Group Vice President of Oracle Social. Tuesday, Sept. 30 @ 10 and 11:45a.m. [TGS8068]
  •  “Show Me the Money: Building the Business Case for Social” with Holly Spaeth of Polaris; Michelle Lapierre of Marriott; Meghan Blauvelt, Nestle; and Angela Wells of Oracle Social. Wednesday, Oct. 1 @ 11:45a.m. [CON8551]
  • “Social Relationship Management: Lessons Learned from the Olympics, Super Bowl, Grammys and More” with Jamie Barbour of Chevrolet; Melissa Schreiber of FleishmanHillard; and Erika Brookes of Oracle Social. Wednesday, Oct. 1 @ 1p.m. [CON8349]
  • “Global Command Centers: Why Social Engagement is Key to Connect with Customers” with Rebecca Harris and Whitney Drake of General Motors; Alan Chumley of FleishmanHillard; and Tara Roberts of Oracle Social. Wednesday, Oct. 1 @ 2:15p.m. [CON8350]
  • “Whose Customer is this Anyway? Rise of the CCO, the CDO and the “New” CMO” with Jeb Dasteel, Oracle’s Chief Customer Officer (CCO); other C-Suite executives; and Erika Brookes of Oracle Social. Wednesday, Oct. 1 @ 3:45p.m. [CON8457]
  • “Leveraging Social Identity to Build Better Customer Relations” with Andrew Jones of the Altimeter Group. Thursday, Oct. 2 @ 11:30a.m. [CON8348]
  • “When Social Data = Opportunity: Leveraging Social Data to Target Custom Audiences” with Michelle Lapierre of Marriott; and Cory Treffiletti of Oracle. Thursday, Oct. 2 @ 12:45p.m. [CON8554]

Want the most thorough coverage of Oracle Social’s OpenWorld activities imaginable? Then by all means make sure you friend and follow us on all our channels, including Twitter, Facebook, Google+, and LinkedIn. And subscribe to our daily Social Spotlight podcast!

We want YOU to contribute to our channels and share the top sights, sounds and takeaways that you’re getting from OpenWorld. Register today and we'll see you in San Francisco September 28 - October 2, 2014!

Open World Session--Functional Overview of the PeopleSoft Fluid User Experience: The Home Page

PeopleSoft Technology Blog - Tue, 2014-08-26 14:50

PeopleTools 8.54 is a landmark release for Oracle/PeopleSoft, and you are starting to see a lot of information on it, both in this blog and elsewhere.  One of the most important aspects of this release is the new Fluid User Experience.  This is a broad-ranging subject, so you will see posts from a functional perspective (this post), a developer’s perspective, and announcements about Fluid applications that are being delivered by PeopleSoft.

Perhaps you’ve heard about how the Fluid user experience provides support for mobile applications.  While that is certainly true, Fluid offers much more than that.  What the Fluid UX really provides is the ability to access PeopleSoft applications across a variety of form factors from smart phones to tablets to desktops/laptops.  Fluid applications present a common user experience on a variety of devices regardless of screen size.  These applications are efficient and attractive as well, and offer the kind of user experience the modern work force is expecting.  So no matter how your users access PeopleSoft, they will be presented with the same intuitive user experience.  This post is the first of a series covering the main features that you will see in Fluid applications when you install them or if you develop some yourself.  We’ll also cover how to get started with Fluid/PeopleTools 8.54, and how new Fluid application pages will work with your existing applications.

We’ll start today with perhaps the most fundamental feature of the Fluid UX: the Home Page.  This page provides a base or launch pad for users to navigate to their essential work.  Home pages are designed for specific roles, so they contain all the essentials for each role without extraneous menus or content that might distract users.  In this way, Fluid Home Pages are conceptually similar to current home pages or dashboards, but Fluid home pages employ the new responsive UI that renders well on different form factors.

 Let’s look at the main features of the Home page.

The first thing you’ll notice about Home Pages is that they contain a variety of Tiles.  Tiles can serve as navigation mechanisms, but may also convey dynamic information.  Tiles are similar in purpose to pagelets, but are not as interactive.  They are responsive, however, and can automatically change size and position to accommodate different form factors.

Now let’s take a look at the Home Page header, which contains several useful features.  Central is the Home Page drop down menu.  This takes the place of tabs, and enables users to move among all the home pages that they use.  Users may serve in a variety of roles in an enterprise, and they may therefore have more than one Home Page—one for each role they play.  For example, a person may be a manager, but they are also an employee, and as such they have different activities and tasks they perform in both those roles.  They would likely have different home pages for those different roles. 

 Next is the PeopleSoft Search widget.  This provides for a search-centric navigation paradigm, and enables users to search from almost anywhere within their PeopleSoft system.  In addition, with the new PeopleSoft search, users can search across pillars and retrieve results from all their PeopleSoft content.  The search results are even actionable, so users can often complete a task right from the search results page.

Notifications are a handy mechanism that lets users know when there are tasks requiring their attention.  The Notifications widget displays the number of items requiring attention.  When the user clicks the widget a window displays the sum of all notifications from all applications to which the user has access.  The user can act on those items directly from the Notifications window.

Next is the Actions widget.  This menu is configurable, but one of the main actions available is Personalizations. This takes the user to a page where they can add or remove tiles from a Home Page, delete Home Pages, or even create new home pages and configure them.

Finally, and perhaps most importantly, we have the Navigation Bar widget.  This opens the Nav Bar, which enables users to get anywhere in their PeopleSoft system.  The Nav Bar is flexible, configurable, powerful, and intuitive.

The Nav Bar is a rich topic in its own right, and will be covered in a separate blog post in the near future.  In fact, the Fluid user experience is a large subject, so we’ll be posting many more articles describing its features in greater depth.  We’ll also provide a taste of developing Fluid applications.

If you would like more information on the Fluid UX (and everything PeopleSoft) please see the PeopleSoft Information Portal.

We will also be covering the Fluid UX in great depth in several sessions at Oracle Open World.  Come see us at the conference!  You’ll not only acquire useful information, but you can talk with us personally and see live demos of these features.  Here are a few sessions in particular that cover the Fluid UX:

  • A Closer Look at the New PeopleSoft Fluid User Experience (CON7567)
  • PeopleSoft Mobility Deep Dive: PeopleSoft Fluid User Interface and More (CON7588)
  • PeopleSoft Fluid User Interface: A Modern User Experience for PeopleSoft HCM on Any Device (CON7667)
  • PeopleSoft PeopleTools 8.54: PeopleSoft Fluid User Interface in Action (CON7595)
These sessions cover a wide variety of subjects from the functional (for end users and SMEs) to the technical (for developers).

Session Schedule Information OpenWorld 2014 San Francisco

Andrejus Baranovski - Tue, 2014-08-26 12:55
I have received my session schedule information for OpenWorld 2014. This year’s event is going to be quite busy, with three sessions. Below you can check the session titles along with the times. I’m looking forward to meeting you in San Francisco!


Session ID: CON2623
Session Title: Oracle ADF Development and Deployment to Oracle Cloud
Venue / Room: Moscone South - 270
Date and Time: 10/1/14, 15:30 - 16:15

Session ID: CON3745 (together with Danilo Schmiedel)
Session Title: Oracle Mobile Suite and Oracle Adaptive Case Management: A Strong Combination to Empower People
Venue / Room: Moscone West - 3018
Date and Time: 10/1/14, 16:45 - 17:30

Session ID: CON2495
Session Title: Data Caching Strategies for Oracle Mobile Application Framework
Venue / Room: Marriott Marquis - Nob Hill C/D
Date and Time: 10/2/14, 12:00 - 12:45

Rittman Mead in the OTN TOUR Latin America 2014

Rittman Mead Consulting - Tue, 2014-08-26 11:28

Another OTN Tour Latin America has come and gone. This is the most important technical event in the region, visiting 12 countries and drawing more than 2,500 attendees in two weeks.

This year Rittman Mead was part of the OTN Tour in Buenos Aires (Argentina) and Montevideo (Uruguay) presenting about ODI and OGG.

We started in Buenos Aires on August 11 for the first day of the OTN Tour in Argentina. I talked about the integration of ODI and OGG 12c, explaining all the technical details of how to configure and implement it. Most of the attendees hadn’t worked with these tools (but were curious about them), so I personalised the presentation a little, giving them an introduction to ODI and OGG first.

As the vice-president of the UYOUG (Uruguayan Oracle User Group) I’m part of the organisation of the OTN Tour in my country, so we needed to come back that same Monday to sort out some final details and have everything ready for the event in Uruguay.

Most of the speakers came on Wednesday, and we spent a great day with Michelle Malcher, Kamran Agayev, Hans Forbrich and Mike Dietrich. First, we went to lunch at the Mercado del Puerto, an emblematic place that has lots of “parrillas” (a kind of barbecue), and then we gave them a little city tour which included a visit to El Cerro de Montevideo. Finally we visited one of the most important wineries in Uruguay, Bodega Bouza, where we had a wine tour followed by an amazing tasting of a variety of wines, including Tannat, our signature grape. You know…it is important to be relaxed before a conference :-)

The first day of the event in Uruguay was dedicated exclusively to technical sessions, and on the second one we had the hands-on labs. The conference covered a wide range of topics, from BI Mobile and e-Business Suite to how to upgrade to Oracle Database 12c, Oracle Virtualization and Oracle RAC. All the sessions were packed with attendees.

The next day, we had labs with PCs with the software already installed, but attendees could also come with their own laptops and install everything needed for the hands-on. We had the famous RAC Attack!, led by Kamran with the help of the ninjas Michelle, Hans and Nelson Calero, and an Oracle Virtualization lab by Hernan Petitti that ran for 7 hours!

It was a great event. You can see more pictures here and download the presentations here. The attendees as well as all the speakers were really happy with the result. And so were we.

This is only the beginning for Rittman Mead in Latin America. There are a lot of things to come, so stay tuned!

Categories: BI & Warehousing

Community Source Is Dead

Michael Feldstein - Tue, 2014-08-26 11:21

As Phil noted in yesterday’s post, Kuali is moving to a for-profit model, and it looks like it is motivated more by sustainability pressures than by some grand affirmative vision for the organization. There has been a long-term debate in higher education about the value of “community source,” which is a particular governance and funding model for open source projects. This debate is arguably one of the reasons why Indiana University left the Sakai Foundation (as I will get into later in this post). At the moment, Kuali is easily the most high-profile and well-funded project that still identifies itself as Community Source. The fact that this project, led by the single most vocal proponent for the Community Source model, is moving to a different model strongly suggests that Community Source has failed.

It’s worth taking some time to talk about why it has failed, because the story has implications for a wide range of open-licensed educational projects. For example, it is very relevant to my recent post on business models for Open Educational Resources (OER).

What Is Community Source?

The term “Community Source” has a specific meaning and history within higher education. It was first (and possibly only) applied to a series of open source software projects funded by the Mellon Foundation, including Sakai, Kuali, Fedora, and DSpace (the latter two of which have merged). As originally conceived, Community Source was an approach that was intended to solve a perceived resource allocation problem in open source. As then-Mellon Foundation Associate Program Officer Chris Mackie put it,

For all that the OSS movement has produced some runaway successes, including projects like Perl, Linux, and Mozilla Firefox, there appear to be certain types of challenges that are difficult for OSS to tackle. Most notably, voluntaristic OSS projects struggle to launch products whose primary customers are institutions rather than individuals: financial or HR systems rather than Web servers or browsers; or uniform, manageable desktop environments rather than programming languages or operating systems. This limitation may trace to any of several factors: the number of programmers having the special expertise required to deliver an enterprise information system may be too small to sustain a community; the software may be inherently too unglamorous or uninteresting to attract volunteers; the benefits of the software may be too diffuse to encourage beneficiaries to collaborate to produce it; the software may be too complex for its development to be coordinated on a purely volunteer basis; the software may require the active, committed participation of specific firms or institutions having strong disincentives to participate in OSS; and so on. Any of these factors might be enough to prevent the successful formation of an OSS project, and there are many useful types of enterprise software—including much of the enterprise software needed by higher education institutions—to which several of them apply. In short, however well a standard OSS approach may work for many projects, there is little reason to believe that the same model can work for every conceivable software project.

This is not very different from the argument I made recently about OER:

In the early days of open source, projects were typically supported through individual volunteers or small collections of volunteers, which limited the kinds and size of open source software projects that could be created. This is also largely the state of OER today. Much of it is built by volunteers. Sometimes it is grant funded, but there typically is not grant money to maintain and update it. Under these circumstances, if the project is of the type that can be adequately well maintained through committed volunteer efforts, then it can survive and potentially thrive. If not, then it will languish and potentially die.

The Mellon Foundation’s answer to this problem was Community Source, again as described by Chris Mackie:

Under this new model, several institutions contract together to build software for a common need, with the intent of releasing that software as open source. The institutions form a virtual development organization consisting of employees seconded from each of the partners. This entity is governed cooperatively by the partners and managed as if it were an enterprise software development organization, with project and team leads, architects, developers, and usability specialists, and all the trappings of organizational life, including reporting relationships and formal incentive structures. During and after the initial construction phase, the consortial partners open the project and invite in anyone who cares to contribute; over time the project evolves into a more ordinary OSS project, albeit one in which institutions rather than individual volunteers usually continue to play a major role.

A good friend of mine who has been involved in Mellon-funded projects since the early days describes Community Source more succinctly as a consortium with a license. Consortial development is a longstanding and well understood method of getting things done in higher education. If I say to you, “Kuali is a consortium of universities trying to build an ERP system together,” you will probably have some fairly well-developed notions of what the pros and cons of that approach might be. The primary innovation of Community Source is that it adds an open source license to the product that the consortium develops, thus enabling another (outer) circle of schools to adopt and contribute to the project. But make no mistake: Community Source functions primarily like a traditional institutional consortium. This can be best encapsulated by what Community Source proponents refer to as the Golden Rule: “If you bring the gold then you make the rules.”[1]

Proponents of Community Source suggested even from the early days that Community Source is different from open source. Technically, that’s not true, since Community Source projects produce open source software. But it is fair to say that Community Source borrows the innovation of the open source license while maintaining traditional consortial governance and enterprise software management techniques. Indiana University CIO and Community Source proponent Brad Wheeler sometimes refers to Community Source as “the pub between the Cathedral and the Bazaar (a reference to Eric Raymond’s seminal essay on open source development).” More recently, Brad and University of Michigan’s Dean of Libraries James Hilton codified what they consider to be the contrasts between open source and Community Source in their essay “The Marketecture of Community,” and which Brad elaborates on in his piece “Speeding Up On Curves.” They represent different models of procuring software in a two-by-two matrix, where the dimensions are “authority” and “influence”:

Note that both of these dimensions are about the degree of control that the purchaser has in deciding what goes into the software. It is fundamentally a procurement perspective. However, procuring software and developing software are very different processes.

A Case Study in Failure and Success

The Sakai community and the projects under its umbrella provide an interesting historical example to see how Community Source has worked and where it has broken down. In its early days, Indiana University and the University of Michigan were primary contributors to Sakai and very much promoted the idea of Community Source. I remember a former colleague returning from a Sakai conference in the summer of 2005 commenting, “That was the strangest open source conference I have ever been to. I have never seen an open source project use the number of dollars they have raised as their primary measure of success.” The model was very heavily consortial in those days, and the development of the project reflected that model. Different schools built different modules, which were then integrated into a portal. As Conway’s Law predicts, this organizational decision led to a number of technical decisions. Modules developed by different schools were of differing quality and often integrated with each other poorly. The portal framework created serious usability problems like breaking the “back” button on the browser. Some of the architectural consequences of this approach took many years to remediate. Nevertheless, Sakai did achieve a small but significant minority of U.S. higher education market share, particularly at its peak a few years ago. Here’s a graph showing the growth of non-Blackboard LMSs in the US as of 2010, courtesy of data from the Campus Computing Project:

Meanwhile, around 2009, Cambridge University built the first prototype of what was then called “Sakai 3.” It was intended to be a ground-up rewrite of a next-generation system. Cambridge began developing it themselves as an experiment out of their Centre for Applied Research in Educational Technologies, but it was quickly seized upon by NYU and several other schools in the Sakai community as interesting and “the future.” A consortial model was spun up around it, and then spun up some more. Under pressure from Indiana University and University of Michigan, the project group created multiple layers of governance, the highest of which eventually required a $500K institutional commitment in order to participate. Numbers of feature requirements and deadlines proliferated, while project velocity slowed. The project hit technical hurdles, principally around scalability, that it was unable to resolve, particularly given ambitious deadlines for new functionality. In mid-2012, Indiana University and University of Michigan “paused investment” in the project. Shortly thereafter, they left the project altogether, taking with them monies that they had previously committed to invest under a Memorandum of Understanding. The project quickly collapsed after that, with several other major investors leaving. (Reread Phil’s post from yesterday with this in mind and you’ll see the implications for measuring Kuali’s financial health.)

Interestingly, the project didn’t die. Greatly diminished in resources but freed from governance and management constraints of the consortial approach, the remaining team not only finally re-architected the platform to solve the scalability problems but also have managed seven major releases since that implosion in 2012. The project, now called Apereo OAE, has returned to its roots as an academic (including learning) collaboration platform and is not trying to be a direct LMS replacement. It has even begun to pick up significant numbers of new adoptees—a subject that I will return to in a future post.

It’s hard to look at the trajectory of this project and not conclude that the Community Source model was a fairly direct and significant cause of its troubles. Part of the problem was the complex negotiations that come along with any consortium. But a bigger part, in my opinion, was the set of largely obsolete enterprise software management attitudes and techniques that come along as a not-so-hidden part of the Community Source philosophy. In practice, Community Source is essentially a project management approach focused on maximizing the control and influence of the IT managers whose budgets are paying for the projects. But those people are often not the right people to make decisions about software development, and the waterfall processes that they often demand in order to exert that influence and control (particularly in a consortial setting) are antithetical to current best practices in software engineering. In my opinion, Community Source is dead primarily because the Gantt Chart is dead.

Not One Problem but Two

Community Source was originally developed to address one problem, which was the challenge of marshalling development resources for complex (and sometimes boring) software development projects that benefit higher education. It is important to understand that, in the 20 years since the Mellon Foundation began promoting the approach, a lot has changed in the world of software development. To begin with, there are many more open source frameworks and better tools for developing good software more quickly. As a result, the number of people needed for software products (including voluntaristic open source projects) has shrunk dramatically—in some cases by as much as an order of magnitude. Instructure is a great example of a software platform that reached first release with probably less than a tenth of the money that Sakai took to reach its first release. But also, we can reconsider that “voluntaristic” requirement in a variety of ways. I have seen a lot of skepticism about the notion of Kuali moving to a commercial model. Kent Brooks’ recent post is a good example. The funny thing about it, though, is that he waxes poetic about Moodle, which has a particularly rich network of for-profit companies upon which it depends for development, including Martin Dougiamas’ company at the center. In fact, in his graphic of his ideal world of all open source, almost every project listed has one or more commercial companies behind it without which it would either not exist or would be struggling to improve:

BigBlueButton is developed entirely by a commercial entity. The Apache web server gets roughly 80% of its contributions from commercial entities, many of which (like IBM) get direct financial benefit from the project. And Google Apps aren’t even open source. They’re just free. Some of these projects have strong methods for incorporating voluntaristic user contributions and taking community input on requirements, while others have weak ones. But across that spectrum of practices, community models, and sustainability models, they manage to deliver value. There is no one magic formula that is obviously superior to the others in all cases. This is not to say that shifting Kuali’s sustainability model to a commercial entity is inevitably a fine idea that will succeed in enabling the software to thrive while preserving the community’s values. It’s simply to say that moving to a commercially-driven sustainability model isn’t inherently bad or evil. The value (or lack thereof) will all depend on how the shift is done and what the Kuali-adopting schools see as their primary goals.

But there is also a second problem we must consider—one that we’ve learned to worry about in the last couple of decades of progress in the craft of software engineering (or possibly a lot earlier, if you want to go back as far as the publication of The Mythical Man Month). What is the best way to plan and execute software development projects in light of the high degree of uncertainty inherent in developing any software with non-trivial complexity and a non-trivial set of potential users? If Community Source failed primarily because consortia are hard to coordinate, then moving to corporate management should solve that problem. But if it failed primarily because it reproduces failed IT management practices, then moving to a more centralized decision-making model could exacerbate the problem. Shifting the main stakeholders in the project from consortium partners to company investors and board members does not require a change in this mindset. No matter who the CEO of the new entity is, I personally don’t see Kuali succeeding unless it can throw off its legacy of Community Source IT consortium mentality and the obsolete, 1990′s-era IT management practices that undergird it.

  1. No, I did not make that up. See, for example, https://chronicle.com/article/Business-Software-Built-by/49147

The post Community Source Is Dead appeared first on e-Literate.

Accelerate your Transformation to Digital

WebCenter Team - Tue, 2014-08-26 09:07
by Dave Gray, Entrepreneur, Author & Consultant

Digital Transformation – The re-alignment of, or new investment in, technology and business models to more effectively engage consumers or employees, digitally

We are in the process of building a global digital infrastructure that quantifies, connects, analyzes and maps the physical and social world we live in. This has already had massive impact on the way we live and work, and it will continue to do so, in ways which are impossible to predict or imagine.

If you work in a company that was imagined, designed and created before this digital infrastructure was developed, there is no question that your business will be impacted. If it hasn’t happened yet, it is simply a matter of time.

This digital transformation is a global phenomenon that is affecting every individual, and every business, on the planet. Everything that can be digital, will be digital. That means every bit of information your company owns, processes or touches. There is no product or service you provide that won’t be affected. The question is, what does it mean to you, and what should you be doing about it?

When technology advances, strategies and business models shift. It’s not simply a matter of taking what you do today and “making it digital.” That’s the difference between translation and transformation. Your products and services don’t need to be “translated” into the digital world. They must be transformed.

Take Kodak, for example. As long as there have been photographs, people have used them to store and share memories of people and events. That hasn’t changed since Eastman Kodak was founded by George Eastman in 1888.

But when technology advances, strategies and business models shift. Kodak was at the leading edge of the research that doomed its own business model. In 1975, Kodak invented the digital camera (although it was kept secret at the time). Kodak engineers even predicted, with startling accuracy, when the digital camera would become a ubiquitous consumer technology. So Kodak had a major advantage over every other company in the world. They were able to predict the death of film and they had about a 15-year head start over everyone else.

Unfortunately, the business of film was so profitable, and the reluctance (or fear) of disrupting its own business was so great, that Kodak had difficulty focusing on anything else.

In 2010, two software engineers, Kevin Systrom and Mike Krieger, founded a company called Instagram, which they sold to Facebook two years later for approximately $1 billion.

When technology advances, strategies and business models shift.

Let’s take another example, Nokia, who commanded 40 percent of the mobile phone market as recently as 2007. Nokia was the clear and unequivocal market leader at that time. People were talking on phones in 2007 and they are still doing that today. But what happened? The phone became a mobile computer, a platform for digital services. And in a digital world, a phone is only as good as the digital services you can get on that phone. So Nokia, who used to compete only with other phone manufacturers, suddenly found itself in competition with computer manufacturers, who already had a strongly-developed ecosystem, strong relationships with application developers, and a compelling business model to offer them. Nokia was unable to make the leap from phone maker to computer maker.

When technology advances, strategies and business models shift.

It doesn’t matter what you make or what service you provide. If it’s a product, it will more and more come to resemble a computer. If it’s a service, it will increasingly become a digital service.

The shift doesn’t happen everywhere all at once. It’s happened with music and books, it’s happening with phones, TV and film, the hospitality and transportation industry, it is just starting to happen with cars, it will soon be happening in health care and other systems that tend to resist change because of bureaucracy and legal regulation.

So what can you do? How can you avoid being the next Kodak or the next Nokia? 

You need to take a different approach to innovation.

How do you manage your innovation initiatives?

Many companies use something called an innovation funnel. 

The idea of the funnel is that you solicit ideas from all over the company, and sometimes even from outside the company. Ideas come in at the top of the funnel and you have several gates that they have to go through, and they get funded at varying levels as they pass the criteria at each gate. If you do it right, the theory is that the best ideas come out and these are the ones that we implement.

The problem with the funnel approach is that nobody puts ideas in at the top. Nothing comes in. Why? Because people look at that funnel and what they see is a sophisticated machine for killing their ideas. The design of the funnel does not encourage people to generate ideas, because basically it’s a suggestion box with a shredder inside. It’s an idea killer. It doesn’t excite people. It doesn’t encourage creativity.

People think: I'm going to have to write a business plan. I'm going to have to do a bunch of market research. I'm going to have to make a lot of spreadsheets and projections. And this is on top of my regular job. Only to have my idea most probably killed in the end. You are saying to people “We welcome your ideas” but what people are thinking is “You don't really welcome my ideas. You welcome my ideas so you can kill them.”

So what happens? Nothing comes into the top of the funnel, and the funnel manager goes, “Why? Why is nobody giving me their ideas?” This is why it's a problem. Because if anyone really has an idea, what are they going to do? They are going to leave, because in most cases, it’s actually easier to do your innovation out in the world today, even with no funding, than it is to do it inside a modern industrial company.

So what's an alternative to the funnel? Connected companies, like Amazon and Google, do it differently. They create a level playing field where anyone in the company can generate ideas and they actually are allowed to spend time working on them, without having to prove anything. They are trusted to innovate.

At Google they call it 20 percent time. For one day a week, or the equivalent, you get the opportunity  to play with your ideas. This kind of approach has to be supported by a culture where people actually really do have passion, energy and things they want to do. And it has to be recognized and supported throughout the organization as something that’s important. 

If you want to do this, efficiency has to take a hit. You can't optimize everything for efficiency and also do experiments. You just can't. Experiments by their very nature are inefficient, because you don’t really know what you’re doing. If you know in advance what the outcome will be, by definition it’s not an experiment.

This approach is less like a funnel and more like a terraced garden. It’s actually similar to a funnel, in a way, but it’s flipped.

Think of it this way. You've got to make space for these ideas to start.

There's no learning in the funnel approach. You don't learn much from making a business plan. You learn when you do experiments, when you put things into the world and start interacting with customers. You learn when you do things. What do you learn from making a business plan? Business plans are science fiction.

As a leader you have to make space for these experiments to happen. 

And some of these experiments – maybe even a lot of them – will yield interesting results.

And you want to have a way to distribute budget and resources to the most promising experiments.

So you set a certain percentage of the budgets throughout the organization, and you say, this money is for funding promising experiments. You can't spend it on operations or improving efficiency. You have to spend it on new ideas that you think are promising. That’s the second level in the terraced garden.

Layer one gives everybody a little bit of elbow room to innovate and experiment. Layer two offers a way for management to pick the plants that look the most interesting and give them a little bit of care and feeding. It might be that you just give someone some extra time. 

Then third and fourth layers are where the most promising ideas, the ones that might be worth making big bets on, emerge. You might have a few of these that you think might actually generate the next major stage of growth for the company. 

The good thing is that these big bets are all based on things that are already working. This is how venture capitalists invest. They usually don't invest in a really good business plan, because people who make good business plans often don't make good entrepreneurs. People who make good entrepreneurs are people who are out there doing things already.

Amazon's recommendation engine started out as a little weed at the bottom of the garden. Gmail at Google started out as a small 20-percent-time project, somebody was doing it in their spare time.

Venture capitalists invest in companies that are already working. They may not be profitable yet, but they have customers, they have promise, they have figured something out, they have learned something. This is also how companies like Google and Amazon do it.

They don't invest in business plans. They don't say it's got to be a billion dollar opportunity or it's not worth our time. They say, let’s try it. Get out there and try things with customers because the billion dollar opportunity may start as a $10,000 opportunity. It may start small.

What are Google’s big bets right now? Google Glass. The self-driving car.

What are the big bets for Amazon? Kindle. Kindle's a big bet. There are lots of dollars going into that one. Amazon web services: the project that says, we're going to take our own infrastructure and we're going to sell it. Even to competitors. We'll sell to anybody.

What are some of the experiments that failed? Amazon tried auctions. And it seems to make sense. “eBay does it. We've got a big audience. We can try it.” They tried it. They tried and they failed.

But what really happened with Amazon auctions? They learned. They realized something about their customers. Amazon is very customer focused. They realized that their customers don't want to wait. They don't want to wait a week to see if they bought something. Amazon customers want it now.

One of the interesting experiments is something called “unique phrases inside this book.” Someone had a hypothesis: “I think some books have not just unique words but unique phrases. If a phrase shows up in one book, you might want to know other books that have that phrase in it. It might be interesting.” Someone's working on that as a little experiment.

What happens if that experiment fails? Those people don't get fired. They are extremely valuable because of what they have learned. They find other teams. They get recruited. Think of a swarm of startups where people are recruiting each other all the time. It's like Silicon Valley inside of Amazon.

This kind of change is not simple or easy. But it’s clear that the future will not be simply more of the past. It will require bold thinking, creativity, and a new approach to innovation. Innovation can’t be the job of an R&D department. It has to be everyone’s job. And if you’re in senior management, it’s your job to create the conditions that will make innovation possible.

You can hear more from Dave on how to transform your business to digital in our Digital Business Thought Leaders webcast "The Digital Experience: A Connected Company’s Sixth Sense".

SQL Server Replication Quick Tips

Pythian Group - Tue, 2014-08-26 07:56

There comes a time in every SQL Server DBA's career when an email arrives with a “replication is not working, please check” message. This article is intended to provide you with quick tips on how to handle common replication errors and performance problems in a one-way transactional replication topology.

Oh boy, there is a data problem:

You check Replication Monitor and get a:

“Transaction sequence number: 0x0003BB0E000001DF000600000000, Command ID: 1”

The row was not found at the Subscriber when applying the replicated command. (Source: MSSQLServer, Error number: 20598)

Note that the sequence number will be used in the following scripts. The Command ID is also important to note, as the problem is not necessarily with the whole sequence number; it might be tied to just one command.

Go to the distribution database and run the following command to get the list of articles involved in this issue:

select * from dbo.MSarticles
where article_id in (
select article_id from MSrepl_commands
where xact_seqno = 0x0003BB0E000001DF000600000000)

To get the whole list of commands you can run the query below:

exec sp_browsereplcmds
@xact_seqno_start = '0x0003BB0E000001DF000600000000',
@xact_seqno_end = '0x0003BB0E000001DF000600000000'

With this last query you can get to the exact command that is failing (by searching for the command number from the error in the command_id column).

You will notice that a transactional replication will typically (depending on setup) use insert, delete and update stored procedures to replicate the data, so the command you will see here will look something like:

{CALL [sp_MSdel_dboMyArticle] (118)}

That is the stored procedure generated to process delete statements over the dbo.MyArticle table, and in this case it is trying to delete ID 118. Based on the error reported, you will now realize that replication is trying to delete the row with ID 118 from MyArticle, but that row is not at the subscriber, so it is trying to delete a non-existent record.
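
To confirm the mismatch you can check both sides, using the article and key from this example (dbo.MyArticle and ID 118 are just the illustrative names used above):

-- Run at the publisher (publication database): does the row still exist here?
select * from dbo.MyArticle where ID = 118

-- Run at the subscriber (subscription database): for a failing delete or update, the row will be missing
select * from dbo.MyArticle where ID = 118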

Options:

  1. You can check the publisher for this record and manually insert it at the subscriber; this will allow the replicated command to succeed and will fix the issue (see the sketch after this list).
  2. You can skip the command by removing it from the MSrepl_commands table; for this specific example that is reasonable, as there is no need to delete something that has already been deleted. (Beware: only do this when you know what you are doing, as manually removing transactions can result in an unstable replication.) In this example you would use something like
    Delete from MSrepl_commands
    where xact_seqno = 0x0003BB0E000001DF000600000000 and command_id = 1
  3. Reinitialize. This option is the least popular; you should try to fix the issue before doing this. However, if after skipping the command you still get new errors everywhere, something definitely went wrong and there is no easy way to guarantee that your subscription is up to date and stable. This can be an indicator that someone or something messed around with the data at the subscriber, and that modification is now causing issues with the replication. Remember that a one-way transactional replication is most likely intended to maintain a copy of the data so it can be queried; no modification should be made to that data, as it won't replicate back to the publisher.
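
If the row still exists at the publisher (more common when the failing command is an update), a minimal sketch of option 1 might look like the following. It assumes a linked server named PUBSRV pointing at the publisher and a publication database called PublisherDB; those names and the column list are hypothetical and only there for illustration:

-- Run at the subscriber: copy the missing row back, then let the Distribution Agent retry
insert into dbo.MyArticle (ID, SomeColumn)
select ID, SomeColumn
from PUBSRV.PublisherDB.dbo.MyArticle
where ID = 118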

Query timeouts:

After checking the replication monitor you get a message like:

Query timeout expired
The process is running and is waiting for a response from the server
Initializing…

and then terminating with this error…
Agent ‘MyAgent’ is retrying after an error, YY retries attempted

This can be due to several reasons:

  • Your transaction is taking a long time and needs some tuning. If your transaction is touching too much data or is using a bad query plan it can result in a long-running query; check your T-SQL and see if the execution plan is optimal.
  • There is a problem with the network. If you normally don't have this issue and it just happened out of the blue, check the network; a network failure or a saturated endpoint can slow transfer rates, affecting your replication.
  • Server performance. Either the publisher or the subscriber can have a performance problem; excessive CPU or memory usage can eventually impact a replication transaction, causing it to time out.
  • The query just needs some more time to complete. If this is the case you can tweak the timeout setting to give the transaction more time so it can process properly. To do this (a T-SQL alternative is sketched after these steps):
  1. Right click the Replication folder
  2. Click Distributor Properties and select General
  3. Click ‘Profile Defaults’
  4. Choose ‘Distribution Agents’ on left
  5. Click ‘New’ to create a new default agent profile
  6. Choose ‘Default Agent Profile’ from the list displayed (to copy it)
  7. Pick a name for your new profile and update the QueryTimeout value in the right column
  8. Save
  9. Choose to use this profile across all your replication sets. However, I would recommend applying it only to the agent that requires this change
  10. To individually assign the profile, open Replication Monitor and then in the left pane click your replication set
  11. In the right pane, select your desired agent, right click and change the profile to the new one you just created
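
If you prefer T-SQL to the Replication Monitor dialogs, a rough equivalent run at the Distributor might look like the sketch below. The profile name and the 7200-second value are only examples, and the behaviour of sp_add_agent_profile/sp_add_agent_parameter (in particular whether the new profile already carries a QueryTimeout parameter) should be verified against your SQL Server version before relying on it:

declare @profile_id int
-- Create a custom Distribution Agent profile (agent_type 3 = Distribution Agent)
exec sp_add_agent_profile @profile_id = @profile_id output,
    @profile_name = N'Distribution agent - long QueryTimeout',
    @agent_type = 3,
    @description = N'Custom profile with a larger QueryTimeout'
-- Set the QueryTimeout (in seconds) for the new profile;
-- if the parameter was already copied from the default profile, use sp_change_agent_parameter instead
exec sp_add_agent_parameter @profile_id = @profile_id,
    @parameter_name = N'QueryTimeout',
    @parameter_value = N'7200'
-- Assigning the new profile to the agent can then be done from Replication Monitor as in the steps above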

Mini Hack on expired subscriptions

When a replication is marked as expired, it will tell you that you need to reinitialize.

To activate it “under the hood”, check the last error in Replication Monitor; it will show you the last sequence number that it tried to process. Then run this command (using the corresponding seq_no):

update MSsubscriptions
set status=2
where subscription_seqno=0x0002AADE00005030000100000002

The status column means:

0 = Inactive.

1 = Subscribed.

2 = Active.

You can change it to Active and it will try to process again. Why would you use this? If the subscription expired but your distribution cleanup job hasn't run yet, replication can try to reprocess everything again; if the issue was related to a network timeout and your network is back up, you can try this, as it will start from the last sequence number. You can also do this to reproduce the last error reported: it will fail and eventually expire again, but you will have a better idea of why it failed in the first place.
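
Before flipping the status, it can help to see what is currently recorded for the subscription. A quick check against the same table (run at the distribution database; the extra columns shown are illustrative and worth verifying on your version) might be:

-- List subscriptions that are not currently Active (status 2)
select publisher_db, subscriber_db, article_id, status, subscription_seqno
from MSsubscriptions
where status <> 2
order by subscriber_db, article_id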

Multi-threading or “Streams”

By a slow replication I mean one where you know that a command is experiencing a delay on its way from the distributor to the subscriber. You can check this with performance counters or by quickly inserting a tracer token (http://technet.microsoft.com/en-us/library/ms151846%28v=sql.105%29.aspx).
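
As a minimal sketch of the tracer token approach (run at the publisher on the publication database; MyPublication is a placeholder name), based on the procedures described in the linked article:

-- Post a tracer token into the log for this publication
declare @tokenId int
exec sp_posttracertoken @publication = N'MyPublication',
    @tracer_token_id = @tokenId output
-- Later, review how long the token took to reach the distributor and each subscriber
exec sp_helptracertokenhistory @publication = N'MyPublication',
    @tracer_id = @tokenId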

You can improve performance by adding streams. With the default setting the replication transactions are written sequentially, one by one; with streams you add more threads. Say you specify 4 streams: you will be processing 4 transactions at a time, meaning a faster turnaround. This can work beautifully, but it can also generate deadlocks and inconsistencies. I would recommend starting low, adding just 1 stream at a time, and stopping when you start seeing a problem. Do not treat this as a turbo button and add 30 streams, and like most features, test it in QA first!

To enable this option, follow these steps:

  1. Open Replication Monitor, expand the Publisher and select the Publication in the left pane.
  2. On the right pane, under “All Subscriptions”, you will see a list of all the Subscribers.
  3. Right-click the Subscriber you want to modify and click “View Details”. A new window will appear with the distribution agent session details.
  4. Now click “Action” in the menu bar at the top and select “Distribution Agent Job Properties”; this will open the corresponding job properties.
  5. Go to “Steps” in the left pane, highlight “Run Agent” in the right pane, and click Edit.
  6. A new window will pop up; scroll to the right end of the command section and append the parameter “ -SubscriptionStreams 2” (an illustrative full command follows these steps).
  7. Save the settings and restart the Distribution Agent job.
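
For reference, after step 6 the Run Agent command is simply the usual Distribution Agent command line with the extra switch appended. The server, database and publication names below are placeholders; only -SubscriptionStreams is the addition:

-Subscriber [SUBSCRIBERSRV] -SubscriberDB [SubscriberDB] -Publisher [PUBLISHERSRV]
-Distributor [DISTRIBUTORSRV] -DistributorSecurityMode 1 -Publication [MyPublication]
-PublisherDB [PublisherDB] -SubscriptionStreams 2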

You might encounter some issues when implementing this; you can read this KB for further info:

http://support.microsoft.com/kb/953199

Conclusion

There are many tips on how to fix a replication. Sometimes it is easier to just reinitialize, but that is not an option when critical systems depend on the subscription being up to date, or when your database is so huge that reinitializing would take days to complete. When possible, try to troubleshoot instead of just restarting the replication from scratch, as it will give you a lot more insight into what is going on.

Categories: DBA Blogs

Starting out with MAF?

Angelo Santagata - Tue, 2014-08-26 07:45
If you're starting out with MAF, Oracle's Mobile Application Framework, then you MUST read this blog entry and make sure everything is right... even if you're a seasoned ADF Mobile developer like me, you'll want to check this out; it got me a couple of times!

https://blogs.oracle.com/mobile/entry/10_tips_for_getting_started