
Yann Neuhaus

All blog entries from http://www.dbi-services.com/

SQL Patch: Another way to change the plan without changing the code

Fri, 2014-10-03 09:02

Recently, at a customer site, I faced a performance issue. However, as is often the case, the statement is embedded in the application, so it is not possible to rewrite the query. In this blog post, we'll change the execution plan to solve the problem without changing the code - thanks to SQL Patch.
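As a teaser, here is a minimal sketch of how such a SQL patch can be created on 11g and 12.1, through the internal dbms_sqldiag_internal package. The SQL text, hint and patch name below are made up for illustration, not taken from the customer case:

-- Minimal sketch, assuming a made-up statement and hint: attach a SQL patch
-- to the statement text so that the hint is injected at parse time.
begin
  sys.dbms_sqldiag_internal.i_create_patch(
    sql_text  => 'select * from ORDERS where CUSTOMER_ID = :1',
    hint_text => 'INDEX(ORDERS)',
    name      => 'demo_sqlpatch');
end;
/
-- The patch can later be removed with:
-- exec dbms_sqldiag.drop_sql_patch('demo_sqlpatch');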

Java get Class names from package String in classpath

Fri, 2014-10-03 01:28

As a Java developer, you are probably used to working with reflection. However, in order to keep your software architecture flexible, some functionalities are not provided out of the box by the JVM.

In my particular case, I needed to find every class and sub-class inside a package, even when they are spread across several JARs.

The Internet offers lots of solutions, but reaching this goal remains complicated. After googling, I found a link which provided a partial solution. I would like to thank the website author:

http://www.java2s.com/Code/Java/Reflection/Attemptstolistalltheclassesinthespecifiedpackageasdeterminedbythecontextclassloader.htm

Other solutions required deploying external libraries as well, but I was not interested in managing another library in my software just for that purpose.

So the solution was to retrieve all the JARs from the context classloader and loop over them in order to find the classes we are looking for.

Below is a complete Java class solving this issue:

 

import java.io.File;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.URL;
import java.net.URLClassLoader;
import java.net.URLDecoder;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/**
 * @author Philippe Schweitzer dbi services Switzerland
 */
public class ClassFinder {

    public static void main(String[] args) throws ClassNotFoundException {

        List<Class<?>> classes = ClassFinder.getClassesFromPackage("YOUR PACKAGE NAME");

        System.out.println("START ClassList:");
        for (Class<?> c : classes) {
            System.out.println(c.toString());
        }
        System.out.println("END ClassList:");
    }

    /**
     * Attempts to list all the classes in the specified package, as determined
     * by the context class loader.
     *
     * @param pckgname the package name to search
     * @return a list of classes that exist within that package
     * @throws ClassNotFoundException if something went wrong
     */
    public static List<Class<?>> getClassesFromPackage(String pckgname) throws ClassNotFoundException {

        List<Class<?>> result = new ArrayList<Class<?>>();
        List<File> directories = new ArrayList<File>();
        HashMap<File, String> packageNames = null;

        try {
            ClassLoader cld = Thread.currentThread().getContextClassLoader();
            if (cld == null) {
                throw new ClassNotFoundException("Can't get class loader.");
            }

            // Loop over every JAR (and classpath entry) known to the context class loader.
            for (URL jarURL : ((URLClassLoader) cld).getURLs()) {
                System.out.println("JAR: " + jarURL.getPath());

                getClassesInSamePackageFromJar(result, pckgname, jarURL.getPath());

                // getResources() expects a '/' separated path, not a '.' separated package name.
                String path = pckgname.replace('.', '/');
                Enumeration<URL> resources = cld.getResources(path);

                File directory = null;

                while (resources.hasMoreElements()) {
                    String path2 = resources.nextElement().getPath();
                    directory = new File(URLDecoder.decode(path2, "UTF-8"));
                    directories.add(directory);
                }

                if (packageNames == null) {
                    packageNames = new HashMap<File, String>();
                }
                packageNames.put(directory, pckgname);
            }

        } catch (NullPointerException x) {
            throw new ClassNotFoundException(pckgname + " does not appear to be a valid package (Null pointer exception)");
        } catch (UnsupportedEncodingException encex) {
            throw new ClassNotFoundException(pckgname + " does not appear to be a valid package (Unsupported encoding)");
        } catch (IOException ioex) {
            throw new ClassNotFoundException("IOException was thrown when trying to get all resources for " + pckgname);
        }

        // Scan the exploded directories (classes that are not packed in a JAR).
        for (File directory : directories) {
            if (directory.exists()) {
                String[] files = directory.list();

                for (String file : files) {
                    if (file.endsWith(".class")) {
                        try {
                            result.add(Class.forName(packageNames.get(directory) + "." + file.substring(0, file.length() - 6)));
                        } catch (Throwable e) {
                            // ignore classes that cannot be loaded and keep scanning
                        }
                    }
                }
            } else {
                throw new ClassNotFoundException(pckgname + " (" + directory.getPath() + ") does not appear to be a valid package");
            }
        }
        return result;
    }

    /**
     * Adds to the result list all the classes of the given package found in one JAR file.
     *
     * @param result      the list the found classes are appended to
     * @param packageName the package name to search
     * @param jarPath     the path of the JAR file to scan
     */
    private static void getClassesInSamePackageFromJar(List<Class<?>> result, String packageName, String jarPath) {

        JarFile jarFile = null;
        String packagePath = packageName.replace('.', '/');

        try {
            jarFile = new JarFile(jarPath);
            Enumeration<JarEntry> en = jarFile.entries();

            while (en.hasMoreElements()) {
                JarEntry entry = en.nextElement();
                String entryName = entry.getName();

                if (entryName != null && entryName.endsWith(".class") && entryName.startsWith(packagePath)) {
                    try {
                        Class<?> entryClass = Class.forName(entryName.substring(0, entryName.length() - 6).replace('/', '.'));
                        if (entryClass != null) {
                            result.add(entryClass);
                        }
                    } catch (Throwable e) {
                        // do nothing, just continue processing classes
                    }
                }
            }
        } catch (Exception e) {
            // the classpath entry is not a readable JAR file: skip it
        } finally {
            try {
                if (jarFile != null) {
                    jarFile.close();
                }
            } catch (Exception e) {
            }
        }
    }
}

OOW14 Day 4 - Internals of the 12c Multitenant Architecture by Vit Spinka

Thu, 2014-10-02 14:47

This is the session I preferred at Oracle Open World. Well, I'm writing that before going to @ludodba's one, and I'm sure I will then have two preferred sessions... So Vit Spinka presented the internals of the new multitenant architecture. It's always good to play with internals - not only for the geeky fun of it, but also because it helps to understand how it works and to address issues later.

I had investigated the metadata/object links in my blog post (and thanks to Vit for referencing it during his presentation). But I learned from Vit what has changed in the redo logs. In his case, the research on redo log internals is not just curiosity - it is mandatory for his job: he is the principal developer of Dbvisit Replicate, and Dbvisit Replicate reads the redo logs: the MINE process reads them and transforms them into something that can be used by the APPLY process.

So I'll not repeat what is available in his slides:
http://vitspinka.cz/files/CON2282%20Internals%20of%20the%20Oracle%20Database%2012c%20Multitenant%20Architecture.pdf 

Finally, the redo is pretty much the same, except that it adds the container id (only one byte, because we are limited to 252 PDBs). Addressing the files is nothing special, as plugging a PDB is very similar to transportable tablespaces. Addressing the objects is a bit different because we can have the same object id across several PDBs, and this is the reason for introducing the container id in the redo (see the sketch after the list below). But that's not so special.

The thing to remember about the new multitenant architecture is that:

  • it is not a big change for the instance, which still manages the same objects (sessions, cursors, services, buffers, etc.), just with a container id added
  • there is no change for the database files, as transportable tablespaces already introduced 'plugging'
  • all the magic is in the dictionary, in order to have a shared dictionary for Oracle objects and a private dictionary for application objects.
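To illustrate why the container id matters, here is a quick check of my own (not from Vit's slides - the object id below is an arbitrary value): when queried from the root, the same OBJECT_ID can show up in several containers.

-- Run from CDB$ROOT: the same object_id can exist in several PDBs,
-- hence the need for a container id (CON_ID) in the redo records.
select con_id, owner, object_name, object_id
from   cdb_objects
where  object_id = 12345   -- arbitrary value, for illustration only
order  by con_id;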

Day 3 at Oracle Open World 2014 - Cloud: Private or Public?

Thu, 2014-10-02 07:16

One of the main subjects of this year's Oracle OpenWorld was the Cloud. In this post I will share some thoughts on it: is the Cloud a dream, a reality, fog or smoke?

 

Before

Before going to OOW 2014 I did not have a fixed position on the Cloud. I had some questions, for instance about security, like other people I guess. I attended several sessions and I started to write this post two days ago to summarize my thoughts. Fortunately I was in a session on Wednesday where the discussion was "Private Cloud? Public Cloud?", and after that I had to update this post - in a good way.

 

Now

Now that we can choose between a public, a private or even a mixed Cloud, the sky is less... cloudy.

I am a bit more convinced. I would use the Public Cloud for the development environment, as this reduces implementation time and gives flexibility to the developers. Why not also for training, as we do not have to use production data? For QA, clone and production environments I would rather use the Private Cloud for the time being. In both cases we will benefit from great agility, lower expenses, better utilization of resources and so on.

But there are still some questions:

  • Although we can install the Private Cloud like the Public one, what will be the impact on the budget?

  • What will be the impact on our day-to-day work?

  • What will be our role as an integrator working more on the infrastructure layer?

  • Do we have a view of, and can we have valuable discussions with, the people who manage our system in the Public Cloud in case we hit issues? Can a good relationship be built?

  • Today we increase our skills by also working in Dev on the on-premises installation. During the integration phase we sometimes hit issues that have to be solved, which of course avoids having the same problems later in the other environments. How will this work if we use a mixed environment, Public for Dev and Private for Prod?

  • Who will do the load & stress tests if the Public Cloud is used?

  • In a validated system we have to follow strict procedures (e.g. GxP). How is this managed in the Public Cloud? Are we still compliant? Are the people managing the Public Cloud trained on the company's procedures? The last session confirmed that in this case we have to use the Private Cloud.

  • Besides the regulatory rules, what will be the acceptance in the different countries? Will it be the same in Europe as in the USA? Do we have the same culture regarding this subject?

  • Another question, not related to the technology: how should the future young IT people be positioned? What kind of training should they follow? Is this already in place at the universities? Are the teachers aware of those changes?

I got lots of information on the technical side, but I am also interested in the social and human point of view. It would be interesting to have the view of a sociologist or a philosopher, as this new approach will certainly have an impact on our lives - as Big Data will.

I probably missed some information in all this flow and I don't have all the keys. But I think we are at the beginning of a revolution? an evolution? or an opportunity.

Let's see what will happen in the next few years, and think about it in the meantime...

Oracle OEM Cloud Control 12c upgrade to 12.1.0.4

Thu, 2014-10-02 02:12

In this blog post, I will describe how to upgrade from Oracle Enterprise Manager Cloud Control 12.1.0.3 to OEM 12.1.0.4.0. I have already described the main new features of the Cloud Control 12.1.0.4 version in an earlier post (Oracle OEM Cloud Control 12.1.0.4 - the new features). The first prerequisite is to apply patch 11061801 on the 11.2.0.3 repository database, using the classical opatch apply method. Then we can begin the upgrade phase.

First, we should explicitly stop the OMS jvmd and adp engines:

 

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl extended oms jvmd stop -all
Oracle Enterprise Manager Cloud Control 12c Release 3
Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.
Please enter the SYSMAN password:
Stopping all Engines
{}

No engines found for this operation

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl extended oms adp stop -a

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.

No valid registry entry found for verb jv

 

Then we stop the OMS:

 

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl stop oms -all

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.

All rights reserved.

Stopping WebTier...

WebTier Successfully Stopped

Stopping Oracle Management Server...

Oracle Management Server Successfully Stopped

AdminServer Successfully Stopped

Oracle Management Server is Down

 

We stop the management agent:

 

oracle@vmtestoraem12c:/home/oracle/ [agent12c] emctl stop agent

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.

All rights reserved.

Stopping agent ..... stopped.


Finally after unzipping the 12.1.0.4 binary files, we can run the installer:

 

cc1

 

We choose not to receive security updates:

 

cc2

cc3

 

 

We choose to skip the updates:

 

cc4

 

All the prerequisite checks have succeeded :-)

 

cc5

 

We select a One System Upgrade and the Oracle_Home where the 12.1.0.3 version is installed:

 

cc7

 

We select the new Middleware Home:

 

cc8

 

We enter the administration passwords:

 

cc9

 

The installer reminds us to verify that the repository database has been correctly patched. Let's check that it is:

 

Interim patches (1) :

Patch 11061801 : applied on Mon Aug 04 16:52:51 CEST 2014

Unique Patch ID: 16493357

Created on 24 Jun 2013, 23:28:20 hrs PST8PDT

Bugs fixed: 11061801

 

cc10

 

We did not copy the emkey to the repository, so we have to run:

 

oracle@vmtestoraem12c:/u01/app/oracle/MiddleWare_12103/oms/bin/ [oms12c] emctl config emkey -copy_to_repos_from_file -repos_conndesc '"(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=vmtestoraem12c.it.dbi-services.com)(PORT=1521)))(CONNECT_DATA=(SID=OMSREP)))"' -repos_user sysman -emkey_file /u01/app/oracle/MiddleWare_12103/oms/sysman/config/emkey.ora

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.

All rights reserved.

Enter Admin User's Password :

Enter Enterprise Manager Root (SYSMAN) Password :

The EMKey has been copied to the Management Repository.

This operation will cause the EMKey to become unsecure.

 

After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos":

 

cc11

 

We select Yes to let the installer fix the issue automatically:

 

cc12

 

We select Next:

 

cc13

 

We can select additional plugins:

 

cc14

 

We enter the weblogic password:

 

cc15

 

We select install:

 

cc16

 

And finally we run the allroot.sh script connected as root:

 

cc17

 

The upgrade is successful! Let's check the OMS status:

 

oracle@vmtestoraem12c:/u01/app/oracle/MiddleWare_12cR4/oms/ [oms12c] emctl status oms -details

Oracle Enterprise Manager Cloud Control 12c Release 4

Copyright (c) 1996, 2014 Oracle Corporation.

All rights reserved.

Enter Enterprise Manager Root (SYSMAN) Password :

Console Server Host : vmtestoraem12c.it.dbi-services.com

HTTP Console Port : 7789

HTTPS Console Port : 7801

HTTP Upload Port : 4890

HTTPS Upload Port : 4901

EM Instance Home : /u01/app/oracle/gc_inst/em/EMGC_OMS1

OMS Log Directory Location : /u01/app/oracle/gc_inst/em/EMGC_OMS1/sysman/log

OMS is not configured with SLB or virtual hostname

Agent Upload is locked.

OMS Console is locked.

Active CA ID: 1

Console URL: https://vmtestoraem12c.it.dbi-services.com:7801/em

Upload URL: https://vmtestoraem12c.it.dbi-services.com:4901/empbs/upload

WLS Domain Information

Domain Name : GCDomain

Admin Server Host : vmtestoraem12c.it.dbi-services.com

Admin Server HTTPS Port: 7102

Admin Server is RUNNING

Oracle Management Server Information

Managed Server Instance Name: EMGC_OMS1

Oracle Management Server Instance Host: vmtestoraem12c.it.dbi-services.com

WebTier is Up

Oracle Management Server is Up

BI Publisher is not configured to run on this host.

 

Now we have access to the Enterprise Manager Cloud Control 12.1.0.4:

 

cc18

 

The next step consists in upgrading the management agents. From the Setup menu, we select Upgrade Agents:

 

cc19

cc20

 

The management agent is detected:

 

cc21

 

The operation is successful:

 

cc22

 

The upgrade to Enterprise Manager 12.1.0.4 did not cause any problem, and the new version adds a check that verifies the correct patching of the Enterprise Manager repository database.

OOW14 Day 3 - 12c Adaptive Bitmap Pruning

Wed, 2014-10-01 13:10

I'm currently following the session 'Real-World Performance of Star and Snowflake Schemas' with Michael Hallas, Andrew Holdsworth and John Clarke. It's really a good presentation: the Real-World Performance team tries to spread the message about how to design for performance. But today I'll not blog about what I've seen, but about what I've not seen. Everybody talks about those nice 12c features that are useful in BI workloads, such as adaptive joins, adaptive parallel distribution, vector 'in-memory' aggregation, etc. Nobody talks about Adaptive Bitmap Pruning.

If you google for it, there is only one result (at least before this blog post is published), which is the patent covering all those new CBO features that came in 12.1.0.1.

And when I attend a session that shows star transformation and execution plans with and without a temporary table, I'm frustrated not to see anything about that great feature which stabilizes our BI reports on star schemas. I'm preparing our tuning workshop (dates here - 10% discount if you book before the end of the year) and it's something I can't skip when talking about star transformation and bitmap indexes.

So let's have a look at it. If you want a refresher on star transformation, please just wait for the next SOUG newsletter. But if you're already familiar with it, this is for you.

Let's have a look at an execution plan in 12c after running a query on a star schema. It is the same as in 11g, except that we have that grayed 'STATISTICS COLLECTOR'. Star transformation is good when the predicate is selective enough to keep only a few rows. Imagine that the cardinality estimation was wrong and most of the fact rows have the required value. This is what happened here, and the optimizer chose to stop iterating on that bitmap branch: it just ignores the predicate at that step, and the join back to the dimension (the Cartesian join) will filter the rows anyway.

If you check the execution plan with its predicate section, you will see the predicate on the dimension in the two table accesses.

12cAdaptiveBitmapPruning.png

Look at the end. When the statistics collector saw that the threshold had been exceeded, it decided to skip that bitmap branch. This is Adaptive Bitmap Pruning. The bitmap branch is good only if it helps to filter out a lot of rows. If that is not the case, then it is just overhead and it is skipped, falling back - for that branch only - to the behavior we had when star transformation was disabled.

As with the other adaptive plans, the threshold is calculated at parse time.
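If you want to check this on your own system, here is a minimal sketch (my own, not from the session): run the star query, then display its cursor with the '+ADAPTIVE' format so that the STATISTICS COLLECTOR step and the pruned (inactive) plan lines become visible.

-- Sketch: show the adaptive plan of the last statement executed in the session.
-- The '+ADAPTIVE' format (12c) marks the plan lines that were skipped at runtime;
-- add the gather_plan_statistics hint to the query if you also want actual row counts.
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST +ADAPTIVE'));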

See more details about the inflection point in a previous blog post about Adaptive Joins.

 

Solving customer issues at OOW14: Dbvisit Replicate can replicate tables without primary key

Wed, 2014-10-01 06:39

Usually, logical replication of changes relies on the primary key. Each row updated or deleted generates a statement to be applied on the target, which affects only one row because it accesses the row through the primary key. If there is no primary key, we need something else that is unique - at worst, the whole row. But sometimes old applications were designed before being implemented on a relational database and have no uniqueness at all. Is that a problem for logical replication? We will see that Dbvisit Replicate can address it.

Here is the case I encountered at a customer. The application has a master-detail table design, and the detail rows are inserted and deleted all together for the same master key. There is no primary key, and not even anything unique. The only value that might help is a timestamp, but sometimes timestamps do not have sufficient precision to be unique. And anyway, imagine what happens if the system time is set back, or during daylight saving changes.

At dbi services we have very good contacts with our partner Dbvisit, and it's the kind of question that can be addressed quickly by their support. Anyway, I was at Oracle Open World and was able to discuss it directly with the Dbvisit Replicate developers. There is a solution, and it is even documented.

The basic issue is that when the delete occurs, a redo entry is generated for each row that is deleted, and Dbvisit Replicate then generates a statement to apply the same change on the target. But when there are duplicates, the first statement will affect several rows and the next statements will affect no rows (see the sketch below).
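Here is a condensed sketch of what happens on the apply side (my own simplification, using the TESTNOPK demo table from the test case below):

-- The source deleted 4 duplicate rows one by one, but the replicated statement
-- has no key to pinpoint a single row on the target:
delete from TESTNOPK where n = 10 and x = 'x';                  -- first replayed delete: 4 rows
delete from TESTNOPK where n = 10 and x = 'x';                  -- next replayed deletes: 0 rows
-- With the TOO_MANY conflict handler set in step 8 below, ' and rownum = 1' is
-- appended to the statement, so each replayed delete removes exactly one duplicate:
delete from TESTNOPK where n = 10 and x = 'x' and rownum = 1;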

This is the kind of replication complexity that is addressed with conflict resolution. It can be addressed manually: the replication stops when a conflict is detected and continues once we have decided what to do. But we can also set rules to address it automatically when the problem occurs again so that the replication never stops.

Here is the demo, as I tested it before providing the solution to my customer.

Note that it concerns only deletes here but the same can be done with updates.

1. I create a table with 4 identical rows for each value of N:

SQL> connect repoe/repoe
Connected.

SQL> create table TESTNOPK as select n,'x' x from (select rownum n from dual connect by level

Table created.
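The CREATE TABLE statement above was truncated by the HTML rendering of the blog. Here is one way to build such a table - my guess at the intent, not necessarily the author's exact statement - with N going from 1 to 10 and 4 identical rows per value:

create table TESTNOPK as
  select a.n, 'x' x
  from  (select rownum n from dual connect by level <= 10) a,
        (select rownum r from dual connect by level <= 4) b;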

2. Status of replication from the Dbvisit console:


| Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 29 days
MINE IS running. Currently at plog 35 and SCN 796568 (10/01/2014 01:08:04).
APPLY IS running. Currently at plog 35 and SCN 796566 (10/01/2014 01:08:04).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:               100%  Mine:40/40           Unrecov:0/0         Applied:40/40       Conflicts:0/0       Last:01/10/2014 01:08:02/OK
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

3. I delete the rows with value 10:


SQL> select * from TESTNOPK where n=10;
         N X
---------- -
        10 x
        10 x
        10 x
        10 x
SQL> delete from TESTNOPK where n=10;
4 rows deleted.
SQL> commit;
Commit complete.

5. The apply is stopped on a conflict: too many rows affected by the delete.


MINE IS running. Currently at plog 35 and SCN 797519 (10/01/2014 01:10:56).
APPLY IS running. Currently at plog 35 and SCN 796928 (10/01/2014 01:09:08) and 1 apply conflicts so far (last at 01/10/2014 01:10:57) and WAITING on manual resolve of apply conflict id 35010009996.
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:                90%  Mine:44/44           Unrecov:0/0         Applied:40/40       Conflicts:1/1       Last:01/10/2014 01:09:17/RETRY:Command affected 4 row(s).
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

dbvrep> list conflict
Information for conflict 35010009996 (current conflict):
Table: REPOE.TESTNOPK at transaction 0008.003.0000022b at SCN 796930
SQL text (with replaced bind values): delete from "REPOE"."TESTNOPK" where (1=1) and "N" = 10 and "X" = 'x'
Error: Command affected 4 row(s).
Handled as: PAUSE
Conflict repeated 22 times.

6. I resolve the conflict manually, forcing the delete of all rows

dbvrep> resolve conflict 35010009996 as force
Conflict resolution set.

At that point, there are 3 following conflicts that I need to force as well, because the other replayed deletes affect no rows. I don't reproduce them here.

7. Once the conflicts are resolved, the replication continues:

MINE IS running. Currently at plog 35 and SCN 800189 (10/01/2014 01:19:16).
APPLY IS running. Currently at plog 35 and SCN 800172 (10/01/2014 01:19:14).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:               100%  Mine:44/44           Unrecov:0/0         Applied:44/44       Conflicts:4/4       Last:01/10/2014 01:18:21/RETRY:Command affected 0 row(s).
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

dbvrep> list conflict
Information for conflict 0 (current conflict):
No conflict with id 0 found.

8. Now I want to set a rule that manages this situation automatically. I add a 'too many rows' conflict rule to touch only one row for each delete:


dbvrep> SET_CONFLICT_HANDLERS FOR TABLE REPOE.TESTNOPK FOR DELETE ON TOO_MANY TO SQL s/$/ and rownum = 1/
Connecting to running apply: [The table called REPOE.TESTNOPK on source is handled on apply (APPLY) as follows:
UPDATE (error): handler: RETRY logging: LOG
UPDATE (no_data): handler: RETRY logging: LOG
UPDATE (too_many): handler: RETRY logging: LOG
DELETE (error): handler: RETRY logging: LOG
DELETE (no_data): handler: RETRY logging: LOG
DELETE (too_many): handler: SQL logging: LOG, regular expression: s/$/ and rownum = 1/
INSERT (error): handler: RETRY logging: LOG
TRANSACTION (error): handler: RETRY logging: LOG]

9. Now testing the automatic conflict resolution:

SQL> delete from TESTNOPK where n=9;
4 rows deleted.
SQL> commit;
Commit complete.
10. The conflicts are automatically managed:

MINE IS running. Currently at plog 35 and SCN 800475 (10/01/2014 01:20:08).
APPLY IS running. Currently at plog 35 and SCN 800473 (10/01/2014 01:20:08).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:               100%  Mine:48/48           Unrecov:0/0         Applied:48/48       Conflicts:7/7       Last:01/10/2014 01:19:57/OK
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

Now the replication is automatic and the situation is correctly managed.


 oow-imattending-200x200-2225057.gif  

As I already said, Dbvisit is a simple tool but it is nevertheless very powerful. And Oracle Open World is an efficient way to learn: share knowledge during the day, and test it during the night when you are too jetlagged to sleep...

OOW14 Day 2 - Delphix #cloneattack

Tue, 2014-09-30 19:14

Do you know Delphix? The first time I heard of it was from Jonathan Lewis. And from Kyle Hailey of course. So it's not only about agile and virtualization - it's real DBA stuff. As I did yesterday with the Dbvisit #repattack, let's install the demo.

Here is the setup:

  • one source virtual machine with an XE database
  • one target virtual machine with XE installed but no database
  • one virtual machine with Delphix
And what can we do with that? We can clone the databases instantaneously. It's:
  • a virtual appliance managing storage snapshots for instant cloning
  • the snapshots are exposed through direct NFS to be used by the database
  • totally automated database maintenance (create, restore, rename, etc.) through a nice GUI

So what's the point? You want to clone an environment instantaneously: choose the point in time you want and it's done. You can clone 50 databases for your 50 developers. You can rewind your test database to run unit tests in a continuous integration development environment. You can do all that stuff that usually requires so many IT procedures with just a few clicks in the Delphix GUI.

Just as an example, here is my source database and the way I choose the point in time I want to clone:

CaptureDelphix01.PNG

It's running:

CaptureDelphix02.PNG

The #cloneattack is a good way to test things and discuss with others...

I have now @delphix on my laptop installed with @kylehhailey at #oow14. Cont. tomorrow at OTW http://t.co/QJLVhp93jg pic.twitter.com/QgoAgJPXyo

— Franck Pachot (@FranckPachot) September 30, 2014

@kylehhailey #cloneattack: finished it today now playing with clones while listening to @TanelPoder pic.twitter.com/wH3kQKBp8U

— Franck Pachot (@FranckPachot) September 30, 2014

That's some powerful multitasking - awesome @FranckPachot @TanelPoder

— Kyle Hailey (@kylehhailey) September 30, 2014

Day 2 at Oracle Open World - best practices for WebLogic & Cloud Control

Tue, 2014-09-30 15:03

Today, in this post, I will describe some Oracle WebLogic and Cloud Control best practices I have learned in the latest sessions. It's always good to see what is advised by other people who are confronted with the same - or other - challenges.

 

Managing Oracle WebLogic Server with Oracle Enterprise Manager 12c

One session was related to the best practices for managing WebLogic with Cloud Control 12c.

  • Use the administration functions:

With Cloud Control 12c you can now do the WebLogic administration using its console. Starting and stopping the managed servers and applications was already possible, but now you can do more, like configuring resources, deploying applications and so on.
As you are using the Cloud Control console, you can sign in to several target WLS servers, which means you would have to enter the required password each time. By providing the credentials and saving them as the preferred ones (in Preferred Credentials), you avoid entering the password each time.

  • Automate tasks across domains with predefined jobs:

Predefined jobs can be used to automatically start WLST scripts against one or more domains. As with the WLS console, you can record your actions into a .py script, update it for your new targets, create the job and set the schedule. This can obviously be a script for configuration, but also for monitoring or for creating statistics.

  • Automatic response to issues via corrective actions:

By including corrective actions in templates you can apply them to managed servers. If the corrective action fails, you can use rules to send an email in a second step to inform that there is an issue which needs to be solved.

  • Use EMCLI to manage the credentials
  • Use APEX to query the Management Repository for reporting

 

Troubleshooting Performance Issues

Another session where best practices were explained was "Oracle WebLogic Server: Best Practices for Troubleshooting Performance Issues". A very helpful session: all chairs in the room were occupied and some people had to stand, which shows how much it was anticipated.

Some general tips:  

  •  -verbose:gc to find out if the performance issues are related to the garbage collection behaviour
  •  -Dweblogic.log.RedirectStdoutToServerLogEnabled=true
  •  use the Java Flight Recorder (JFR)  
  •  use Remote Diagnostic Agent (RDA)  
  •  use WLDF to create an image of your system  
  •  Thread/heap dumps to see how your application is working

One of the first actions to take is to read the log files, as they show which kind of errors are logged: stuck threads, too many open files and so on.

The same application can behave differently depending on whether WebLogic runs on Linux or on Windows. For instance, a socket can remain in TIME_WAIT for 4 minutes on Linux but only 1 minute on Windows.

In case you encounter OutOfMemory errors, log the garbage collector information

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:-PrintGCTimeStamps

More information can be found in the document referenced by ID 877172.1.

Thread Dump
To analyze your application you can create a thread dump

  •  under Unix/Linux: kill -3
  •  jstack
  •  WLST threadDump()
  •  jcmd print_thread (for Java HotSpot)
  •  jcmd Thread.print (for Java 7)

More information can be found in the document referenced by ID 1098691.1.

Once the thread dump has been created you have to analyze it.
For that, several tools are available

  •  Samurai
  •  Thread Dump Analyzer (TDA)
  •  ThreadLogic

I already knew some of these best practices; one tool I now want to test is ThreadLogic, so that I am trained in case I have to use it in a real situation.

Let's see what will happen in the next days.

1st day at Oracle Open World '14 : news on the upcoming Oracle WebLogic 12.2.1

Mon, 2014-09-29 19:33

Landing on Sunday the 28th after a 13-hour trip, my colleague Franck Pachot and I had just enough time to register, go to the hotel, and come back to the "Welcome Reception" where we could eat something. After a night where I could feel the jet lag :-) we were ready to "participate" in this amazing event, Oracle Open World 2014.

The first session I attended was the keynote, where the new challenges were presented: "moving" 20-year-old applications; building new infrastructures with less budget, as the money goes more and more to the business applications to fulfill user demands and expectations; Big Data, where the analysis but also the delivery of the results has to be fast. To summarize, we are in a period where the infrastructure is changing by using the cloud more and more, but the approach to dealing with the new challenges also has to change to integrate this new digital world.

Another interesting session was the one from Mr. William Lyons about the Oracle WebLogic Server strategy and roadmap. He talked about the Cloud Application Foundation: mobile development productivity, the foundation for Fusion Middleware and applications, high availability, performance, multi-tenancy, cloud management and operations, and so on. He first recapitulated the new features of WebLogic 12.1.2, like the management of Coherence, Oracle HTTP Server and the web tier using only one tool, such as the WLS console, WLST or the OFMW console. He also talked about the database integration with GridLink, RAC, multitenant database, Application Continuity and Database Resident Connection Pool, which improves performance (see the sketch below).
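As a side note (my own sketch, not from the session), the Database Resident Connection Pool mentioned above is started on the database side with the DBMS_CONNECTION_POOL package; clients then opt in by adding (SERVER=POOLED) to their connect descriptor:

-- Start (and later stop) the default Database Resident Connection Pool.
exec dbms_connection_pool.start_pool();
-- exec dbms_connection_pool.stop_pool();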

He then moved on to the new features of 12.1.3, which was released in June 2014. This new version improves functionalities in the Fusion Middleware, Mobile and High Availability areas. Developers can now have a free development license, and they can install the product using a zip distribution which also contains the patches. WebLogic 12.1.3 supports Java EE 7 and Java SE 8.

The next release, planned for 2015, is WebLogic 12.2.1. With this version the multitenancy concept is covered: domain partitions can be used to isolate resources for the different tenants. Regarding Java, it will be fully compliant with Java EE 7 and will support Java SE 8.

On this first day lots of information was ingested, but it will have to be digested over the next weeks :-)

Let's see what will happen in the next days!

OOW14 Day 1 - Dbvisit #repattack

Mon, 2014-09-29 19:02

Oracle Open World is not only about conferences, but also about practice and networking. Today at the OTN lounge I installed the following demos on my laptop:

  • a Dbvisit replicate #repattack environment
  • a Delphix cloning environment #cloneattack

I'll detail the former below and the latter tomorrow, but if you are in San Francisco and missed it, please come tomorrow to the same kind of session at the Oak Table World! You don't even need an OOW registration for that - it's independent but at the same place. Here are the details: http://www.oraclerealworld.com/oaktable-world/agenda/

 

Dbvisit replicate

This is the event:

Tweet:

Become a #RepAttack warrior at #OOW14 Get to the OTN lounge for briefing today from 3:00 pm http://t.co/fJRbOuMPqn

— Dbvisit Software (@dbvisit) September 29, 2014

 

Well, actually I installed everything a bit earlier, as I had the #repattack environment before and I woke up very early because of the jet lag... The installation is straightforward and I monitored it with another tool which I like (and whose vendor we partner with as well): Orachrome Lighty.

Tweet:

I woke up because of #jetlag then I've installed @dbvisit #repattack on my laptop and monitor it with @orachrome pic.twitter.com/EVm1GZBo3l

— Franck Pachot (@FranckPachot) September 29, 2014

 

The idea is to quickly set up a source and a target VirtualBox VM, with Oracle XE and Swingbench on the source, and then set up the replication between them. It is really straightforward and shows that logical replication is not too complex to set up. The OTN lounge was also the occasion to meet the Dbvisit team.

 

Delphix

Here is the setup - I will continue tomorrow for cloning:

Tweet:

I have now @delphix on my laptop installed with @kylehhailey at #oow14. Cont. tomorrow at OTW http://t.co/QJLVhp93jg pic.twitter.com/QgoAgJPXyo

— Franck Pachot (@FranckPachot) September 30, 2014

Documentum upgrade project: D2-Client and missing icons

Sun, 2014-09-28 19:49

The new D2-Client does not correctly display the icon for some formats. This usually happens when icon.css is not up to date with the content formats present in the repository. The solution is to find these formats and update icon.css.

Here is the solution in three simple steps:

 

1) Find all formats described in icon.css

 

grep -B1 formats icon.css | cut -f1 -d"{" | grep SPAN | cut -f2 -d. > format.txt

 

icon.css is located on the application server under "...\webapps\D2-Client\styles\defaults\css".

 

2) Find which formats in the repository are not defined in icon.css using a DQL query

Use the values in format.txt from the previous step to build the DQL query:

 

select distinct a_content_type from dm_document(all)
where a_content_type not in
('unknown',
'blank',
'crtext',
'text',
'mactext',
'pdftext',
'pdf',
.... .
....
'vsd1Large',
'vsd2Large',
'vsd3Large',
'vsd4Large')

 

3) Update icon.css with the missing formats

Let's take an example: For msw2007, I added:

 

SPAN.msw2007 {  BACKGROUND-IMAGE: url(/D2-Client/servlet/GetResource/images/icons/formats/doc-small.gif) }

 

High availability in the Oracle landscape

Wed, 2014-09-24 15:41

It's now several weeks since I attended some events about high availability (HA). But what actually is high availability? According to our friend Wikipedia, HA is based on 3 main principles:

  1. Elimination of Single Point of Failure (SPoF)
  2. Reliable crossover (with minimal or no downtime)
  3. Detection of failures as they occur

If those principles are met, end users may never notice a failure.
The aim of this blog is not to dive into too much detail, but just to give you an overview and provide entry points for further investigation.

HA can be applied to a broad range of elements of the Oracle Fusion Middleware stack, such as:

  • Application programming with ADF
  • Reporting
  • Application server
  • Identity management
  • Supporting Database

Let's see how those elements can take care of HA.

Application Programming with ADF for HA:

As easy as application development is with ADF in JDeveloper, developers still have to consider particular settings to enable the application to run smoothly in a WebLogic cluster and take advantage of the HA features.
BEWARE: by default, when you start a WebLogic managed server with the node manager, it does not take any of these specific settings into account. So the start script usage should be enabled on the node manager.
- The persistence store type should be set to replicated_if_clustered in the weblogic.xml file
- The controller configuration should have "adf-scope-ha-support" set to true in adf-config.xml for the Application Resources
- Application modules should be set up for cluster failover (AM pooling: jbo.dofailover = "true")
- Managed beans and task flow parameters should be serializable
- UI bindings should be kept in a small scope

So if you follow these principles, your ADF application will get the best out of HA on a WebLogic cluster.

Reporting:

There are 2 main tools provided by Oracle for reporting purposes:
- Forms/Reports
- BI Publisher
Both can be integrated into an HA environment, even though the need may not be that big.
There are different challenges, specifically with Forms/Reports, as the same report with a given ID can be generated and delivered by different report servers whereas the client application is looking for a single reference. So it's not very straightforward to create an HA environment for Forms/Reports with replicated servers.

What developers should know about WebLogic Server and HA:

Not only ADF-based Java applications can benefit from a WebLogic cluster: any Java EE application can do so when deployed on one.
There are some settings and design principles to take into account so that the application can switch between the different nodes of the cluster.

There is a broad range of solutions that can apply to the WebLogic cluster:

• Auto Restart
• Session Replication
• Transaction Failover
• Clustered Messaging
• Server Migration
• Clusterware Integration
• Metadata split
• Online Deployment
• Rolling Redeployment
• Online Configuration Change
• Cluster Management
• Rolling Patching
• Shared Services Architecture

Oracle Database 12c combined with WebLogic 12c enables another level of HA with the Application Continuity feature, which allows automated transaction replay and makes developers' lives easier.

You can find more information about WebLogic Cluster and HA in the following Oracle white paper:

http://www.oracle.com/technetwork/middleware/weblogic/learnmore/1534212

Identity and Access Management:

This is one very sensitive subject where HA is key. Without proper availability of your access management, HA in your applications would be almost useless, as users won't be able to use them.

But this is also one of the most complex environments to make highly available in the Oracle landscape, because of all the building blocks involved:
- Oracle Internet Directory (OID)
- Oracle Access Management (OAM)
- Oracle Virtual Directory (OVD)
- Metadata Services (MDS)
- Oracle Platform Security Services (OPSS)
- Oracle Entitlement Server (OES)
... and more

You can find details about HA for Identity and Access management on the Oracle website:

http://docs.oracle.com/cd/E40329_01/doc.1112/e28391/toc.htm

Oracle DB - the basement:

Fusion Middleware applications rely on databases, which also need to be highly available if the application layer wants to provide HA.
The Oracle database provides several solutions for HA, mainly:
- Real Application Clusters (RAC)
- Data Guard
Both can be used separately, but also combined.
HA solutions can also be implemented at the storage and/or hardware level.
Another option is to use third-party solutions like Dbvisit (http://www.dbvisit.com/), which can provide HA on an Oracle Standard Edition and spare the additional costs of an Enterprise Edition.

 

As HA is mostly synonymous with complex environments, I hope you will enjoy setting them up and will please your end users by hiding failures.

How to measure Exadata SmartScan efficiency

Tue, 2014-09-23 03:09

A thread on the OTN Forum about Exadata came to the following question: "But how can I monitor if it is effectively used or not?". This is a common question. There are 3 features exclusive to Exadata, and instance statistics can show their usage. Even better: two of them can be checked on your current (non-Exadata) system, which is good for foreseeing how Exadata can improve your workload.

Let's see how to measure the efficiency of the following features:

  • Have reads eligible for SmartScan
  • Avoid I/O with Storage Index
  • Avoid transfer with offloading

Have reads eligible for SmartScan

First of all, SmartScan occurs only on direct-path reads. If you don't see 'cell smart table scan' and 'cell smart index scans' in your top timed events, then SmartScan can do nothing for you. On a non-Exadata system, the same reads show up as the 'direct path read' wait event.

If those direct-path reads are not a significant part of your DB Time, then you have something else to do before going to Exadata. You should leverage direct-path reads: full table scans, parallel query, etc.

Then when you are on Exadata and 'cell smart table scan' and 'cell smart index scans' are used, then you can check the proportion of reads that are actually using SmartScan.

SmartScan input is: 'cell physical IO bytes eligible for predicate offload'. This is the amount of reads (in bytes) that are going to the SmartScan code. You have the total amount of reads as 'physical read total bytes' so you can compare it to know which part of your reads is subject to SmartScan.

If 'cell physical IO bytes eligible for predicate offload' / 'physical read total bytes' is small, then you have something to tune here. You want to do direct-path reads and you want to see 'TABLE ACCESS STORAGE' in the execution plan.

Not yet in Exadata? The Performance Analyzer can simulate it. The statistic is 'cell simulated physical IO bytes eligible for predicate offload.'

Avoid I/O with Storage Index

When you know that SmartScan is used or can be used on a significant part of your reads, then the first thing you want to do is to avoid physical I/O. Among the 'cell physical IO bytes eligible for predicate offload', some reads will not require disk I/O at all, thanks to Storage Indexes. You have the volume in 'cell physical IO bytes saved by storage index'. Just compare that with the eligible volume and you know the amount of disk reads that have been saved by Storage Indexes. That is the most efficient optimization of SmartScan: you don't have to read the blocks, you don't have to uncompress them, you don't have to filter them, you don't have to transfer them...

Avoid transfer with offloading

Then there is offloading proper. The previous feature (Storage Indexes) addressed I/O elimination, which is the key for performance. Offloading addresses the transfer from storage to database servers, which is the key for scalability.

In the last decade, we replaced a lot of direct-attached disks with SANs. That was not for performance reasons: it was for maintainability and scalability. Having a shared storage system helps to allocate disk space when needed, get good performance by striping, and get high availability by mirroring. The only drawback is the transfer time, which is higher than with direct-attached disks.

Exadata still has the scalable architecture of the SAN, but removes the transfer bottleneck with offloading (in addition to the fast interconnect, which is very efficient). What can be filtered early on the storage cells does not have to be transferred: columns not in the select clause, and rows filtered out by the where (or join) clause predicates.

And you can measure it as well. When you measure it on non-Exadata with the Performance Analyzer, you compare the SmartScan output, which is 'cell simulated physical IO bytes returned by predicate offload', to the SmartScan input, 'cell simulated physical IO bytes eligible for predicate offload'. And this is a good estimation of the efficiency you can expect when going to Exadata.

When you are on Exadata, that may be different. Compressed data has to be uncompressed in order to apply the predicates and projections at the storage cells. So the predicate/projection offloading input is 'cell IO uncompressed bytes', and you compare that to 'cell physical IO interconnect bytes returned by smart scan'.

Summary

If you want to see Exadata SmartScan efficiency, just check an AWR report and compare the following:

  • 'cell physical IO bytes eligible for predicate offload' / 'physical read total bytes' (goal: high %)
  • 'cell physical IO bytes saved by storage index' / 'cell physical IO bytes eligible for predicate offload' (goal: high %)
  • 'cell physical IO interconnect bytes returned by smart scan' / 'cell IO uncompressed bytes' (goal: small %)

 

You probably wonder why I don't use the 'smart scan efficiency ratio' that we find in various places. It is often wrong for two reasons:

  • It compares 'cell physical IO interconnect bytes returned by smart scan' to 'cell physical IO interconnect bytes'. But the latter includes the writes as well, and because of ASM mirroring, writes are multiplied when measured at the interconnect level.

  • 'cell physical IO interconnect bytes returned by smart scan' can't be compared with 'physical read total bytes' because the former has some data uncompressed.

For those reasons, a single ratio cannot cover all the SmartScan features.

This is why I always check the 3 pairs above in order to get a relevant picture. And two of them are available with the simulation mode (I'll blog about it soon).
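
If you prefer to query the instance statistics directly rather than read them from an AWR report, here is a minimal sketch against V$SYSSTAT (cumulative values in bytes since instance startup; on a non-Exadata system the 'cell simulated ...' statistics mentioned earlier replace their Exadata counterparts):

-- Cumulative values for the statistics used in the three ratios above
select name, value
from v$sysstat
where name in (
  'physical read total bytes',
  'cell physical IO bytes eligible for predicate offload',
  'cell physical IO bytes saved by storage index',
  'cell IO uncompressed bytes',
  'cell physical IO interconnect bytes returned by smart scan'
)
order by name;

You then compute the three ratios above from these values, or compare deltas between two snapshots for a given time window.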

 

 

SQL Server 2014: classic commit vs commit with delayed durability & I/Os

Mon, 2014-09-22 23:43

When you learn about SQL Server, you will often hear that a commit is a synchronous operation and that you can trust it. In this blog post, I will provide some details about what we mean by synchronous behavior, because sometimes, when I talk about the new delayed durability feature provided by SQL Server 2014, there is some confusion. If you want more details on this new feature, please read the blog post of my colleague Stéphane Haby here. A common shortcut is the following: writing to the transaction log is synchronous, while writing with the new delayed durability feature is asynchronous.

First of all, you probably know that the buffer manager guarantees that the transaction log is written before the changes to the database are written. This is the famous protocol called Write-Ahead Logging (aka WAL). Log records are not written directly to disk but first into the log buffer, and then flushed to disk in a purely asynchronous manner. However, at commit time, the related thread must wait for the writes to complete up to the commit log record in the transaction log. This is the synchronous part of the commit operation, required to meet the WAL protocol.

On the other hand, the new delayed durability feature allows the commit operation to be asynchronous (like writing to the transaction log), but the big difference is that the related thread doesn't have to wait until the commit log record is written to the transaction log. This new feature brings some performance improvements but, as a caveat, introduces a risk of data loss.

We can prove that both commit operations write asynchronously by using either the process monitor tool or by using a debugger and trying to catch the part of the code responsible for writing into the transaction log file.

I will use the following T-SQL script for this demonstration:

--> Commit transaction (without delayed durability option)

USE AdventureWorks2012;
GO

-- Ensure DELAYED_DURABILITY is OFF for this test
ALTER DATABASE adventureworks2012 SET DELAYED_DURABILITY = DISABLED;
GO

-- Create table t_tran_delayed_durability
IF OBJECT_ID(N't_tran_delayed_durability', 'U') IS NOT NULL
       DROP TABLE t_tran_delayed_durability;
GO

create table t_tran_delayed_durability
(
       id int identity
);
GO

-- insert 1000 small transactions
declare @i int = 1000

while @i > 0
begin
       insert t_tran_delayed_durability default values

       set @i = @i - 1;
end;

 

--> Commit transaction (with delayed durability enabled)

-- Ensure DELAYED_DURABILITY is ON for this test
ALTER DATABASE adventureworks2012 SET DELAYED_DURABILITY = ALLOWED;
GO

-- Create table t_tran_delayed_durability
IF OBJECT_ID(N't_tran_delayed_durability', 'U') IS NOT NULL
       DROP TABLE t_tran_delayed_durability;
GO

create table t_tran_delayed_durability
(
       id int identity
);
GO

-- insert 1000 small transactions
declare @i int = 1000

while @i > 0
begin
       begin tran tran_1
       insert t_tran_delayed_durability default values
       commit tran tran_1 with (DELAYED_DURABILITY = on)

       set @i = @i - 1;
end;

 

Below, you will find an interesting picture of the process monitor trace output that shows the SQL Server file system activity that writes to the transaction log file in both cases.

 

--> Commit transaction (without delayed durability option)

 

blog_17_1_procmon_normal_transaction

 

You will notice that SQL Server uses the WriteFile() function to write to the transaction log for each commit operation (4096 bytes each). I will only show you a sample of the output, but you can imagine the final number of records you can get here. If we take a look at the Process Monitor stack, you will notice that SQL Server uses the WriteFile() Windows function located in the Kernel32.lib library to write to the transaction log with an overlapped structure (in other words, asynchronous I/O).

 

blog_17_3_procmon_stack

 

This test confirms what Bob Dorr explains in the Microsoft article about SQL Server I/Os and transaction log I/O.

 

--> Commit transaction (with delayed durability enabled)

 

blog_17_1_procmon_delayed_transaction

 

In this case, the same function is used by SQL Server, with one big difference: SQL Server groups the IOs into chunks (in my case 16K, 48K, and 60K) before writing to disk. Clearly, there is less activity here (in my case 18 lines against approximately 1000 lines for the first test).

We can also attach a debugger (for instance WinDbg) to the SQL Server process and set a breakpoint in the Kernel32!writefile() function for the calling thread in order to have more details about the execution stack. Note that the process monitor stack showed the module KERNELBASE.dll for the WriteFile() function but as mentioned by this Microsoft article kernelbase.dll gets functionality from kernel32.dll and advapi32.dll.

 

blog_17_1_windbg_stack_writefile

 

Both commit operations show the same stack except of course the number of executions.

To summarize, I wanted to show you that both commit operations (with and without delayed durability) use asynchronous IO to write to the transaction log file. The big difference is that with the delayed durability option, SQL Server improves the log write performance by deferring and grouping the IOs into chunks of up to 60K before writing them to disk. I hope this will help you understand more about SQL Server commit operations.

Oracle OEM Cloud Control 12.1.0.4 - agent upgrade & patch

Mon, 2014-09-22 20:09

The new Oracle OEM Cloud Control 12.1.0.4 release makes it necessary for the DBA to migrate the old agent version to 12.1.0.4. If your infrastructure has a huge number of agents and you want to apply the agent patches to the upgraded agents, this might be a very time-consuming job. However, there is a way to perform the whole operation in just one shot.

In my example, we have an agent in version 12.1.0.3:

 

oracle@vmtestfusion01:/u00/app/oracle/agent12c/core/12.1.0.3.0/OPatch/ [agent12c] ./opatch lsinventory

Oracle Interim Patch Installer version 11.1.0.10.0

Copyright (c) 2013, Oracle Corporation.

All rights reserved.

Oracle Home       : /u00/app/oracle/agent12c/core/12.1.0.3.0

Central Inventory : /u00/app/oraInventory  

from           : /u00/app/oracle/agent12c/core/12.1.0.3.0/oraInst.loc

OPatch version   : 11.1.0.10.0

OUI version       : 11.1.0.11.0

Log file location : /u00/app/oracle/agent12c/core/12.1.0.3.0/cfgtoollogs/opatch/opatch2014-09-02_08-00-36AM_1.log

OPatch detects the Middleware Home as "/u00/app/oracle/Middleware/11g"

Lsinventory Output file location : /u00/app/oracle/agent12c/core/12.1.0.3.0/cfgtoollogs/opatch/lsinv/lsinventory2014-09-02_08-00-36AM.txt

Installed Top-level Products (1):

EM Platform (Agent)                                                 12.1.0.3.0

There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch 10203435     : applied on Sat Jun 22 08:51:24 CEST 2013

Unique Patch ID: 15915936.1

Created on 7 Feb 2013, 18:06:13 hrs PST8PDT

Bugs fixed:     10203435

Patch 16087066     : applied on Sat Jun 22 08:51:22 CEST 2013

Unique Patch ID: 15928288  

Created on 4 Feb 2013, 04:52:18 hrs PST8PDT  

Bugs fixed:     13583799, 6895422

OPatch succeeded.

 

In the OMS environment, we have to download and copy the agent-side patches to $OMS_HOME/install/oneoffs/12.1.0.4.0/Generic.

In my example, I downloaded the 19002534 EM DB plugin bundle patch 12.1.0.6.1 (agent side):

 

oracle@vmtestoraem12c:/u01/app/oracle/MiddleWare_12cR4/oms/install/oneoffs/12.1.0.4.0/Generic/ [oms12c] ls

p19002534_121060_Generic.zip

 

The agent upgrade procedure will use this directory to apply the patch.

Let's upgrade the agent from 12.1.0.3 to 12.1.0.4 by using the Cloud Control console:

 

ag1

 

Select the agent to be upgraded:

 

ag2_copy

 

The new job screen lists the different steps:

 

ag3_copy

 

In the log file, we can see the patch being applied:

 

Tue Sep 2 08:07:26 2014 -

Found following valid patch files from the patch location which will be considered in this patching session :

Tue Sep 2 08:07:26 2014 - p19002534_121060_Generic.zip

Tue Sep 2 08:07:26 2014 - /u00/app/oracle/agent12c/core/12.1.0.4.0/bin/unzip -o p19002534_121060_Generic.zip -d /u00/app/oracle/agent12c/oneoffs >> /u00/app/oracle/agent12c/core/12.1.0.4.0/cfgtoollogs/agentDeploy/applypatchesonapplicablehome2014-09-02_08-07-26.log 2>&1

Archive: p19002534_121060_Generic.zip  

creating: /u00/app/oracle/agent12c/oneoffs/19002534/  

creating: /u00/app/oracle/agent12c/oneoffs/19002534/etc/  

creating: /u00/app/oracle/agent12c/oneoffs/19002534/etc/config/

inflating: /u00/app/oracle/agent12c/oneoffs/19002534/etc/config/actions.xml

…………

 

By checking the agent inventory, we can verify that the newly upgraded agent has received the EM DB PLUGIN BUNDLE PATCH 12.1.0.6.1:

 

[agent12c] opatch lsinventory -oh /u00/app/oracle/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0/

Oracle Interim Patch Installer version 11.1.0.10.4

Copyright (c) 2014, Oracle Corporation. All rights reserved.

Oracle Home       : /u00/app/oracle/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0

Central Inventory : /u00/app/oraInventory

   from          : /u00/app/oracle/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0//oraInst.loc

OPatch version   : 11.1.0.10.4

OUI version       : 11.1.0.12.0

Log file location : /u00/app/oracle/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0/cfgtoollogs/opatch/opatch2014-09-02_10-09-32AM_1.log

OPatch detects the Middleware Home as "/u00/app/oracle/Middleware/11g"

Lsinventory Output file location : /u00/app/oracle/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0/cfgtoollogs/opatch/lsinv/lsinventory2014-09-02_10-09-32AM.txt

Installed Top-level Products (1):

Enterprise Manager plug-in for Oracle Database                       12.1.0.6.0

There are 1 products installed in this Oracle Home.

Interim patches (1) :

Patch 19002534     : applied on Tue Sep 02 10:05:37 CEST 2014

Unique Patch ID: 17759438

Patch description: "EM DB PLUGIN BUNDLE PATCH 12.1.0.6.1 (AGENT SIDE)"

   Created on 17 Jun 2014, 09:10:22 hrs PST8PDT

   Bugs fixed:

     19002534, 18308719

 

This feature is very useful for massive agent upgrades, because the agent is upgraded and, in the same operation, the bundle patch is applied. You are also able to use the patch plan to apply bundle patches to multiple agents in one operation.

The SQL Server DBA's essential toolkit list

Mon, 2014-09-22 02:01

This week, I attended SQLSaturday 2014 in Paris. During the pre-conference on Thursday, I followed Isabelle Van Campenhoudt for her SQL Server Performance Audit session. This session took the form of experience sharing between attendees: together, we tried to list the most important software, tools, features, and scripts which help a SQL Server DBA in his daily work. In this blog, I want to share our final list with you.

 

Windows Server Level: Hardware & Applications


CrystalDiskMark

CrystalDiskMark is a free disk benchmarking tool. It can be downloaded here.

 

SQLIO

SQLIO is another free disk benchmarking tool. It can be downloaded here.

 

Windows Performance Monitor (PerfMon)

PerfMon is a Windows native tool which collects log data in real time in order to examine how programs running on the computer affect the performance.

PerfMon provides a lot of counters which measure the system state or the activity.

You can learn more on TechNet.

You can find the most important counters for SQL Server here.

 

Performance Analysis of Logs (PAL)

PAL is an Open Source tool built on top of PerfMon. It reads and analyzes the main counters, looking for known thresholds.

PAL generates an HTML report which alerts when thresholds are reached.

PAL tool can be downloaded on CodePlex.

 

Microsoft Assessment and Planning (MAP)

MAP is a Microsoft toolkit which provides hardware and software information and recommendations for deployment or migration process for several Microsoft technologies (such as SQL Server or Windows Server).

MAP toolkit can be downloaded on TechNet.

 

SQL Server Level: Configuration & Tuning

 

Dynamic Management Views and Functions (DMV)

DMVs are native views and functions of SQL Server which return server state information about a SQL Server instance.

You can learn more on TechNet.
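
As a quick illustration, a classic DMV query lists the top waits accumulated since the last instance restart (a generic example, not tied to any specific workload):

-- Top 10 wait types since instance startup, a typical first look at a server
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0 AS wait_time_s
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;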

 

sp_Blitz (from Brent Ozar)

It is a free script which checks SQL Server configuration and highlights common issues.

sp_Blitz can be found on Brent Ozar website.
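
Once the stored procedure has been installed (typically in the master database), running it is a one-liner; the optional parameters are documented on the same page:

-- Returns a prioritized list of findings for the whole instance
EXEC dbo.sp_Blitz;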

 

Glenn Berry's SQL Server Performance

It provides diagnostic scripts for every SQL Server version since SQL Server 2005.

These scripts can be downloaded here.

 

Enterprise Policy Management (EPM) Framework

EPM Framework is based on Policy-Based Management. It is a reporting solution which tracks SQL Server instances that do not meet the specified requirements. It works on all instances of SQL Server since SQL Server 2000.

You can learn more on CodePlex.

 

SQL Server Level: Monitoring & Troubleshooting

 

SQL Profiler

SQL Profiler is a rich interface integrated into SQL Server, which allows you to create and manage traces to monitor and troubleshoot a SQL Server instance.

You can learn more on TechNet.

 

Data Collector

Data Collector is a SQL Server feature introduced in SQL Server 2008, and available in all versions.

It gathers performance information from multiple instances for performance monitoring and tuning.

You can learn more on TechNet.

 

Extended Events

Extended Events is a monitoring system integrated into SQL Server. It helps with troubleshooting and identifying performance problems.

You can learn more on TechNet.
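
As a small, hedged example (the session and file names are arbitrary; the event, action, and target names exist in SQL Server 2008 and later), a session capturing statements running longer than one second could look like this:

-- Capture completed statements that ran longer than 1 second (duration is in microseconds)
CREATE EVENT SESSION [long_running_statements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text)
    WHERE (duration > 1000000)
)
ADD TARGET package0.event_file
(
    SET filename = N'long_running_statements.xel'
);
GO

ALTER EVENT SESSION [long_running_statements] ON SERVER STATE = START;
GO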

 

SQL Nexus

SQL Nexus is an Open Source tool that helps you identify the root cause of SQL Server performance issues.

It can be downloaded on CodePlex.

 

SQL Server Level: Maintenance

 

SQL Server Maintenance Solution

It is a set of scripts for running backups, integrity checks, and index and statistics maintenance on all editions of Microsoft SQL Server since SQL Server 2005.

This solution can be downloaded on Ola Hallengren's website.
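
As an illustration, a full backup of all user databases with the solution's DatabaseBackup procedure could look like this (parameter names are taken from the solution's documentation; the backup path is a placeholder):

-- Full backup of all user databases, verified, keeping 24 hours of backup history on disk
EXECUTE dbo.DatabaseBackup
    @Databases   = 'USER_DATABASES',
    @Directory   = N'D:\Backup',
    @BackupType  = 'FULL',
    @Verify      = 'Y',
    @CleanupTime = 24;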

 

 

Conclusion

This blog does not claim to cover all DBA needs, but it tries to cover most of them. You will notice that all of these tools are free and recognized by the DBA community as reliable and powerful.

I hope this will help you.

For information, you can learn how to use these tools in our SQL Server DBA Essentials workshop.

Documentum upgrade project: D2-Client, facets and xPlore

Sun, 2014-09-21 19:57

To enhance the search capability, we had to configure xPlore to use the new customer attributes as facets, and configure D2 to use both the default and the new facets.

  Configuring xPlore to use facets with the customer attributes
  • Stop the Index Agent and Server
  • Update indexserverconfig.xml by adding the following line (e.g.):

 

 xml-code

 

  • Keep only the indexserverconfig.xml file in $DSSEARCH_HOME/config
  • Remove $DSSEARCH_HOME/data/*
  • Start the Index Agent and Server
  • Start a full reindexing
  • Once everything is indexed, set the index to normal mode

 

Necessary tests

You should do two tests before configuring the D2-Client.

 

1. On the content server:

 

java com.emc.d2.api.config.search.D2FacetTest -docbase_name test67 -user_name admin -password xxxx -full_text -facet_names dbi_events

 

2. On the xPlore server:

  • Check if the new lines have been validated by executing $DSEARCH_HOME/dsearch/xhive/admin/XHAdmin
  • Navigate to xhivedb/root-library/dsearch/data/default
  • Under the Indexes Tab, click the "Add Subpaths" button to open the "Add sub-paths to index" window where you can see in the Path column the added customer attributes

 

Configure the search in D2-Config
  • Launch D2-Config
  • Select Interface and then the Search sub-menu
  • Tick  "Enable Facets" and enter a value for "Maximun of result by Facet"

 

D2-Config

 

Once this is done, you are able to use the facets with the D2-Client.

Improving your SharePoint performance using SQL Server settings (part 2)

Sun, 2014-09-21 17:36

Last week, I attended SQLSaturday 2014 in Paris and participated in a session on SQL Server optimization for SharePoint by Serge Luca. This session tried to list the best practices and recommendations for database administrators in order to increase SharePoint performance. This blog post is based on that session and is meant as a sequel to my previous post on Improving your SharePoint performance using SQL Server settings (part 1).

 

SQL Server instance

It is highly recommended to use a dedicated SQL Server instance for a SharePoint farm and to set LATIN1_GENERAL_CI_AS_KS_WS as the instance collation.
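
To double-check an existing instance, the server-level collation can be queried directly (nothing SharePoint-specific here):

-- Returns the instance-level collation
SELECT SERVERPROPERTY('Collation') AS instance_collation;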

 

Setup Account permissions

You should give the Setup Account the following permissions in your SQL Server instance:

  • securityadmin server role

  • dbcreator server role

  • db_owner database role on the databases used by the Setup Account (see the sketch below)
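
A hedged T-SQL sketch of the server-level part (the login name is a placeholder; the ALTER SERVER ROLE syntax requires SQL Server 2012 or later, older versions use sp_addsrvrolemember instead):

-- Grant the SharePoint Setup Account the recommended server roles
USE [master];
GO
CREATE LOGIN [DOMAIN\sp_setup] FROM WINDOWS;
GO
ALTER SERVER ROLE [securityadmin] ADD MEMBER [DOMAIN\sp_setup];
ALTER SERVER ROLE [dbcreator]     ADD MEMBER [DOMAIN\sp_setup];
GO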

 

Alias DNS

It is recommended to use a DNS alias to connect your SharePoint servers to the SQL Server instance. It simplifies maintenance and makes it easier to move SharePoint databases to another server.

 

Disk Priority

When you plan to spread your SharePoint databases across different disks, you might wonder how to maximize the performance of your system.

This is a possible disk organization (from fastest to slowest):

  • Tempdb data and transaction log files

  • Content database transaction log files

  • Search database data files (except Admin database)

  • Content database data files

 

Datafiles policy

You should use several data files for Content and Search databases, as follows (see the sketch after this list):

  • distribute equally-sized data files across separate disks

  • the number of data files should be lower than the number of processors

Multiple data files are not supported for other SharePoint databases.
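
Here is a hedged sketch of adding a second, equally-sized data file to a content database (database name, logical file name, and path are placeholders):

-- Add a second data file of the same size to spread I/O across disks
ALTER DATABASE [WSS_Content_Demo]
ADD FILE
(
    NAME       = N'WSS_Content_Demo_2',
    FILENAME   = N'E:\SQLData\WSS_Content_Demo_2.ndf',
    SIZE       = 10GB,
    FILEGROWTH = 1GB
);
GO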

 

Content databases size

You should avoid content databases bigger than 200 GB. Databases bigger than 4 TB are not supported by Microsoft.
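
To keep an eye on this threshold, a simple query lists the allocated size per database (generic T-SQL, not SharePoint-specific):

-- Total allocated size (data + log) per database, in GB
SELECT d.name,
       CAST(SUM(mf.size) * 8 / 1024.0 / 1024.0 AS DECIMAL(10,2)) AS size_gb
FROM sys.master_files AS mf
JOIN sys.databases    AS d ON d.database_id = mf.database_id
GROUP BY d.name
ORDER BY size_gb DESC;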

 

Conclusion

SharePoint is quite abstract for SQL Server DBAs because it requires specific configurations.

As a result, you cannot guess the answers: you have to learn about the subject.