
Yann Neuhaus

All blog entries from Yann Neuhaus

Les journées SQL Server 2014 - un event qui devrait vous intéresser!

Fri, 2014-10-24 02:05

As you may already know, the biggest French-speaking SQL Server event is coming soon: Les journées SQL Server 2014, organized by the GUSS (Groupement des Utilisateurs SQL Server), takes place in Paris on December 1 and 2, 2014. dbi services will be there - what about you?


Les journées SQL Server 2014


Here is the event website:

The two days cover SQL Server 2014 administration and development as well as BI.

On this occasion, I am happy to announce that dbi services will be present with two sessions:


  • Monday 01/12/2014, 10:30 am: Infrastructure and AlwaysOn - by David Barbarin

A session based on field experience with SQL Server AlwaysOn architectures and on-premise availability groups at various customers. The topics we will address are many and varied, covering both the implementation phases of an AlwaysOn architecture and day-to-day operations.


  • Tuesday 02/12/2014, 10:30 am: Security via policies - by Stéphane Haby

Building on my experience in the banking sector (Swiss, of course), I will present a simple way to control security via policies. Which types of controls should be put in place at the instance level and at the database level? How do you manage them across several servers? How do you get nice little reports or e-mails in return?

Let's try to answer together all those little questions that come up during security audits.


You will find the full session agenda here. dbi services will also have a booth where you can come and talk with our SQL Server experts.

Do not hesitate to join us at this major French-speaking event! Besides its beauty, Paris is not that far from Switzerland. ;-)

SQL Server failover cluster, VSphere, & SCSI-3 reservation nightmares

Wed, 2014-10-22 22:53

When I have to install a virtualized SQL Server FCI at a customer site as a SQL Server consultant, the virtualized environment is usually already in place. I guess this is the same for most database consultants. Since we therefore lack practice, I have to admit that we do not always know the right configuration settings to apply to the virtualization layer in order to run our SQL Server FCI architecture correctly.

A couple of days ago, I had an interesting case where I had to help my customer to correctly configure the storage layer on VSphere 5.1. First, I would like to thank my customer because I seldom have the opportunity to deal with VMWare (other than via my personal lab).

The story begins with a failover test that randomly failed on a SQL Server FCI after switching the SQL Server volumes from VMFS to RDM in physical compatibility mode. We had to switch because the first configuration was installed as a CIB (Cluster-in-Box) configuration. As you certainly know, a CIB does not provide a full additional layer of high availability on top of VMWare, because all the virtual machines run on the same host. So we decided to move to a CAB (Cluster-across-Box) configuration, which is more reliable.

In the new configuration, a failover randomly triggered a Windows error 170 with this brief description: "The resource is busy". At this point, I suspected that the SCSI-3 reservation was not performed correctly, but the cluster validation report didn't show any errors concerning the SCSI-3 reservation. I then decided to generate the cluster log to see if I could find more information about my problem - and here is what I found:


00000fb8.000006e4::2014/10/07-17:50:34.116 INFO [RES] Physical Disk : HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='?scsi#disk&ven_dgc&prod_vraid#5&effe51&0&000c00#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
00000fb8.000016a8::2014/10/07-17:50:34.116 INFO [RES] Physical Disk : ResHardDiskArbitrateInternal request Not a Space: Uses FastPath
00000fb8.000016a8::2014/10/07-17:50:34.116 INFO [RES] Physical Disk : ResHardDiskArbitrateInternal: Clusdisk driver handle or event handle is NULL.
00000fb8.000016a8::2014/10/07-17:50:34.116 INFO [RES] Physical Disk : HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='?scsi#disk&ven_dgc&prod_vraid#5&effe51&0&000d00#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
00000fb8.000006e4::2014/10/07-17:50:34.117 INFO [RES] Physical Disk : Arbitrate - Node using PR key a6c936d60001734d
00000fb8.000016a8::2014/10/07-17:50:34.118 INFO [RES] Physical Disk : Arbitrate - Node using PR key a6c936d60001734d
00000fb8.000006e4::2014/10/07-17:50:34.120 INFO [RES] Physical Disk : HardDiskpPRArbitrate: Fast Path arbitration...
00000fb8.000016a8::2014/10/07-17:50:34.121 INFO [RES] Physical Disk : HardDiskpPRArbitrate: Fast Path arbitration...
00000fb8.000016a8::2014/10/07-17:50:34.122 WARN [RES] Physical Disk : PR reserve failed, status 170
00000fb8.000006e4::2014/10/07-17:50:34.122 INFO [RES] Physical Disk : Successful reserve, key a6c936d60001734d
00000fb8.000016a8::2014/10/07-17:50:34.123 ERR   [RES] Physical Disk : HardDiskpPRArbitrate: Error exit, unregistering key...
00000fb8.000016a8::2014/10/07-17:50:34.123 ERR   [RES] Physical Disk : ResHardDiskArbitrateInternal: PR Arbitration for disk Error: 170.
00000fb8.000016a8::2014/10/07-17:50:34.123 ERR   [RES] Physical Disk : OnlineThread: Unable to arbitrate for the disk. Error: 170.
00000fb8.000016a8::2014/10/07-17:50:34.124 ERR   [RES] Physical Disk : OnlineThread: Error 170 bringing resource online.
00000fb8.000016a8::2014/10/07-17:50:34.124 ERR   [RHS] Online for resource AS_LOG failed.


An arbitration problem! It seems related to my first guess, doesn't it? Unfortunately, I didn't have access to the vmkernel.log to see potential reservation conflicts. After that - and this is certainly the funny part of this story (and probably the trigger for this article) - I took a look at the multi-pathing configuration of each RDM disk. The reason is that I remembered some conversations with one of my friends (he will surely recognize himself ;-)) in which we talked about SCSI-3 reservation issues with VMWare.

As a matter of fact, the path selection policy was configured to Round Robin here. According to VMWare KB1010041, PSP_RR is not supported with VSphere 5.1 for Windows failover clusters and shared disks. It is, however, the default configuration when creating RDM disks on the EMC VNX storage used by my customer. After changing this setting for each shared disk, the problem no longer occurred!
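For reference - and as an assumption on my side, since the exact commands were not part of my notes - the path selection policy of a device can be checked and changed from the ESXi shell with esxcli; the naa identifier below is a placeholder for the real device ID:

```shell
# List the multipathing settings of one device (placeholder device ID)
esxcli storage nmp device list -d naa.600601601234567890

# Change the path selection policy from Round Robin to Most Recently Used
esxcli storage nmp device set -d naa.600601601234567890 --psp VMW_PSP_MRU
```

The fixed policy to use (VMW_PSP_MRU or VMW_PSP_FIXED) should follow the storage vendor's recommendation for your array.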

My customer asked about the difference between VMFS and RDM disks. I don't claim to be a VMWare expert - I'm not - but I know that database administrators and consultants can no longer afford to ignore how VMWare (or Hyper-V) works.

Fortunately, most of the time there will be virtualization administrators with strong skills - but sometimes not, and in that case you may feel alone when facing such a problem. So the brief answer I gave to the customer was the following: if we did not use physical mode RDMs but VMDKs or virtual mode RDMs instead, the SCSI reservation would be translated into a file lock. In a CIB configuration this is not a problem, but it is in a CAB configuration, as you can imagine. Furthermore, using PSP_RR with versions older than VSphere 5.5 can release the reservation and cause issues like the one described in this article.

Wishing you a happy and problem-free virtualization!

Documentum upgrade project: from D2-Config 3.1 to D2-Config 4.1

Mon, 2014-10-20 22:20

During a migration from D2-Config 3.0 to D2-Config 4.1, we encountered 2 issues. In order to have a clean installation and the assurance of being able to back up the configuration, we decided to perform a full config export from D2-Config 3.0 and a full import with a configuration reset into D2-Config 4.1.


1. Configuration export

The first issue we experienced was related to docapps existing in the repository. As you can see in the following screenshot, D2-Config crashes very quickly.




But we quickly found a solution: we just had to deselect the docapps check boxes and wait a little bit.




2. Configuration update

After the first import, we discovered that some conditions in the menu management were no longer manageable, although they still applied to D2-Client.

Actually, a class name is displayed - e.g. “eu.c6.web.conditions.permits.HasPermit” - instead of the real condition: “User has minimum “write” permit on the selection”.




We got this result because no migration path was anticipated between the 2 releases of D2-Config, and some configuration items kept their old class names...

Class names now have to start with “com.emc” instead of “eu.c6”.


In fact (and this was confirmed by C), we need to change the menu, toolbar, and tree configuration items manually.

Manually here means exporting the full configuration and performing the changes directly inside the zip-archived XML files of the related configuration items. After a quick assessment, we counted around 100 occurrences to be changed in our case. That was too much to be done during an upgrade, so we decided to automate the process.

We used a tailor-made Linux script to do that:



$> ./ <config archive name>



After execution, the result was a new D2-Config archive, ready to be deployed, with “UPDATED_” as a prefix to the archive name given as parameter.

You can see the content of the script, executed and tested under RHEL 5.10:





#!/bin/bash
# Script to update D2-Config from 3.0 objects level to 3.1/4.1
# Usage: ./<script name> <config archive name>

if [[ $# -ne 1 ]] ; then
  echo "ERROR usage: \"$0 <config archive name>\""
  exit 1
fi

# Temporary working directory for the extracted archive
root_dir=d2_config_update_tmp

mkdir -p ${root_dir}
unzip -d ${root_dir} "$1"

# Replace the old "eu.c6" class name prefix with "com.emc" in every file that contains it
for file in $(grep -RH -i eu.c6 ${root_dir}/* | cut -d ":" -f 1)
do
  echo $file
  mv $file ${file}_old
  sed -e s/eu.c6/com.emc/g ${file}_old > $file
  rm ${file}_old
done

# Repackage the updated configuration archive
cd $root_dir
zip -r "../UPDATED_$1" *
cd ..
chmod 777 "UPDATED_$1"

rm -rf $root_dir
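If you want to sanity-check the core grep/sed substitution on a throwaway file before touching a real configuration archive, you can try it in isolation (file name and path are just an example):

```shell
# Miniature demo of the class name substitution performed by the script
mkdir -p /tmp/d2demo
echo 'eu.c6.web.conditions.permits.HasPermit' > /tmp/d2demo/menu.xml
sed -e 's/eu.c6/com.emc/g' /tmp/d2demo/menu.xml > /tmp/d2demo/menu_updated.xml
cat /tmp/d2demo/menu_updated.xml
# -> com.emc.web.conditions.permits.HasPermit
```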



You can now try to import the updated configuration archive and see the difference.

Be careful: a full config import with configuration reset implies the deletion and recreation of some objects like public searches (d2c_query objects with new r_object_id).

Thanks for reading!


Mon, 2014-10-20 12:37

I have recently published a script to check index fragmentation. But then, do you COALESCE or REBUILD? Well, there is also another option: ALTER INDEX SHRINK SPACE. Let's compare all those index defragmentation operations.

Fragmented index

I have an index created when the table had 1 million rows. Then, I deleted 90% of the rows. Here is the index state seen from:

  • dbms_space.space_usage
  • index_stats after an analyze index validate structure
  • my index fragmentation checking script with 4 buckets

and here are the results:


      :UNF       :FS1       :FS2       :FS3       :FS4      :FULL  BLOCKS
---------- ---------- ---------- ---------- ---------- ---------- -------
         0          0          1          0          0       2230    2304

------- ------- ---------- -------- -------- -------- ----------- ---------- --------
      3    2304     100404     2226     2225        5         404    1622013       10

         N to          N rows/block bytes/block %free space  blocks free
---------- -- ---------- ---------- ----------- ----------- ------- -----
        10 ->     250280         45         714          91     557 oooo
    250730 ->     500370         45         714          91     557 oooo
    500820 ->     750010         45         714          91     556 oooo
    750460 ->     999660         45         714          91     556 oooo


I have 2226 leaf blocks, and the index height is 3 with 5 branch blocks. The leaves have 91% of free space.
However, dbms_space shows only full blocks: the deleted entries are still there, just marked as deleted, and the blocks are still seen as full.
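As a side note, here is a sketch of how the first two measurements can be reproduced (assuming the index DEMO_N belongs to the connected user; the output formatting is mine):

```sql
-- Space usage per block-fullness bucket (ASSM segments)
set serveroutput on
declare
  unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number;
  fs3 number; fs3b number; fs4 number; fs4b number; fullb number; fullby number;
begin
  dbms_space.space_usage(user, 'DEMO_N', 'INDEX',
    unf, unfb, fs1, fs1b, fs2, fs2b, fs3, fs3b, fs4, fs4b, fullb, fullby);
  dbms_output.put_line(':UNF='||unf||' :FS1='||fs1||' :FS2='||fs2||
    ' :FS3='||fs3||' :FS4='||fs4||' :FULL='||fullb);
end;
/

-- Logical structure statistics, available in INDEX_STATS until the next validation
analyze index DEMO_N validate structure;
select height, blocks, lf_rows, lf_blks, br_blks, del_lf_rows, pct_used
from index_stats;
```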



Now let's COALESCE:

SQL> alter index DEMO_N coalesce;

Before checking the same index space values, I measure the amount of work done by the operation (from v$mystat):


NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo entries                                                          22067
db block gets                                                         32818
session logical reads                                                 32886
db block changes                                                      40601
undo change vector size                                            35199264
redo size                                                          47878800

      :UNF       :FS1       :FS2       :FS3       :FS4      :FULL  BLOCKS
---------- ---------- ---------- ---------- ---------- ---------- -------
         0          0       2004          0          0        227    2304

------- ------- ---------- -------- -------- -------- ----------- ---------- --------
      3    2304     100000      223      222        5           0    1591530       88

         N to          N rows/block bytes/block %free space  blocks free
---------- -- ---------- ---------- ----------- ----------- ------- -----
        10 ->     248690        452        7170          11      56
    253200 ->     500800        450        7158          11      56
    505310 ->     752020        449        7132          11      56
    756530 ->     998730        443        7038          12      55


COALESCE is an online operation that defragments the leaf blocks. We now have only 223 leaf blocks that are 90% full (because my pctfree is the default 10%). But the index still has the same height and still has 2300 blocks. Where are the reclaimed 2000 blocks? They are available if the index needs a new block (for a block split). They are seen as FS2 (25 to 50% free space) by dbms_space because they still contain the deleted rows, but they are fully reclaimable anyway.



Back with the same table, and doing a SHRINK SPACE COMPACT instead of COALESCE:
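Following the same DEMO_N example as above, the command is:

```sql
SQL> alter index DEMO_N shrink space compact;
```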


NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo entries                                                          28794
db block gets                                                         40940
session logical reads                                                 41527
db block changes                                                      49387
undo change vector size                                            36990460
redo size                                                          51848880

      :UNF       :FS1       :FS2       :FS3       :FS4      :FULL  BLOCKS
---------- ---------- ---------- ---------- ---------- ---------- -------
         0          0          1          0       2003        227    2304

------- ------- ---------- -------- -------- -------- ----------- ---------- --------
      3    2304     100000      223      222        5           0    1591530       88

         N to          N rows/block bytes/block %free space  blocks free
---------- -- ---------- ---------- ----------- ----------- ------- -----
        10 ->     248690        452        7170          11      56
    253200 ->     500800        450        7158          11      56
    505310 ->     752020        449        7132          11      56
    756530 ->     998730        443        7038          12      55


So what is the difference? Slightly more work (about 20% more logical reads and block changes) for the same result. Except that now the reclaimed blocks are in FS4 (75 to 100% free space).



What if we use SHRINK SPACE instead of SHRINK SPACE COMPACT?
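Again on the same example index, the command would simply drop the COMPACT keyword:

```sql
SQL> alter index DEMO_N shrink space;
```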


NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo entries                                                          29352
db block gets                                                         45496
session logical reads                                                 46190
db block changes                                                      50032
undo change vector size                                            36981524
redo size                                                          51901500

      :UNF       :FS1       :FS2       :FS3       :FS4      :FULL  BLOCKS
---------- ---------- ---------- ---------- ---------- ---------- -------
         0          0          1          0          0        227     240

------- ------- ---------- -------- -------- -------- ----------- ---------- --------
      3     240     100000      223      222        5           0    1591530       88

         N to          N rows/block bytes/block %free space  blocks free
---------- -- ---------- ---------- ----------- ----------- ------- -----
        10 ->     248690        452        7170          11      56
    253200 ->     500800        450        7158          11      56
    505310 ->     752020        449        7132          11      56
    756530 ->     998730        443        7038          12      55


With tables, a shrink space lowers the high water mark. Here it is the same idea: in addition to the shrink space compact, the reclaimed blocks are no longer allocated to the index, so they can be used for other segments in the tablespace. We see that from dbms_space: the index is now only 240 blocks.



The previous operation (SHRINK SPACE) requires a lock, but only for a short duration (provided that we did the SHRINK SPACE COMPACT before). Now let's compare with a REBUILD. The rebuild needs a Share lock on the table during the whole operation, blocking concurrent DML.
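The offline rebuild on the same example index:

```sql
SQL> alter index DEMO_N rebuild;
```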


NAME                                                                  VALUE
---------------------------------------------------------------- ----------
db block gets                                                           953
redo entries                                                           1832
db block changes                                                       1906
session logical reads                                                  4019
undo change vector size                                                9152
redo size                                                            173732

      :UNF       :FS1       :FS2       :FS3       :FS4      :FULL  BLOCKS
---------- ---------- ---------- ---------- ---------- ---------- -------
         0          0          1          0          0        222     256

------- ------- ---------- -------- -------- -------- ----------- ---------- --------
      2     256     100000      222      221        1           0    1591520       90

         N to          N rows/block bytes/block %free space  blocks free
---------- -- ---------- ---------- ----------- ----------- ------- -----
        10 ->     248690        452        7170          11      56
    253200 ->     501250        451        7170          11      56
    505760 ->     749300        451        7170          11      55
    753810 ->     997350        448        7117          11      55


The result is the same as with the previous operation (SHRINK SPACE), except that the index height has decreased. A rebuild is the right operation if the index blevel has become too high. And we did that offline, but with much less work: minimal undo and redo, and fewer blocks to read (when the index is still usable, the rebuild can use the current index to build the new segment).



The last operation, possible only in Enterprise Edition, is the online rebuild, which doesn't need to lock the table.
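On the same example index:

```sql
SQL> alter index DEMO_N rebuild online;
```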


NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo entries                                                            660
db block changes                                                        876
db block gets                                                          1419
session logical reads                                                  4989
undo change vector size                                               24932
redo size                                                            114924

      :UNF       :FS1       :FS2       :FS3       :FS4      :FULL  BLOCKS
---------- ---------- ---------- ---------- ---------- ---------- -------
         0          0          1          0          0        222     256

------- ------- ---------- -------- -------- -------- ----------- ---------- --------
      2     256     100000      222      221        1           0    1591520       90

         N to          N rows/block bytes/block %free space  blocks free
---------- -- ---------- ---------- ----------- ----------- ------- -----
        10 ->     248690        452        7170          11      56
    253200 ->     501250        451        7170          11      56
    505760 ->     749300        451        7170          11      55
    753810 ->     997350        448        7117          11      55


Here we see no overhead from doing it online. This is because my table is small (my test case has only one column, which is the indexed one). On a real table, you will probably see that the online rebuild takes longer than the offline one, because it cannot use the current index. But anyway, the fact that it is online means that the duration is not a big issue.



This is an example on a table that had a massive purge: all blocks were touched. In that case, the REBUILD is the right solution. However, if you are in Standard Edition and cannot do it online, then you will probably do a SHRINK SPACE COMPACT, because it allows you to do the actual SHRINK SPACE later (it needs a lock, but for a very short duration - to be done in a period of low activity).
COALESCE makes sense here only if you know you will insert a lot of rows back, so that you don't need to deallocate the blocks from the index.

Now, what to do if the free space to reclaim is only in a small part of the index blocks? As in the following case:


         N to          N rows/block bytes/block %free space  blocks free
---------- -- ---------- ---------- ----------- ----------- ------- -----
        10 ->      50468        374        5974          26     112 o
     50917 ->     100756        449        7179          11     112
    101205 ->     151044        449        7179          11     112
    151493 ->     201332        449        7179          11     112
    201781 ->     251620        449        7179          11     112
    252069 ->     301908        449        7179          11     112
    302357 ->     351747        449        7179          11     111
    352196 ->     401586        449        7179          11     111
    402035 ->     451425        449        7179          11     111
    451874 ->     501264        449        7179          11     111
    501713 ->     551103        449        7179          11     111


In that case, a COALESCE is probably the best choice because you keep the blocks allocated to the index for future growth, and the few blocks reclaimed would probably not change the index height anyway. However, if you did a small purge that concerns only a small part of the index and want to reclaim the space for other segments, then SHRINK SPACE is the right solution. But do you really need to reclaim space in that case?

Now you see the purpose of my index fragmentation script: I need more than just the average free space. I need a clear picture of the fragmentation in order to decide what to do and how.

Windows 10: The new features

Fri, 2014-10-17 02:03

First, it should have been Windows 9, but finally the new Microsoft operating system is named Windows 10! In the same way as for my blog post on Windows 8, I have decided to write a few words about the new features of Windows 10.

Documentum upgrade project: Configuring NTLM SSO for D2 3.1 SP1

Thu, 2014-10-16 20:11

The Documentum D2 3.1 SP1 release is kind of a mix between the D2 4.1 APIs (using D2FS in the backend) and the D2 3.1 front-end. For SSO to be fully implemented, the configuration has to be applied for D2 3.1 as well as for D2FS. For D2FS, the same configuration applies whether you are using NT LAN Manager (NTLM) or Kerberos authentication.

If you want to implement Kerberos Single Sign On instead of NTLM, have a look at this blog post:


1. NTLM configuration for D2 3.1 SP1

The D2 3.1 documentation explains how to configure NTLM for D2 3.1 SP1.

Referring to the D2 3.1 installation guide, you can see the following:


Locate the file « shiro.ini » used by D2 applications and add the following lines:





D2-NTLM.domainController=<domain controller>

D2-NTLM.domainName=<domain name>

D2-NTLM.domainUser=<domain user to authenticate>

D2-NTLM.domainPassword=<user's passwords>




/** = D2-NTLM



“docbaseX”: corresponds to a docbase using NTLM

“loginX”: corresponds to a superuser login of “docbaseX”

“passwordX”: corresponds to an encrypted password of the superuser of “docbaseX”.


In our case, the file is located in the following path: <Tomcat root>/webapps/D2-Client/WEB-INF/classes/  

At first look, everything is there. However, some clarifications are very welcome.


About Active directory connection:

  • <domain controller>: enter the domain controller IP address

  • <domain name>: This is the active directory domain. You must write “InternalDom” for “InternalDom\userloginname” user principal name.

  • <domain user to authentify>: User name for the authentication concerning the domain controller. You must write “userloginname” for “InternalDom\userloginname” user principal name.


About Documentum repository connection:

  • <docbaseX>: enter the name of the docbase

  • <superUserX>: enter a user name which is a super user for docbaseX

  • <passwordX>: enter encoded password for related super user name


2. NTLM configuration for D2FS

2.1 D2 3.1 SP1

You must be aware - at least since patch 02 for D2 3.1 SP1 - that the way to store the password for the related super user name has changed.

Referring to D2 4.1 installation guide, you can see the following:


If it does not exist, create the file in the webapps/D2/WEB-INF/classes/ folder using a text editor. Open the file and add the following lines:



*.user=<administrator user>

*.password=<encoded password>

*.domain=<your domain> [not mandatory]

#or for each repository

<repository>.user=<administrator user>

<repository>.password=<encoded password>

<repository>.domain=<your domain>



  • Repository corresponds to the repository name.
  • User and password are the username and password of an inline Super User account in the repository.
  • Domain means the domain of the repository and can be left out for inline accounts.


In our case, the file is located in the following path: <Tomcat root>/webapps/D2-Client/WEB-INF/classes/

Everything is there. However, again, some clarifications are welcome:

  • <repository>: replace it with the name of the docbase

  • <administrator user>: enter a user name which is a super user for docbaseX.

  • <encoded password>: enter the password for the related super user name.

  • <domain>: Active Directory domain. You must write “InternalDom” for the “InternalDom\userloginname” user principal name.


2.2 D2 3.1 SP1 P02

Since this release of D2, you must store the <encoded password> in the D2 lockbox.

Make sure you have installed the lockbox functionality properly and that it is already working between D2 and its Java Method Server.

Then you can remove all lines related to passwords in the files:



*.user=<administrator user>

*.domain=<your domain> [not mandatory]

#or for each repository

<repository>.user=<administrator user>

<repository>.domain=<your domain>



Then, you can execute the following command:


java -classpath "<Tomcat root>\webapps\D2-Client\WEB-INF\lib\*" <D2 lockbox path>D2FS-trust.<Repository name>.password <user password>




  • <Tomcat root>: Root path of the tomcat instance

  • <D2 lockbox path>: Folder path where the D2.lockbox is stored

  • <Repository name>: Name of the repository

  • <user password>: Clear password of super user name setup in file


Make sure “D2FS” is in uppercase.


3. Working Example with lockbox

We will now see a few examples of working configurations. Obviously, this setup may not be the only one that achieves Single Sign-On authentication, and you will certainly be able to identify where some adaptations can be made.


Suppose we have following environment (with the following information):

MS domain controller address : “”

MS domain name: “InternalDomain”

MS domain user principal name: “InternalDomain\DomUser”

MS domain user password: “DomPasswd”


Tomcat root: “C:\Tomcat”

Lockbox file location: “C:\Lockbox\d2.lockbox”


First repository name: “DCTMRepo1”

Second repository name: “DCTMRepo2”


Ensure that you have stopped all D2-Client application instances on the application server, as well as for the D2-Config.


3.1 Inline super user creation

The user you are going to create must have the following attributes:



- State: Active

- Name: SSOAdmin

- Login Name: SSOAdmin

- Login Domain: InternalDomain

- Password: RepoPasswd

- User Source: Inline Password

- Privileges: Superuser

- Extended Privileges: None

- Client Capability: Consumer



Create this user in all repositories. In this example, we will take it as given that the same has been done for both repositories.


3.2 Shiro.ini file content

First, we must encode the password of the MS domain user name and the SSOAdmin:



java -classpath "C:\Tomcat\webapps\D2-Client\WEB-INF\lib\*" com.emc.d2.api.utils.GetCryptedPassword DomPasswd



java -classpath "C:\Tomcat\webapps\D2-Client\WEB-INF\lib\*" com.emc.d2.api.utils.GetCryptedPassword RepoPasswd




Then, we can fill in the file:








D2-NTLM.docbases=DCTMRepo1,SSOAdmin,8RLQerkftOBCedjQNEz57Q==,InternalDomain|DCTMRepo2,SSOAdmin,8RLQerkftOBCedjQNEz57Q==,InternalDomain



/** = D2-NTLM



3.3 file content
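Based on the template in section 2.2 (with the password lines removed, since the passwords are now stored in the lockbox) and the example repository and user names above, the content would presumably be:

```
DCTMRepo1.user=SSOAdmin
DCTMRepo1.domain=InternalDomain
DCTMRepo2.user=SSOAdmin
DCTMRepo2.domain=InternalDomain
```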







3.4 D2.lockbox password store


java -classpath "C:\Tomcat\webapps\D2-Client\WEB-INF\lib\*" C:\LockboxD2FS-trust.DCTMRepo1.password RepoPasswd


java -classpath "C:\Tomcat\webapps\D2-Client\WEB-INF\lib\*" C:\LockboxD2FS-trust.DCTMRepo2.password RepoPasswd



That's it. Restart the D2-Client application and test it.

Thanks for reading!

Oracle Cloud Control 12c: removing an agent is much easier in OEM

Thu, 2014-10-16 19:54

To remove an agent in previous Oracle OEM Cloud Control 12c versions, you first had to delete its targets, stop the agent, and remove the host target. Only then were you able to remove the agent. In a more recent version, the agent decommission feature has been greatly improved.

From the agent menu, you select the new feature Agent Decommission:




The agent must be in stopped status to be decommissioned:





It shows the targets which will be removed.




In a very short time, the targets are deleted and the agent is removed.

The agent decommissioning feature is just a small new feature, but it will greatly simplify the agent removal procedure :-)

Oracle IOT: when to use Index Organized Tables

Thu, 2014-10-16 12:17

When can we use IOTs? That's a question that came up when I recently gave the Oracle DBA Essentials workshop. The DBA Essentials course is very dense and there is only half a day about performance - we have the Oracle Performance Tuning workshop to go into those details. But IOTs are underused in my opinion, so I'll post a use case where they are a good idea.


  • Index access is very fast to get all indexed columns, but can be longer when having to get the non-indexed columns
  • IOTs are fast to retrieve all columns (except overflow) by primary key
  • IOTs are slow when retrieving by secondary index columns that are neither in the index nor in the primary key
Here is a good use case: you have a many-to-many relationship that you implement with an association table.

This is your UML diagram (grantOption is a property of the association: a boolean that tells whether the user can grant the granted privilege to another user).

And this is how you implement it in a relational database: an association table with a primary key composed of the two foreign keys, plus an additional column for the association property.

You have an index on USER_PRIVS(USER_ID,PRIV_ID) to enable the primary key, and you need all foreign keys indexed, so you have to add an index on USER_PRIVS(PRIV_ID). Now, for performance reasons, you will probably want to add all the columns to both indexes, so you finally have those two indexes:
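The screenshot with the two index definitions is missing here; based on the description above, they would look like this (a sketch - the index names are illustrative):

```sql
-- covers the primary key columns plus the association property;
-- Oracle can use this non-unique index to enforce PRIMARY KEY (USER_ID, PRIV_ID)
CREATE INDEX USER_PRIVS_PK ON USER_PRIVS (USER_ID, PRIV_ID, GRANT_OPTION);

-- foreign-key index on PRIV_ID, extended to cover the same columns
CREATE INDEX PRIV_USERS ON USER_PRIVS (PRIV_ID, USER_ID, GRANT_OPTION);
```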


If you have a doubt about that modeling (such as why I don't introduce a surrogate key here), please tell me, we can discuss it. But I can guarantee that this is the right approach, at least in Oracle.

So when you navigate from either of the tables, you only have to access the index. No need for the table. So why store a table? This is where the IOT comes in. Let's create it as an IOT:

CREATE TABLE USER_PRIVS ( USER_ID, PRIV_ID, GRANT_OPTION, PRIMARY KEY(USER_ID,PRIV_ID) ) ORGANIZATION INDEX;

and a secondary index:

CREATE INDEX PRIV_USERS ON USER_PRIVS (PRIV_ID, USER_ID, GRANT_OPTION);

So if you navigate from USERS, you go straight to the index leaf that has all the information.

And if you navigate from PRIVILEGES, you also have all the information. Someone said that access via a secondary index is slow? Yes, but only when we have to get the other columns, because the secondary index doesn't have a rowid: it stores the primary key instead and must go through the primary index to get the other columns. But there are two points in my example:

  • I already have all the primary key columns in my secondary index, and Oracle is clever enough not to store them twice
  • I don't have to go to the primary index because I have every column in my secondary index
Those are the indexes I would have anyway, so here I avoid storing a table: less storage, cheaper inserts and deletes.

In general, the use cases for IOT are:
  • we access mainly through the primary key and accept more expensive queries when accessing otherwise. Note that the secondary index can be optimized if we run ALTER INDEX ... UPDATE BLOCK REFERENCES regularly
  • the secondary indexes have all the columns needed for our query

The association table is in the second category, so it is a good occasion to implement an IOT. Note that the IOT and the index defined above are also used by Oracle to avoid foreign key table locks, as both start with one of the foreign key columns.

I will illustrate the cost of access to an IOT in a future blog post. If you want to learn more about indexes, I'll never stop advising you to read

SQL Server: Forced failover with AlwaysOn & Availability Groups

Wed, 2014-10-15 20:38

During a workshop about the implementation of SQL Server AlwaysOn and availability groups at one of my customers, I had an interesting discussion about asynchronous replication between 2 replicas and the effect of the forced failover operation with allowed data loss. Does SQL Server resynchronize the availability databases if I lose some transactions during the failover process?

The quick answer is yes but probably the most interesting question is how SQL Server achieves resynchronization and this is exactly what we will see in this blog post.

Simulating data loss is pretty easy if you have a SQL Server AlwaysOn architecture with availability groups and 2 replicas enrolled in asynchronous replication: you can pause the replication on the secondary and then perform a manual forced failover from that same server, for instance. In this blog post I will use the undocumented function sys.fn_dblog() and some dynamic management views related to availability groups as well.

Let's begin with my SQL Server architecture: one availability group with the AdventureWorks2012 database, which hosts a table dbo.aag_test, and 2 replicas, SQL141 and SQL143, configured in asynchronous replication. SQL141 is the current primary.


Step 1: Create a replication issue by pausing the availability database on the secondary

After suspending the replication from the secondary, we insert an additional record into the table aag_test on the primary (SQL141); it will not be replicated to the secondary (SQL143). Viewing the content of sys.fn_dblog() on each side shows that the two transaction logs are no longer aligned.


:connect sql141
use AdventureWorks2012;

select * from dbo.aag_test;

select * from sys.fn_dblog(null, null);
go

:connect sql143
use AdventureWorks2012;

select * from dbo.aag_test;

select * from sys.fn_dblog(null, null);
go


Here is the content of the AdventureWorks2012 transaction log file on the primary:


00000104:00003a88:0001     LOP_MODIFY_ROW      LCX_BOOT_PAGE       0000:00000000
00000104:00003a90:0001     LOP_MODIFY_ROW      LCX_SCHEMA_VERSION  0000:00000000
00000104:00003a90:0002     LOP_BEGIN_XACT      LCX_NULL            0000:000343da
00000104:00003a90:0003     LOP_BEGIN_XACT      LCX_NULL            0000:000343db
00000104:00003a90:0004     LOP_FORMAT_PAGE     LCX_HEAP            0000:000343db
00000104:00003a90:0005     LOP_MODIFY_ROW      LCX_PFS             0000:000343db
00000104:00003a90:0006     LOP_MODIFY_ROW      LCX_IAM             0000:000343db
00000104:00003a90:0007     LOP_HOBT_DELTA      LCX_NULL            0000:000343db
00000104:00003a90:0008     LOP_FORMAT_PAGE     LCX_HEAP            0000:000343db
00000104:00003a90:0009     LOP_COMMIT_XACT     LCX_NULL            0000:000343db
00000104:00003a90:000a     LOP_INSERT_ROWS     LCX_HEAP            0000:000343da
00000104:00003a90:000b     LOP_SET_FREE_SPACE  LCX_PFS             0000:00000000
00000104:00003a90:000c     LOP_COMMIT_XACT     LCX_NULL            0000:000343da


And this is the content of the same database transaction log file on the secondary:


00000104:00003a70:0018     LOP_INSERT_ROWS     LCX_HEAP            0000:000343d7
00000104:00003a70:0019     LOP_SET_FREE_SPACE  LCX_PFS             0000:00000000
00000104:00003a70:001a     LOP_COMMIT_XACT     LCX_NULL            0000:000343d7
00000104:00003a88:0001     LOP_MODIFY_ROW      LCX_BOOT_PAGE       0000:00000000


On the secondary, the last record in the transaction log file is identified by the LSN 260:14984:1 (in decimal format). We notice additional records in the transaction log file on the primary after pausing the replication; among them we can identify the LOP_INSERT_ROWS / LCX_HEAP record that corresponds to the insertion of the additional record into the table aag_test.

We can also take a look at the DMV related to the availability groups by using this query:


SELECT
       ar.replica_server_name as ServerName,
       drs.synchronization_state_desc as SyncState,
       drs.last_hardened_lsn,
       drs.last_redone_lsn
FROM sys.dm_hadr_database_replica_states drs
       LEFT JOIN sys.availability_replicas ar
             ON drs.replica_id = ar.replica_id
ORDER BY ServerName

The query's output below:


ServerName   SyncState           last_hardened_lsn    last_redone_lsn
SQL141       SYNCHRONIZED        260000001501600001   NULL
SQL143       NOT SYNCHRONIZING   260000001499200001   260000001498400001 -- (260:14984:1)


The column last_redone_lsn for the secondary (SQL143) represents the last log record that has been redone. The value (converted to decimal format) is the same as the one previously found in the transaction log by using the sys.fn_dblog() function. The last_hardened_lsn value represents the last transaction log block hardened to the transaction log file. This value is usually greater than the last_redone_lsn value.
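The conversions between the hexadecimal LSN shown by sys.fn_dblog() and the decimal notation used in this post, and the split of the numeric DMV value, can be scripted. The layout assumed below (10-digit block part, 5-digit slot part) is an assumption, but it matches every value shown in this article:

```python
def lsn_hex_to_decimal(lsn: str) -> tuple:
    """Convert an fn_dblog LSN like '00000104:00003a88:0001' to (VLF, block, slot)."""
    return tuple(int(part, 16) for part in lsn.split(":"))

def lsn_numeric_to_parts(lsn: int) -> tuple:
    """Split a DMV LSN like 260000001498400001 into (VLF, block, slot)."""
    slot = lsn % 100_000                # last 5 digits
    block = (lsn // 100_000) % 10**10   # next 10 digits
    vlf = lsn // 10**15                 # leading digits
    return (vlf, block, slot)

print(lsn_hex_to_decimal("00000104:00003a88:0001"))  # (260, 14984, 1)
print(lsn_numeric_to_parts(260000001498400001))      # (260, 14984, 1)
```

Both forms of the last record on the secondary map to the same 260:14984:1 value quoted above.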


Step 2: SQL143 becomes the new primary - after restarting SQL141 and initiating a manual forced failover with potential data loss

At this point we have lost data that was not replicated (the LSNs between 00000104:00003a88:0001 and 00000104:00003a90:000c).


00000104:00003a90:0001     LOP_MODIFY_ROW      LCX_SCHEMA_VERSION  0000:00000000
00000104:00003a90:0002     LOP_BEGIN_XACT      LCX_NULL            0000:000343da
00000104:00003a90:0003     LOP_BEGIN_XACT      LCX_NULL            0000:000343db
00000104:00003a90:0004     LOP_FORMAT_PAGE     LCX_HEAP            0000:000343db
00000104:00003a90:0005     LOP_MODIFY_ROW      LCX_PFS             0000:000343db
00000104:00003a90:0006     LOP_MODIFY_ROW      LCX_IAM             0000:000343db
00000104:00003a90:0007     LOP_HOBT_DELTA      LCX_NULL            0000:000343db
00000104:00003a90:0008     LOP_FORMAT_PAGE     LCX_HEAP            0000:000343db
00000104:00003a90:0009     LOP_COMMIT_XACT     LCX_NULL            0000:000343db
00000104:00003a90:000a     LOP_INSERT_ROWS     LCX_HEAP            0000:000343da
00000104:00003a90:000b     LOP_SET_FREE_SPACE  LCX_PFS             0000:00000000
00000104:00003a90:000c     LOP_COMMIT_XACT     LCX_NULL            0000:000343da


The new status of each replica is as following:


ServerName   SyncState           last_hardened_lsn    last_redone_lsn
SQL141       NOT SYNCHRONIZING   0                    0
SQL143       NOT SYNCHRONIZING   260000001502400001   NULL


Some additional records have been inserted into the transaction log, but they are not directly related to the replicated data previously lost from the old primary SQL141.


Step 3: Replication from the new primary SQL143 is resumed

Here is the new replication status of each replica:

ServerName   SyncState           last_hardened_lsn    last_redone_lsn
SQL141       NOT SYNCHRONIZING   260000001499200001   249000007658400001
SQL143       SYNCHRONIZED        260000001502400001   NULL


We can now confirm that we have lost the data that remained in the send queue on the old primary SQL141. The last hardened LSN is not the same between the initial situation and the new one (260000001501600001 vs 260000001499200001). Remember that the hardened LSN (transaction log block) 260000001499200001 corresponds to the last hardened LSN on the old secondary SQL143 before the synchronization issue.


Step 4: Last step - resuming the database movement on the new secondary SQL141 in order to resynchronize both replicas

At this point SQL Server has perfectly resynchronized both replicas. Take a look at the output of sys.fn_dblog() on each side: we effectively find the same pattern on both sides.

Content of the transaction log on the new secondary SQL141:


00000104:00003a88:0001     LOP_MODIFY_ROW      LCX_BOOT_PAGE       0000:00000000
00000104:00003a90:0001     LOP_MODIFY_ROW      LCX_BOOT_PAGE       0000:00000000
00000104:00003a98:0001     LOP_COUNT_DELTA     LCX_CLUSTERED       0000:00000000
00000104:00003a98:0002     LOP_COUNT_DELTA     LCX_CLUSTERED       0000:00000000
00000104:00003a98:0003     LOP_COUNT_DELTA     LCX_CLUSTERED       0000:00000000
00000104:00003a98:0004     LOP_COUNT_DELTA     LCX_CLUSTERED       0000:00000000
00000104:00003a98:0005     LOP_BEGIN_CKPT      LCX_NULL            0000:00000000
00000104:00003aa0:0001     LOP_XACT_CKPT       LCX_BOOT_PAGE_CKPT  0000:00000000
00000104:00003aa8:0001     LOP_END_CKPT        LCX_NULL            0000:00000000


Content of the transaction log on the new primary SQL143:


00000104:00003a88:0001     LOP_MODIFY_ROW      LCX_BOOT_PAGE       0000:00000000
00000104:00003a90:0001     LOP_MODIFY_ROW      LCX_BOOT_PAGE       0000:00000000
00000104:00003a98:0001     LOP_COUNT_DELTA     LCX_CLUSTERED       0000:00000000
00000104:00003a98:0002     LOP_COUNT_DELTA     LCX_CLUSTERED       0000:00000000
00000104:00003a98:0003     LOP_COUNT_DELTA     LCX_CLUSTERED       0000:00000000
00000104:00003a98:0004     LOP_COUNT_DELTA     LCX_CLUSTERED       0000:00000000
00000104:00003a98:0005     LOP_BEGIN_CKPT      LCX_NULL            0000:00000000
00000104:00003aa0:0001     LOP_XACT_CKPT       LCX_BOOT_PAGE_CKPT  0000:00000000
00000104:00003aa8:0001     LOP_END_CKPT        LCX_NULL            0000:00000000


Compared to the first output of the transaction log of the old primary SQL141, we have definitely lost the entries identified at the top of this article.

Wonderful, but how did SQL Server achieve this? The answer is in the SQL Server error log of the new secondary SQL141:

  • First, we notice the recovery LSN (transaction log block) identified by SQL Server for the AdventureWorks2012 database: (260:14992:1). The same value was found previously in the availability group DMVs, in the last_hardened_lsn column. This is the starting point of the resynchronization process.
SQL141 error log:

2014-10-07 22:40:57.120    spid17s      The recovery LSN (260:14992:1) was identified for the database with ID 5. This is an informational message only. No user action is required.


  • Then we manually resumed the replication; the corresponding events appear in the error log.


2014-10-07 22:40:57.130    spid17s      AlwaysOn Availability Groups data movement for database 'AdventureWorks2012' has been suspended for the following reason: "failover from partner" (Source ID 1; Source string: 'SUSPEND_FROM_PARTNER'). To resume data movement on the database, you will need to resume the database manually. For information about how to resume an availability database, see SQL Server Books Online.
…
2014-10-07 22:43:33.950    spid57       ALTER DB param option: RESUME
2014-10-07 22:43:33.950    spid57       AlwaysOn Availability Groups data movement for database 'AdventureWorks2012' has been resumed. This is an informational message only. No user action is required.
2014-10-07 22:43:33.960    spid17s      AlwaysOn Availability Groups connection with primary database established for secondary database 'AdventureWorks2012' on the availability replica 'SQL143' with Replica ID: {546df6b8-d07d-4aa4-afb6-5f543d5fae98}. This is an informational message only. No user action is required.


  • The new secondary automatically restarted to resynchronize with the current primary. The commit LSN here is in transaction log block 14992, which is lower than the transaction log block of the hardened LSN (15016).


2014-10-07 22:43:34.060    spid17s      Availability database 'AdventureWorks2012', which is in the secondary role, is being restarted to resynchronize with the current primary database. This is an informational message only. No user action is required.
2014-10-07 22:43:34.060    spid43s      Nonqualified transactions are being rolled back in database AdventureWorks2012 for an AlwaysOn Availability Groups state change. Estimated rollback completion: 100%. This is an informational message only. No user action is required.
2014-10-07 22:43:34.070    spid43s      State information for database 'AdventureWorks2012' - Hardended Lsn: '(260:15016:1)'   Commit LSN: '(260:14992:12)'   Commit Time: 'Oct 7 2014 10:21PM'


  • And finally, the most important information in this error log: SQL Server reverted to the transaction log block related to the recovery LSN (260:14992:1) and restarted the synchronization from the last log record redone by the old secondary SQL143.


2014-10-07 22:43:34.810    spid43s      Using the recovery LSN (260:14992:1) stored in the metadata for the database with ID 5. This is an informational message only. No user action is required.

This is exactly the picture we get from the last output of the sys.fn_dblog() function above: the logs match from the LSN 00000104:00003a90:0001 (260:14992:1 in decimal format), which corresponds to the beginning of that transaction log block, up to the LSN 00000104:00003aa8:0001 (260:15016:1 in decimal format).

As you can see, SQL Server uses a sophisticated mechanism that allows it to recover to the situation preceding the data loss. You probably also noticed that the key unit of this revert process is the transaction log block, not the transaction itself.


Documentum upgrade project: Wrong icons in D2 search results with xPlore

Wed, 2014-10-15 20:11

After migrating the Documentum full-text search engine from FAST to xPlore, our team discovered wrong icons in the D2 search results: documents in checked-out state were still displayed with the normal icon, even though the icon depends on the xPlore-indexed value of the "r_lock_owner" attribute, which contains the name of the user who checked out the document.

In the following picture, you can see the search result with the wrong icon, whereas in the “Version” window, the checked-out icon appears for the current version:




This result implies that D2 directly displays the results provided by xPlore, without any further check based on "r_object_id" for updated metadata values.

At this stage, xPlore only returned document information without a value for "r_lock_owner".

The reason is that after the user checked out the document, no queue item was created to tell the index agent to re-index the document with a value for the "r_lock_owner" attribute.
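You can observe this directly: after a checkout, a query on the indexing queue (a sketch based on the standard Documentum dmi_queue_item schema) returns nothing for the full-text user:

```sql
-- pending full-text indexing requests for the index agent user
SELECT item_id, event, date_sent
  FROM dmi_queue_item
 WHERE name = 'dm_fulltext_index_user';
```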

In order to ask the index agent to re-index the document, we have to register the "dm_checkout" event on the "dm_sysobject" type for "dm_fulltext_index_user".

However, the reverse operation must also be registered, for the "dm_unlock" event, to inform xPlore that the document is no longer checked out.


First, we need to verify that the event is not already registered for the index user. Using DQL, the following query must return no result:



select * from dmi_registry where user_name='dm_fulltext_index_user' and event='dm_checkout';



Then we need to retrieve the r_object_id of the dm_sysobject type:



select r_object_id from dm_type where name ='dm_sysobject';



Using API, request a login session ticket and open a session as 'dm_fulltext_index_user':



API> getlogin,c,dm_fulltext_index_user



Copy login ticket result into the next command and update with:





“s1” is now the 'dm_fulltext_index_user' session descriptor.


Use this session descriptor to register “dm_checkout” and “dm_unlock” events for “dm_sysobject” type.

In both lines, fill in the r_object_id you retrieved previously:



API> register,s1,,dm_checkout,,F
API> register,s1,,dm_unlock,,F



In case you have configured another index agent for your repository, do not forget to reproduce the same procedure for the other user: "dm_fulltext_index_user_01".

I hope this information will be of help to you.

Oracle system statistics: Display AUX_STATS$ with calculated values and formulas

Wed, 2014-10-15 01:13

System statistics can be gathered in NOWORKLOAD or WORKLOAD mode. Depending on that, different values are set and the others are calculated - derived from them. We can see the defined values in SYS.AUX_STATS$, but here is a script that shows the calculated ones as well.

With no system statistics, or with NOWORKLOAD statistics, the values of IOSEEKTIM (seek latency in ms) and IOTFRSPEED (transfer rate in bytes/ms) are set, and SREADTIM (time to read one block, in ms) and MREADTIM (the same for a multiblock read) are calculated from them. MBRC depends on the default or on the db_file_multiblock_read_count setting.

With WORKLOAD statistics, SREADTIM, MREADTIM and MBRC are measured, and those are the values used by the optimizer.
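As a quick sanity check of the NOWORKLOAD formulas, here is the arithmetic behind the first example below (IOSEEKTIM=10 and IOTFRSPEED=4096 are the values shown; the 8 KB block size is an assumption consistent with that output):

```python
# NOWORKLOAD system statistics: derived values
ioseektim = 10        # IOSEEKTIM: seek latency in ms
iotfrspeed = 4096     # IOTFRSPEED: transfer rate in bytes/ms
db_block_size = 8192  # assumed default block size
mbrc = 8              # default _db_file_optimizer_read_count

sreadtim = ioseektim + db_block_size / iotfrspeed         # single block read time
mreadtim = ioseektim + db_block_size * mbrc / iotfrspeed  # multiblock read time
multiblock_cost_per_block = round(1 / mbrc * mreadtim / sreadtim, 4)

print(sreadtim, mreadtim, multiblock_cost_per_block)  # 12.0 26.0 0.2708
```

These are exactly the SREADTIM, MREADTIM and "multi block Cost per block" values the script reports for default statistics.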

Here is my script:

set echo off
set linesize 200 pagesize 1000
column pname format a30
column sname format a20
column pval2 format a20

select pname,pval1,calculated,formula from sys.aux_stats$ where sname='SYSSTATS_MAIN'
model
  reference sga on (
    select name,value from v$sga
        ) dimension by (name) measures(value)
  reference parameter on (
    select name,decode(type,3,to_number(value)) value from v$parameter where name='db_file_multiblock_read_count' and ismodified!='FALSE'
    union all
    select name,decode(type,3,to_number(value)) value from v$parameter where name='sessions'
    union all
    select name,decode(type,3,to_number(value)) value from v$parameter where name='db_block_size'
        ) dimension by (name) measures(value)
partition by (sname) dimension by (pname) measures (pval1,pval2,cast(null as number) as calculated,cast(null as varchar2(60)) as formula) rules(
  calculated['MBRC']=coalesce(pval1['MBRC'],parameter.value['db_file_multiblock_read_count'],parameter.value['_db_file_optimizer_read_count'],8),
  calculated['MREADTIM']=coalesce(pval1['MREADTIM'],pval1['IOSEEKTIM'] + (parameter.value['db_block_size'] * calculated['MBRC'] ) / pval1['IOTFRSPEED']),
  calculated['SREADTIM']=coalesce(pval1['SREADTIM'],pval1['IOSEEKTIM'] + parameter.value['db_block_size'] / pval1['IOTFRSPEED']),
  calculated['   multi block Cost per block']=round(1/calculated['MBRC']*calculated['MREADTIM']/calculated['SREADTIM'],4),
  calculated['   single block Cost per block']=1,
  formula['MBRC']=case when pval1['MBRC'] is not null then 'MBRC' when parameter.value['db_file_multiblock_read_count'] is not null then 'db_file_multiblock_read_count' when parameter.value['_db_file_optimizer_read_count'] is not null then '_db_file_optimizer_read_count' else '= _db_file_optimizer_read_count' end,
  formula['MREADTIM']=case when pval1['MREADTIM'] is null then '= IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED' end,
  formula['SREADTIM']=case when pval1['SREADTIM'] is null then '= IOSEEKTIM + db_block_size        / IOTFRSPEED' end,
  formula['   multi block Cost per block']='= 1/MBRC * MREADTIM/SREADTIM',
  formula['   single block Cost per block']='by definition',
  calculated['   maximum mbrc']=sga.value['Database Buffers']/(parameter.value['db_block_size']*parameter.value['sessions']),
  formula['   maximum mbrc']='= buffer cache size in blocks / sessions'
);
set echo on

Here is an example with default statistics:

PNAME                               PVAL1 CALCULATED FORMULA
------------------------------ ---------- ---------- --------------------------------------------------
CPUSPEEDNW                           1519
IOSEEKTIM                              10
IOTFRSPEED                           4096
SREADTIM                                          12 = IOSEEKTIM + db_block_size        / IOTFRSPEED
MREADTIM                                          26 = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED
MBRC                                               8 = _db_file_optimizer_read_count
   maximum mbrc                           117.152542 = buffer cache size in blocks / sessions
   single block Cost per block                     1 by definition
   multi block Cost per block                  .2708 = 1/MBRC * MREADTIM/SREADTIM

You see the calculated values for everything. Note the 'maximum mbrc' which limits the multiblock reads when the buffer cache is small. It divides the buffer cache size (at startup - can depend on ASMM and AMM settings) by the sessions parameter.

Here is an example with workload system statistics gathering:

PNAME                               PVAL1 CALCULATED FORMULA
------------------------------ ---------- ---------- --------------------------------------------------
CPUSPEEDNW                           1511
IOSEEKTIM                              15
IOTFRSPEED                           4096
SREADTIM                            1.178      1.178
MREADTIM                              .03        .03
CPUSPEED                             3004
MBRC                                    8          8 MBRC
MAXTHR                            6861824
   maximum mbrc                           114.983051 = buffer cache size in blocks / sessions
   single block Cost per block                     1 by definition
   multi block Cost per block                  .0032 = 1/MBRC * MREADTIM/SREADTIM

Here, all values are explicitly set.

And here is an example with Exadata system statistics, which define noworkload values but also set the MBRC (see Chris Antognini's post about it):

PNAME                               PVAL1 CALCULATED FORMULA
------------------------------ ---------- ---------- --------------------------------------------------
CPUSPEEDNW                           1539
IOSEEKTIM                              16
IOTFRSPEED                         204800
SREADTIM                                       16.04 = IOSEEKTIM + db_block_size        / IOTFRSPEED
MREADTIM                                       18.28 = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED
MBRC                                   57         57 MBRC
   maximum mbrc                           114.983051 = buffer cache size in blocks / sessions
   single block Cost per block                     1 by definition
   multi block Cost per block                    .02 = 1/MBRC * MREADTIM/SREADTIM

And finally, here is a noworkload system statistics result, but with db_file_multiblock_read_count explicitly set to 128:

PNAME                               PVAL1 CALCULATED FORMULA
------------------------------ ---------- ---------- --------------------------------------------------
CPUSPEEDNW                           1539
IOSEEKTIM                              15
IOTFRSPEED                           4096
SREADTIM                                          17 = IOSEEKTIM + db_block_size        / IOTFRSPEED
MREADTIM                                         271 = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED
MBRC                                             128 db_file_multiblock_read_count
   maximum mbrc                           114.983051 = buffer cache size in blocks / sessions
   single block Cost per block                     1 by definition
   multi block Cost per block                  .1245 = 1/MBRC * MREADTIM/SREADTIM

Here you see that the MBRC in noworkload mode comes from the value set by db_file_multiblock_read_count rather than from the value 8, which the optimizer uses by default when the parameter is not set. The MREADTIM is then calculated from that I/O size.

For more historical information about system statistics and about how multiblock reads are costed (the index vs. full table scan choice), see my article in the latest OracleScene.

As usual if you find anything to improve in that script, please share.

How to measure Oracle index fragmentation

Mon, 2014-10-13 14:39

At Oracle Open World 2014, or rather the Oaktable World, Chris Antognini has presented 'Indexes: Structure, Splits and Free Space Management Internals'. It's not something new, but it's still something that is not always well understood: how index space is managed, block splits, fragmentation, coalesce and rebuilds. Kyle Hailey has made a video of it available here.
For me, it is the occasion to share the script I use to see if an index is fragmented or not.

First, forget about 'analyze index validate structure', which locks the table, and about DEL_LF_ROWS, which counts only the deletion flags that are transient. The problem is not the amount of free space; the problem is where that free space is. If you insert again into the same range of values, that space will be reused. Wasted space occurs only when a lot of rows were deleted in a range where you will not insert again. For example, when you purge old ORDERS, the index on ORDER_DATE - or on an ORDER_ID coming from a sequence - will be affected. Note that the problem occurs only for sparse purges, because fully emptied blocks are reclaimed when needed and can receive rows from another range of values.

I have a script that shows the number of rows per block, as well as used and free space per block, and aggregates that by range of values.
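The aggregation the script performs can be sketched as follows (pure illustration, not the actual script: the input is a hypothetical list of (index_key, leaf_block_id, row_size_bytes) tuples, and the output mirrors the report columns shown below):

```python
from collections import defaultdict

def fragmentation_report(rows, buckets=10, block_size=8192):
    """rows: iterable of (key, leaf_block_id, row_bytes).
    Returns per-bucket (first_key, last_key, avg rows/block, %free space)."""
    per_block = defaultdict(lambda: [None, None, 0, 0])  # min_key, max_key, nrows, bytes
    for key, block, nbytes in rows:
        b = per_block[block]
        b[0] = key if b[0] is None else min(b[0], key)
        b[1] = key if b[1] is None else max(b[1], key)
        b[2] += 1
        b[3] += nbytes
    blocks = sorted(per_block.values())          # ordered by lowest key in the block
    size = max(1, -(-len(blocks) // buckets))    # ceil(len(blocks) / buckets)
    report = []
    for i in range(0, len(blocks), size):
        chunk = blocks[i:i + size]
        avg_rows = sum(b[2] for b in chunk) / len(chunk)
        avg_bytes = sum(b[3] for b in chunk) / len(chunk)
        report.append((chunk[0][0], chunk[-1][1],
                       round(avg_rows), round(100 * (1 - avg_bytes / block_size))))
    return report

# 100 keys, 10 per leaf block, 19 bytes per index entry
demo = [(k, k // 10, 19) for k in range(100)]
print(fragmentation_report(demo, buckets=2))  # [(0, 49, 10, 98), (50, 99, 10, 98)]
```

The real script does this in SQL, but the logic is the same: group rows per leaf block, then bucket the blocks by key range and average the fill per bucket.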

First, let's create a table with a date and an index on it:

SQL> create table DEMOTABLE as select sysdate-900+rownum/1000 order_date,decode(mod(rownum,100),0,'N','Y') delivered , dbms_random.string('U',16) cust_id
  2  from (select * from dual connect by level

My script shows 10 buckets with their begin and end values and, for each of them, the average number of rows per block and the free space:

SQL> @index_fragmentation 

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
24-APR-12 -> 02-AUG-12        377        7163          11        266                        
03-AUG-12 -> 11-NOV-12        377        7163          11        266                        
11-NOV-12 -> 19-FEB-13        377        7163          11        266                        
19-FEB-13 -> 30-MAY-13        377        7163          11        265                        
30-MAY-13 -> 07-SEP-13        377        7163          11        265                        
07-SEP-13 -> 16-DEC-13        377        7163          11        265                        
16-DEC-13 -> 26-MAR-14        377        7163          11        265                        
26-MAR-14 -> 03-JUL-14        377        7163          11        265                        
04-JUL-14 -> 11-OCT-14        377        7163          11        265                        
12-OCT-14 -> 19-JAN-15        376        7150          11        265                        

Note that the script reads the whole index (it can do a sample, but here it is 100%) - not the table itself, only the index. It counts the rows per index leaf block with the undocumented function sys_op_lbid(), which is used by Oracle to estimate the clustering factor.
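The counting part of such a script typically looks like this (a sketch: sys_op_lbid() is undocumented; its first argument is the index object_id - 12345 is an illustrative value - and the predicate on the leading index column, plus the hint, forces the query through the index):

```sql
-- rows per leaf block of DEMOINDEX (object_id assumed to be 12345)
SELECT /*+ index(t DEMOINDEX) */
       sys_op_lbid(12345, 'L1', t.rowid) AS leaf_block_id,
       COUNT(*)                          AS rows_per_block
  FROM DEMOTABLE t
 WHERE order_date IS NOT NULL
 GROUP BY sys_op_lbid(12345, 'L1', t.rowid);
```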

So here I have no fragmentation. All blocks have about 377 rows and little free space. This is because I inserted the rows in increasing order, so the so-called '90-10' block split occurred.

Let's see what I get if I delete most of the rows before the 01-JAN-2014:

SQL> delete from DEMOTABLE where order_date
SQL> @index_fragmentation

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
25-APR-12 -> 02-AUG-12          4          72          99        266 oooo                   
03-AUG-12 -> 11-NOV-12          4          72          99        266 oooo                   
11-NOV-12 -> 19-FEB-13          4          72          99        266 oooo                   
19-FEB-13 -> 30-MAY-13          4          72          99        265 oooo                   
30-MAY-13 -> 07-SEP-13          4          72          99        265 oooo                   
07-SEP-13 -> 16-DEC-13          4          72          99        265 oooo                   
16-DEC-13 -> 26-MAR-14          4          72          99        265 oooo                   
26-MAR-14 -> 03-JUL-14          4          72          99        265 oooo                   
04-JUL-14 -> 11-OCT-14         46         870          89        265 oooo                   
12-OCT-14 -> 19-JAN-15        376        7150          11        265                        

I have the same buckets, and same number of blocks. But blocks which are in the range below 01-JAN-2014 have only 4 rows and a lot of free space. This is exactly what I want to detect: I can check if that free space will be reused.

Here I know I will not enter any orders with a date in the past, so those blocks will never have an insert into them. I can reclaim that free space with a COALESCE:

SQL> alter index DEMOINDEX coalesce;

Index altered.
SQL> @index_fragmentation 

ORDER_DAT to ORDER_DAT rows/block bytes/block %free space     blocks free                      
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
25-APR-12 -> 03-OCT-14        358        6809          15         32                        
03-OCT-14 -> 15-OCT-14        377        7163          11         32                        
15-OCT-14 -> 27-OCT-14        377        7163          11         32                        
27-OCT-14 -> 08-NOV-14        377        7163          11         32                        
08-NOV-14 -> 20-NOV-14        377        7163          11         32                        
20-NOV-14 -> 02-DEC-14        377        7163          11         32                        
02-DEC-14 -> 14-DEC-14        377        7163          11         32                        
14-DEC-14 -> 26-DEC-14        377        7163          11         32                        
27-DEC-14 -> 07-JAN-15        377        7163          11         32                        
08-JAN-15 -> 19-JAN-15        371        7056          12         32                        

I still have 10 buckets because that number is defined in my script, but each bucket now has fewer blocks. I've defragmented the blocks and reclaimed the free space.

Time to share the script now. Here it is:

The script is quite ugly. It's SQL generated by PL/SQL. It's generated because it selects the index columns. And as I don't want to have it too large it is not indented nor commented. However, if you run it with set servertoutput on you will see the generated query.

How to use it? Just change the owner, table name, and index name. It reads the whole index, so if you have a very large index you may want to change the sample size.

Alfresco Summit 2014 - Conferences, Day 2 - Best Practices & Monitoring

Fri, 2014-10-10 07:00

As I said in my last blog, the Alfresco Summit 2014 took place in London from October 7 to 9. So yeah, unfortunately yesterday was the last day of this amazing event. It's the first time I had the chance to participate in an Alfresco Summit, but I'm quite sure it will not be the last! I can't tell which moment I preferred during this event, because the keynote (presentation of Alfresco One 5.0) was very impressive and I really enjoyed it! All presentations were very interesting too, but if I had to pick just one thing, I think I would choose... the two evening parties!

So more seriously, on the second day of conferences, I had the opportunity to attend a lot of sessions about best practices. The principal reason why I chose this kind of session is that I wanted to compare dbi services' best practices for Alfresco with those of other Alfresco consultants (either Alfresco employees or Alfresco partners). The best practices sessions covered several different fields like:

  • Security
  • Performance
  • Replication, Backup & Disaster Recovery
  • Monitoring
  • Sizing
  • Aso...

Yesterday morning, I had the chance to attend a session presented by Miguel Rodriguez (Technical Account Manager at Alfresco) about how to monitor an Alfresco installation. For the rest of this blog, I will summarize Miguel's presentation to show you how powerful this solution is! The presentation wasn't just about how to monitor a JVM; it was really more advanced, because Miguel presented how to control all Alfresco environments (Dev, Test, Prod, aso...) in a single place with several tools.

The monitoring of Alfresco can be divided into three different categories:

  • Monitoring and treatment of Alfresco logs
  • Monitoring of the OS/JVM/Alfresco/Solr/Database
  • Sending alerts to a group of people regarding the two categories above


I. Monitor and treat Alfresco logs

This first part, about the monitoring and treatment of Alfresco logs, requires three tools:

  • Logstash: a utility that can be used to monitor events or log files and to execute small commands (like a "top")
  • ElasticSearch: this tool indexes log files to enable RESTful search and analytics afterwards. This will be very useful for this part
  • Kibana3: the last tool is the one used for the interface. It simply queries the index created by ElasticSearch and beautifully displays the results. The Kibana3 page is composed of boxes, fully customizable directly from the interface, that are refreshed automatically every X seconds (also customizable)

So here we have a complete log monitoring & treatment solution. This solution can be used to display all errors in all Alfresco log files during the last X seconds/minutes/hours, aso... But it can also be used to apply filters on all log files. For example, if you want to find all log entries related to "License", you just have to type "license" in the search box. As all log entries are indexed by ElasticSearch (using Lucene), all boxes on the interface will refresh to display only data related to "license".
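To give a rough idea of how these three tools chain together, a minimal Logstash configuration could look like the fragment below. This is a sketch only: the log path, grok pattern and ElasticSearch host are assumptions, not the configuration shown in the session.

```conf
# Hypothetical logstash.conf fragment (Logstash 1.x style):
# tail the Alfresco log, parse each line, ship it to ElasticSearch.
input {
  file {
    path => "/opt/alfresco/tomcat/logs/catalina.out"   # assumed log location
    type => "alfresco"
  }
}
filter {
  grok {
    # split each line into timestamp, level and message
    match => [ "message", "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:level} %{GREEDYDATA:logmsg}" ]
  }
}
output {
  elasticsearch { host => "localhost" }
}
```

Kibana3 then simply points at the index ElasticSearch builds from this stream.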

Image from Jeff Potts' blog. It displays data related to JMeter, but the same kind of dashboard applies to Alfresco.


II. Monitoring of the OS/JVM/Alfresco/Solr/Database/...

This second part is as simple as the first one:

  • Logstash: this time, Logstash is mostly used to execute commands that retrieve numerical data. These can be Unix commands (e.g. "top" to retrieve the current CPU consumption) but also JMX commands to retrieve certain values from the JVM, like the free heap space. More details are available below.
  • Graphite: this utility simply takes all numerical information generated by Logstash and stores it in a database
  • Grafana: as above, this part needs an interface... And in the same way, the page is fully customizable directly from the interface. You can add as many boxes (each box represents a diagram, often a line chart) as you want, with any information available in your database.
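To illustrate what Logstash hands over to Graphite: Graphite's plaintext protocol accepts one "metric value timestamp" line per datapoint on TCP port 2003. A minimal sketch follows; the metric path and host name are made up for illustration.

```shell
# Build one datapoint in Graphite's plaintext format: "metric value timestamp".
metric="alfresco.node1.jvm.heap.used"   # assumed metric path
value=512
line="$metric $value $(date +%s)"
echo "$line"
# On a host that can reach Graphite, the same line would be sent with:
#   echo "$line" | nc graphite-host 2003
```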

When I first saw the interface of Grafana, I thought "Oh, it looks like jconsole". But this was really only my first impression. On this interface, you can add different boxes to display numerical data not only about one Alfresco node but about all Alfresco nodes! For example, you will be able to display boxes with the CPU consumption, the memory consumption, the JVM heap size (permgen, used, free, aso...), and all these boxes will show data related to all nodes of your cluster. All of this is already available through jconsole. But in addition to what jconsole is capable of, you will be able to display boxes related to (non-exhaustive list):

  • All database sessions, available and used
  • Tomcat threads
  • Disk I/O
  • Alfresco documents in the repository and in Solr (Solr stores documents in a different way)
  • Health of the indexing: based on different values, we can know if the indexing is working well or if some tuning is required
  • Index workers and tracking status: this chart shows if there is a gap between the tracking of documents by Solr and the indexing of these documents, which would mean that Solr isn't indexing at that time (probably an issue)
  • Solr transactions
  • Concurrent users
  • Aso...

Image extracted from the demonstration of Miguel Rodriguez


III. Sending alerts

You are probably aware of tools like Nagios that can be used to send alerts of different sorts. So let me present a tool that you may not know: Icinga. This is the last tool we will use for this monitoring solution, and it's pretty much the same thing as Nagios (it is a 2009 fork of Nagios). With this tool, you can define rules with a low threshold and a high threshold. If a threshold is reached, the status of the rule is updated to Warning or Critical (e.g. server free memory below 20% = Warning; below 10% = Critical). At the same time, an alert is sent to the group of people defined as contacts. As it works pretty much like Nagios and similar tools, I will not describe Icinga in more depth.
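As a sketch only, a classic Icinga 1.x object definition implementing the free-memory example above might look like this; the host name, check command and contact group are assumptions:

```conf
# Hypothetical Icinga service definition: Warning below 20% free memory,
# Critical below 10%, alerts sent to the contact group.
define service {
    use                   generic-service
    host_name             alfresco-node1
    service_description   Free Memory
    check_command         check_nrpe!check_mem!20!10
    contact_groups        alfresco-admins
}
```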

Image from Toni de la Fuente's blog


So here we are: we now have a lot of different tools that together provide a really good passive and active monitoring solution. But I can hear your complaints from here: "Hey, there are way too many components to install, this is too difficult!". Well, let me finish! All the tools above are Open Source and are available as a package created by Packer. This utility lets you create a virtual machine "template" with all needed components pre-installed. So you can just download the package and, after the execution of a single command, you will have a new virtual machine up and running with all monitoring components. The only remaining step is to install a Logstash agent on each Alfresco component (Alfresco nodes, Solr nodes) and configure this agent to retrieve useful data.

I was really impressed by this session because Miguel Rodriguez showed us a lot of very interesting tools that I didn't know. His monitoring solution looks really good and I'm quite sure I will take some time to try it in the next few weeks.

I hope you have enjoyed this article as much as I enjoyed Miguel's session and if you want to download your own version of this Monitoring VM, take a look at Miguel's Github account.


Documentum upgrade project: ADTS local user or domain user?

Thu, 2014-10-09 18:43

ADTS local user or domain user: which user should I choose as the installation owner? This question is important if you are installing a new Advanced Document Transformation Services (ADTS). You can opt for a domain user in order to have one user for several ADTS installations, with less AD management. Or you may want to opt for local users to separate the installations, but then you will have to manage all users independently. In fact, we will see that you don't really have a choice.

As you may know, an ADTS server is a set of software components interacting with each other. You have:

  • Microsoft Office products (Word, Excel, PowerPoint ...)
  • EMC Software
  • Adlib for EMC Software

And most of these components work thanks to Windows services, and some of these services need a user account to run:


 Service                                             Log On As           Start Order  Stop Order
 Adlib Exponent Connector                            ADTS Install Owner  auto         6
 Adlib Exponent Manager                              ADTS Install Owner  3            5
 Adlib FMR                                           Local System        2            4
 Adlib Process Manager                               Local System        1            3
 Documentum Content Transformation Monitor Service   Local System        4            auto
 Documentum Content Transformation Service           ADTS Install Owner  auto         1
 Documentum CTS Admin Agent                          Local System        5            2

As you can see in the previous table, several services log on as the 'ADTS Install Owner'. That means a user must be created for these services - and it has to be a local user. Why can't we use an AD user?

That is simple: the services are session-related and are started under that specific user. But an AD user cannot stay connected to the system forever, for instance because of domain policies. When the session is terminated, the services are stopped or can no longer start properly, and as a result the renditions are not created anymore.
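For reference, the 'Log On As' account of a service can also be set from the command line with the built-in sc.exe tool. This is only a sketch: the service name, account and password below are placeholders, and sc expects the internal service name, which may differ from the display name shown in the table.

```bat
REM Sketch: run one of the Adlib services under a local ADTS install owner.
REM Replace service name, account and password with your actual values.
sc config "AdlibExponentManager" obj= ".\adtsowner" password= "********"
sc qc "AdlibExponentManager"
```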

Also note that if you use Remote Desktop Connection to log in as the installation owner, do not forget to unmap your printers, as they can conflict with the Adlib printer, which is normally set as the default printer. Edit your .RDP file, go to the Local Resources tab and untick Printers. Otherwise your own printer may replace the Adlib printer as the default one, preventing the ADTS from using the correct printer.

A good practice is to use a different user than the installation owner for basic administration tasks.

Documentum upgrade project - from FAST to xPlore

Thu, 2014-10-09 02:06

We planned to do our full production upgrade over a weekend - and one part of it was the replacement of FAST by xPlore. As we had to index more than 10 million objects, we had to find a solution where the full repository would be indexed by Monday morning.

Since we used a clone of production to validate and test our upgrade process, functions and performance, we decided to prepare the production fulltext servers by using them first against the clone repository. After the production repository upgrade, the prepared fulltext servers could then be reused for production, where only the gap of objects (objects created or modified since the last refresh of the clone) would remain to be indexed.


1) Preparation

- Creation of a clone

- ADTS installation

- D2-Client, D2-Config installation

- Installation of xPlore in High Availability on servers which will be used later on for production

- Full repository fulltext indexing


2) Testing

- Applications

- Rendition

- Fulltext search


3) Production rollout preparation

Once the testing was finished and the go-live date defined, we first had to prepare the xPlore fulltext servers for their reuse in production before the rollout weekend. We removed the index agent on both xPlore index servers using the GUI, in order to get a clean installation where only the index server files/folders were kept.


4) Production rollout

Before doing the upgrade, we cleaned the objects related to the FAST fulltext indexing (dm_ftindex_agent_config, dm_ftengine_config, dm_fulltext_index_s) from the repository, unregistered the events related to both fulltext users (dm_fulltext_index_user, dm_fulltext_index_user_01) and finally removed the old queue objects related to both fulltext index users.

With these steps the repository was ready, once upgraded, to install the xPlore Index agents.

So after the repository upgrade, we installed the Index Agents in an HA configuration and ran the ftintegrity tool on both index servers to get the list of document r_object_ids to be re-fed into the xPlore collection.

To resubmit the r_object_ids listed in ObjectId-dctmOnly.txt and ObjectId-common-version-mismatch.txt:


cp ObjectId-dctmOnly.txt /dctm/opt/documentum/xPlore/jboss7.1.1/server/DctmServer_IndexagentDms/deployments/IndexAgent.war/WEB-INF/classes/ids.txt


Once the file has been processed, ids.txt is renamed to ids.txt.done.


cut -f1 ObjectId-common-version-mismatch.txt -d " " > /dctm/opt/documentum/xPlore/jboss7.1.1/server/DctmServer_IndexagentDms/deployments/IndexAgent.war/WEB-INF/classes/ids.txt  
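Since the index agent renames ids.txt to ids.txt.done once it has been processed, a small watch loop (same path as above) can tell you when the resubmission is finished. This is just a convenience sketch, not part of the original procedure:

```shell
# Wait until the index agent has consumed ids.txt (renamed to ids.txt.done).
IDS_DIR=/dctm/opt/documentum/xPlore/jboss7.1.1/server/DctmServer_IndexagentDms/deployments/IndexAgent.war/WEB-INF/classes
while [ -f "$IDS_DIR/ids.txt" ]; do
  sleep 60
done
echo "ids.txt fully processed"
```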


Update the ACLs and groups by running:

On the standby server, update the script by adding the following option to the java command:


 -ftEngineStandby T


Remove objects from the fulltext DB, if required, by running:




With this approach, we had the fulltext database ready on Monday morning when people started to work with the system.

SQL Server 2014 Tips: Create indexes directly via CREATE TABLE

Thu, 2014-10-09 00:16

This week, I discovered a new enhancement in SQL Server 2014 that is not hidden, but not really visible either. Among Hekaton and the other new features of SQL Server 2014, a little option has appeared in table creation.

On the MSDN website, we can see a new section in the "CREATE TABLE" syntax:





We can directly create an index in the "CREATE TABLE" query. Ok, then let's go!


In my example, I create a diver table with a default clustered index on a primary key (DiverId) and 2 other non-clustered indexes on Diving Level (OWD, AOWD, Level 1, Level 2,…) and Diving Organization (PADI, CMAS, SSI,…).


Prior to SQL Server 2014, you had to create the indexes after creating the table, like this:
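The screenshot from the original post is not reproduced here; a sketch of those three statements, with assumed column names and data types, could look like this:

```sql
-- Pre-2014 style: create the table, then the two non-clustered indexes.
-- Column names and types are assumptions for illustration.
CREATE TABLE dbo.Diver
(
    DiverId            INT           NOT NULL PRIMARY KEY,  -- clustered by default
    DiverName          NVARCHAR(100) NOT NULL,
    DivingLevel        NVARCHAR(20)  NOT NULL,  -- OWD, AOWD, Level 1, ...
    DivingOrganization NVARCHAR(20)  NOT NULL   -- PADI, CMAS, SSI, ...
);

CREATE NONCLUSTERED INDEX IX_Diver_DivingLevel
    ON dbo.Diver (DivingLevel);

CREATE NONCLUSTERED INDEX IX_Diver_DivingOrganization
    ON dbo.Diver (DivingOrganization);
```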




We needed three statements to create a table and two non-clustered indexes.

In SQL 2014, it is very easy with just one query:
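Again, the original screenshot is missing from this copy; with the same assumed names as above, the single-statement SQL Server 2014 form uses the new inline INDEX clause:

```sql
-- SQL Server 2014: table and both non-clustered indexes in one statement.
CREATE TABLE dbo.Diver
(
    DiverId            INT           NOT NULL PRIMARY KEY,
    DiverName          NVARCHAR(100) NOT NULL,
    DivingLevel        NVARCHAR(20)  NOT NULL,
    DivingOrganization NVARCHAR(20)  NOT NULL,
    INDEX IX_Diver_DivingLevel        NONCLUSTERED (DivingLevel),
    INDEX IX_Diver_DivingOrganization NONCLUSTERED (DivingOrganization)
);
```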




I'm very happy to share with you new features that are not necessarily in the spotlight!

You will find the MSDN "CREATE TABLE" reference for SQL Server 2014 here.

Alfresco Summit 2014 - Conferences, Day 1 - Status & New version!

Wed, 2014-10-08 10:13

Today, I have the chance to be in London for the amazing Alfresco Summit 2014 (7-9 October 2014)! Two weeks ago, the Alfresco Summit took place in San Francisco but unfortunately I wasn't available at that time. This year, the Alfresco Summit in London is a three day event: the first day is, as often, a training day with a complete day course and the two other days are composed of conferences (General, Business, Technical or Solution sessions).

So yesterday, I attended the Alfresco University training course with Rich McKnight (Principal Consultant of Alfresco) about "Creating Custom Content Models in Alfresco". It was a really good training, very well presented. I'm sure that a lot of people with basic knowledge of Alfresco now have a better understanding of the general concepts of Content Models in Alfresco.


The conferences started today with a short status of Alfresco given by Doug Dennerline (CEO of Alfresco), with quite a lot of interesting figures. After that, Thomas DeMeo (VP of Product Management at Alfresco) presented the Alfresco Product Keynote... And here we are: the brand new Alfresco Enterprise major version (Alfresco One 5.0) was presented with all its new features. Of course, I already had the opportunity to test almost everything in this version thanks to Alfresco Community 5.0.a (there are often small differences between Community and Enterprise versions), but it's always good to see a presentation of what's new in an Alfresco Enterprise version, especially a major one! Moreover, I can assure you that you will LOVE this new Alfresco version because of the following points:

  • Alfresco 5.0 now uses Solr 4.9 instead of Solr 1.4 (used by Alfresco 4.x), which brings a lot of improvements
  • Live Search feature: start typing anything in the search field and Alfresco will show you results as you type for documents, sites, people, wikis or blogs
  • Search suggestions and spell check feature: while typing, Alfresco presents suggestions related to what you are writing, and if you mistype a word, Alfresco is able to suggest what the word should be (e.g. for 'Especialy', Alfresco will suggest 'Especially')
  • Search by facets: a facet is a search filter based on a pre-defined metadata field like document extension, creator, modifier, aso...
  • Default search operator switched from OR to AND: as a consequence, there are generally fewer results to display (which means better performance) and these results are more relevant (better for user satisfaction)
  • New document previewer (faster, better browser support) with a search feature to search directly inside the document preview. The new previewer lets users crop videos or images directly from Alfresco to keep only the important data in the repository
  • An improved WYSIWYG editor
  • A new page for administrators to manage all sites in the same place, finally!
  • Improved Outlook integration: directly from the Outlook interface, it's possible to create folders/docs, search documents, manage workflows, aso...
  • Improved Microsoft Office integration: document creation, modification and upload can be done directly from MS Office. Moreover, you can specify the Type of the document directly from MS Office (even if it's a company-specific Type, e.g. dbi_Meetings_Summary) and, once done, you can view and edit all Type-specific Properties on your document! All these elements will then be sent to Alfresco when the document is saved
  • Better integration between Alfresco & SharePoint: it's possible to use SharePoint as a UI (client) and Alfresco as the repository where documents are stored (server). With this solution, all new features of Alfresco 5.0 are available directly through the SharePoint sites (Live Search, search with facets, document previewer, aso...)


This new major version of Alfresco brings its new features but some others will disappear:

  • The Eclipse-specific SDK: now everything is built using Maven
  • Liferay portlets don't exist anymore. Well, they were not very useful, so...
  • The Alfresco Explorer client doesn't exist anymore! It's the turn of Alfresco Share to shine!


From what I've seen, there are a lot of other interesting things that will need your attention in the next few months like:

  • A new version of the Alfresco mobile apps (iOS & Android at the moment) with profiles that completely change the way the app looks depending on your role in Alfresco (Business, Sales, Technical, Management, aso...)
  • BI directly in Alfresco (using dashlets)?
  • The new Alfresco Activiti Enterprise release (a new separate software with its own UI) which lets business users write their own workflow logic and create the forms they want for all steps of a workflow, using a very simple graphical interface


So the first day was very interesting AND exciting! I hope that the second day of conferences will be at least as cool! See you tomorrow for a new article with a lot of other interesting things (well, if I have time for it).


OOW14 Day 5 - not only Oracle OpenWorld

Sat, 2014-10-04 11:45

Oracle OpenWorld has ended. It was the first time I attended this great event - and it really is a "great" event:

  • 60000 attendees from 145 countries
  • 500 partners or customers in the exhibit hall
  • 400 demos in the DEMOgrounds
  • 2500 sessions

SQL Patch: Another way to change the plan without changing the code

Fri, 2014-10-03 09:02

Recently, at a customer site, I faced a performance issue. However, as is often the case, the statement is embedded in the application, so it's not possible to rewrite the query. In this blog post, we'll change the execution plan to solve the problem without changing the code - thanks to SQL Patch.
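The full walk-through is not included in this excerpt, but to give an idea of the mechanism: on 11g/12cR1, a SQL patch injecting a hint into one specific statement can be created with the undocumented dbms_sqldiag_internal package. The statement text, hint and patch name below are purely illustrative:

```sql
-- Sketch: attach a FULL hint to one statement without changing the code.
begin
  sys.dbms_sqldiag_internal.i_create_patch(
    sql_text  => 'select count(*) from orders where order_date > :1',
    hint_text => 'FULL(orders)',
    name      => 'demo_sql_patch'
  );
end;
/
```

The optimizer then applies the hint whenever it parses that exact statement, and the patch can later be removed with dbms_sqldiag.drop_sql_patch.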

Java get Class names from package String in classpath

Fri, 2014-10-03 01:28

As a Java developer, you probably know about reflection. However, in order to keep your software architecture flexible, some functionalities are not provided out of the box by the JVM.

In my particular case, I needed to find every class and sub-class inside a package, spread across several JARs.

The Internet has lots of solutions, but reaching this goal remains complicated. After googling, I found a link which provided a partial solution; I would like to thank the website author:

Other solutions required deploying external libraries, but I was not interested in managing another lib in my software just for that purpose.

So, the solution was to retrieve all JARs from the context classloader and loop over them in order to find the classes we are looking for.

Below is a complete Java class resolving this issue:








import java.io.File;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.URL;
import java.net.URLClassLoader;
import java.net.URLDecoder;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/**
 * @author Philippe Schweitzer dbi services Switzerland
 */
public class ClassFinder {

    public static void main(String[] args) throws ClassNotFoundException {

        List<Class> classes = ClassFinder.getClassesFromPackage("YOUR PACKAGE NAME");

        System.out.println("START ClassList:");
        for (Class c : classes) {
            System.out.println(c.toString());
        }
        System.out.println("END ClassList:");
    }

    /**
     * Attempts to list all the classes in the specified package as determined
     * by the context class loader...
     *
     * @param pckgname the package name to search
     * @return a list of classes that exist within that package
     * @throws ClassNotFoundException if something went wrong
     */
    public static List getClassesFromPackage(String pckgname) throws ClassNotFoundException {

        ArrayList result = new ArrayList();
        ArrayList<File> directories = new ArrayList();
        HashMap packageNames = null;

        try {
            ClassLoader cld = Thread.currentThread().getContextClassLoader();
            if (cld == null) {
                throw new ClassNotFoundException("Can't get class loader.");
            }

            // Loop over every JAR known to the context classloader.
            for (URL jarURL : ((URLClassLoader) cld).getURLs()) {
                System.out.println("JAR: " + jarURL.getPath());

                getClassesInSamePackageFromJar(result, pckgname, jarURL.getPath());

                String path = pckgname;
                Enumeration<URL> resources = cld.getResources(path);

                File directory = null;
                while (resources.hasMoreElements()) {
                    String path2 = resources.nextElement().getPath();
                    directory = new File(URLDecoder.decode(path2, "UTF-8"));
                    directories.add(directory);
                }

                if (packageNames == null) {
                    packageNames = new HashMap();
                }
                packageNames.put(directory, pckgname);
            }

        } catch (NullPointerException x) {
            throw new ClassNotFoundException(pckgname + " does not appear to be a valid package (Null pointer exception)");
        } catch (UnsupportedEncodingException encex) {
            throw new ClassNotFoundException(pckgname + " does not appear to be a valid package (Unsupported encoding)");
        } catch (IOException ioex) {
            throw new ClassNotFoundException("IOException was thrown when trying to get all resources for " + pckgname);
        }

        // Load every .class file found in the directories on the classpath.
        for (File directory : directories) {
            if (directory.exists()) {
                String[] files = directory.list();
                for (String file : files) {
                    if (file.endsWith(".class")) {
                        try {
                            result.add(Class.forName(packageNames.get(directory).toString() + '.' + file.substring(0, file.length() - 6)));
                        } catch (Throwable e) {
                            // ignore classes that cannot be loaded
                        }
                    }
                }
            } else {
                throw new ClassNotFoundException(pckgname + " (" + directory.getPath() + ") does not appear to be a valid package");
            }
        }

        return result;
    }

    /**
     * Adds to result the classes of the given package found in the given JAR.
     *
     * @param result the list to which found classes are added
     * @param packageName the package name to search
     * @param jarPath path of the JAR file to scan
     */
    private static void getClassesInSamePackageFromJar(List result, String packageName, String jarPath) {

        JarFile jarFile = null;
        try {
            jarFile = new JarFile(jarPath);
            Enumeration<JarEntry> en = jarFile.entries();

            while (en.hasMoreElements()) {
                JarEntry entry = en.nextElement();
                String entryName = entry.getName();
                packageName = packageName.replace('.', '/');

                if (entryName != null && entryName.endsWith(".class") && entryName.startsWith(packageName)) {
                    try {
                        Class entryClass = Class.forName(entryName.substring(0, entryName.length() - 6).replace('/', '.'));
                        if (entryClass != null) {
                            result.add(entryClass);
                        }
                    } catch (Throwable e) {
                        // do nothing, just continue processing classes
                    }
                }
            }
        } catch (Exception e) {
            // ignore JARs that cannot be opened
        } finally {
            try {
                if (jarFile != null) {
                    jarFile.close();
                }
            } catch (Exception e) {
            }
        }
    }
}