Feed aggregator

Service Provider initiated SSO on WLS11g using SAML2.0

Darwin IT - Fri, 2014-04-11 05:25
Introduction
At a recent customer I was given the assignment to implement a SAML 2.0 configuration.

In this setup the customer is the Service Provider. They provide a student-administration application for the Dutch higher education sector (colleges and universities). Conventionally the application is implemented on premise, but they want to move to a SaaS model, and one institute is going to use the application from 'the cloud'. In the Dutch education sector, an organization called SurfConext serves as an authentication broker.

A good schematic explanation of the setup is in the Weblogic 11g docs:



When a user connects to the application, WebLogic finds that the user is not authenticated: the request lacks a SAML 2.0 token (2). When configured correctly, the browser is then redirected to SurfConext (3). On an authentication request SurfConext displays a so-called ‘Where Are You From’ (WAYF) page, on which the user can choose the institute to which he or she is connected. SurfConext then provides a means to enter the username and password (4). On submit, SurfConext validates the credentials against the actual IdP, which is provided by the user’s institute (5). On a valid authentication, SurfConext provides a SAML 2.0 token identifying the user, with possible assertions (6). The page is refreshed and redirected to the landing page of the application (7).

From WebLogic's point of view, SurfConext is the Identity Provider, although based on the choice made on the WAYF page it actually reroutes the authentication request to the IdP of the particular institute.

Unfortunately I did not find a how-to for this particular setup in the docs, although I did find this. I also found the following blog, which helped me a lot: https://blogs.oracle.com/blogbypuneeth/entry/steps_to_configure_saml_2. Basically, my setup is only the Service Provider part of that description.

So let me walk you through it. This is a larger blog post; in fact, I copied and pasted large parts from the configuration document I wrote for the customer.

Configure Service Provider
Prerequisites
To be able to test the setup against a test IdP of SurfConext, the configured WebLogic server needs to be reachable from the internet. Appropriate firewall and proxy-server configuration needs to be done upfront so that both SurfConext and a remote user can connect to the WebLogic server.

All URL-related configuration needs to be done using the external URLs configured above.

A PC with a direct internet connection that can reach these same URLs is needed to test the configuration. Connecting a PC to the customer's intranet does enable it to reach the internet, but in my case the internal network configuration prevented connecting to the WebLogic server using the remote URLs.

During the configuration a so-called SAML metadata file is created. SurfConext requests this file to get acquainted with the Service Provider. Because this configuration can change through reconfigurations, SurfConext fetches it through an HTTPS URL. This URL needs to be configured and must be remotely reachable; an option is the htdocs folder of a web server that is reachable over HTTPS. In other SAML 2.0 setups you might instead need to upload the metadata file to the identity provider's server.

You also need the SAML metadata of SurfConext. It can be downloaded from: https://wiki.surfnet.nl/display/surfconextdev/Connection+metadata.
Update Application
The application needs to be updated and redeployed to use the WebLogic authenticators instead of the native logon form. To do so, the web.xml needs to be updated. In the web.xml (in the WEB-INF of the application war file), look for the following part:
<login-config>
  <auth-method>FORM</auth-method>
  <realm-name>jazn.com</realm-name>
  <form-login-config>
    <form-login-page>/faces/security/pages/Login.jspx</form-login-page>
    <form-error-page>/loginErrorServlet</form-error-page>
  </form-login-config>
</login-config>
And replace it with:
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>myrealm</realm-name>
</login-config>
Repackage and redeploy the application to WebLogic.
Add a new SAML2IdentityAsserter
Here we start with the first steps to configure WebLogic: create a SAML2IdentityAsserter on the Service Provider domain.
  1. Login to ServiceProvider domain - Weblogic console
  2. Navigate to “Security Realms”:
  3.  Click on ”myrealm” 
  4. Go to the tab  ”Providers–>Authentication” :
  5. Add a new “SAML2IdentityAsserter”
  6. Name it for example: “SurfConextIdentityAsserter”:
  7. Click OK, save, and activate the changes if you're in a production-mode domain (I'm not going to repeat this every time in the rest of this blog).
  8. Bounce the domain (All WLServers including AdminServer)
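For those who prefer scripting, the same provider can also be created with WLST instead of the console. The following is only a minimal sketch, not taken from the original configuration document: it assumes an online WLST session, placeholder admin credentials and URL, and the default realm; adjust these to your own domain.

connect('weblogic', 'welcome1', 't3://adminhost:7001')   # placeholder credentials and URL
edit()
startEdit()
realm = cmo.getSecurityConfiguration().getDefaultRealm()
# Create the SAML 2.0 identity asserter with the name used in this post
realm.createAuthenticationProvider('SurfConextIdentityAsserter',
    'com.bea.security.saml2.providers.SAML2IdentityAsserter')
save()
activate()
# Afterwards, bounce the AdminServer and the managed servers, as in step 8.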
Configure managed server to use SAML 2.0 Service Provider
In this part the managed server(s) serving the application need to be configured for the so-called 'federated services': they need to know how to behave as a SAML 2.0 Service Provider. So perform the following steps:
  1.  Navigate to the managed server, and select the “Federation Services–>SAML 2.0 Service Provider” sub tab:

  2. Edit the following settings:
  3. Enabled: check
     Preferred Binding: POST
     Default URL: http://hostname:port/application-URI
     This URL should be accessible from outside the organization, that is, from SurfConext.
  4. Click Save.
  5. Navigate to the managed server, and select the “Federation Services–>SAML 2.0 General” sub tab:
  6. Edit the following settings:
  7. Replicated Cache Enabled: check or uncheck as needed
     Contact Person Given Name: e.g. Jean-Michel
     Contact Person Surname: e.g. Jarre
     Contact Person Type: choose one from the list, like 'technical'
     Contact Person Company: e.g. Darwin-IT Professionals
     Contact Person Telephone Number: e.g. 555-12345
     Contact Person Email Address: e.g. info@hatseflats.com
     Organization Name: e.g. Hatseflats B.V.
     Organization URL: www.hatseflats.com
     Published Site URL: http://www.hatseflats.com:7777/saml2
     This URL should be accessible from outside the organization, that is, from SurfConext; the Identity Provider needs to be able to connect to it.
     Entity ID: e.g. http://www.hatseflats.com
     SurfConext expects a URI containing at least a colon (':'), usually the URL of the SP.
     Recipient Check Enabled: uncheck.
     When checked, WebLogic compares the URL the response is posted to with the URL in the original request, which can result in a '403 Forbidden' message.
     Single Sign-on Signing Key Alias: demoidentity
     If signing is used, provide the alias of the proper private key in the keystore that is configured in WLS.
     Single Sign-on Signing Key Pass Phrase: DemoIdentityPassPhrase
     Confirm Single Sign-on Signing Key Pass Phrase: DemoIdentityPassPhrase
     (A WLST sketch of the most important of these settings follows the publish step below.)
  8. Save the changes and export the SAML 2.0 metadata of this server into an XML file:
     1. Restart the server.
     2. Click on 'Publish Meta Data'.
     3. Provide a valid path, like /home/oracle/Documents/..., and click 'OK'.
     4. Copy the resulting file to a location on an HTTP server that is remotely reachable through HTTPS and provide that URL to SurfConext.
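If you prefer scripting, the general settings above can also be set through the managed server's SingleSignOnServices MBean. The sketch below is only an outline and not part of the original configuration document: the MBean path is standard, but verify the exact attribute setter names (EntityID, PublishedSiteURL, DefaultURL) against the SingleSignOnServicesMBean reference for your WebLogic version; the server name 'ms1', the credentials and the URLs are placeholders.

connect('weblogic', 'welcome1', 't3://adminhost:7001')   # placeholder credentials and URL
edit()
startEdit()
cd('/Servers/ms1/SingleSignOnServices/ms1')              # 'ms1' = your managed server name
# Attribute names below follow the console labels; verify them in the MBean reference
cmo.setEntityID('http://www.hatseflats.com')
cmo.setPublishedSiteURL('http://www.hatseflats.com:7777/saml2')
cmo.setDefaultURL('http://hostname:port/application-URI')
save()
activate()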
        Configure Identity Provider metadata on SAML Service Provider in Managed Server
        Add a new “Web Single Sign-On Identity Provider Partner”, named for instance "SAML_SSO_SurfConext":
        1. In the Admin Console navigate to the myrealm Security Realm and select the “Providers–>Authentication” tab.
        2. Select the SurfConextIdentityAsserter SAML2IdentityAsserter and navigate to the “Management” tab:
        3. Add a new “Web Single Sign-On Identity Provider Partner”:
          1. Name it: SAML_SSO_SurfConext
          2. Select “SurfConext-metadata.xml”
          3. Click 'OK'.
        4. Edit the created SSO Identity Provider Partner “SAML_SSO_SurfConext” and provide the following settings:
        5. Name: SAML_SSO_SurfConext
           Enabled: check
           Description: SAML Single Sign On partner SurfConext
           Redirect URIs: /YourApplication-URI
           These are URIs relative to the root of the server.
        Add SAMLAuthenticationProvider
        In this section an Authentication provider is added.
        1. Navigate to the ‘Providers->Authentication’ sub tab of the ‘myrealm’ Security Realm:
        2. Add a new Authentication Provider. Name it: ‘SurfConextAuthenticator’ and select as type: 'SAMLAuthenticator'.
          Click on the new Authenticator and set the Control Flag to ‘SUFFICIENT’:
        3. Return to the authentication providers and click on 'Reorder'.
          Use the selection boxes and the arrow buttons to reorder the providers as follows: The SurfConext authenticator and Identity Asserter should be first in the sequence.
        Set all other authentication providers to SUFFICIENT
        The control flag of the DefaultAuthenticator is set to ‘REQUIRED’ by default, which means it must succeed for every authentication request. For the application, however, we want the SAML authentication to be sufficient on its own, so that the other authenticators do not need to be executed. Therefore set the other providers (the DefaultAuthenticator and any others that exist) to ‘SUFFICIENT’ as well; a WLST sketch for this follows below.
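        A minimal WLST sketch for these control flags, assuming the provider names used in this post and placeholder connection details:

        connect('weblogic', 'welcome1', 't3://adminhost:7001')   # placeholder credentials and URL
        edit()
        startEdit()
        realm = cmo.getSecurityConfiguration().getDefaultRealm()
        # The SAML authenticator only needs to be sufficient for a successful login...
        realm.lookupAuthenticationProvider('SurfConextAuthenticator').setControlFlag('SUFFICIENT')
        # ...and so does the DefaultAuthenticator, so SAML-authenticated users are not re-challenged
        realm.lookupAuthenticationProvider('DefaultAuthenticator').setControlFlag('SUFFICIENT')
        save()
        activate()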
        Enable debug on SAML
        To enable debug messages on SAML, navigate to the 'Debug' tab of the Managed Server:
        Expand the nodes ‘weblogic -> security’. Check the node ‘Saml2’ and click 'Enable'. This adds SAML2-related logging during authentication processing to the server.log. To disable the logging, check the node or higher-level nodes and click 'Disable'.
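        The same logging can also be switched on from WLST through the server's debug MBean. This is a sketch under the assumption that the attributes DebugSecuritySAML2Service and DebugSecuritySAML2Atn correspond to the 'Saml2' node in the console; verify the names against the ServerDebugMBean reference before relying on it.

        connect('weblogic', 'welcome1', 't3://adminhost:7001')   # placeholder credentials and URL
        edit()
        startEdit()
        cd('/Servers/ms1/ServerDebug/ms1')       # 'ms1' = your managed server name
        cmo.setDebugSecuritySAML2Service(true)   # assumed flag names, see the note above
        cmo.setDebugSecuritySAML2Atn(true)
        save()
        activate()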
        Deploy the Identity Name Mapper
        SurfConext generates a userid for each connected user, with two options: a persistent userid kept across all sessions, or a new userid per session. Either way, the userid is a generated GUID that is not registered within the customer's application and by itself does not relate to known users in the application. The SAML token, however, also carries the username. To map this to the actual userid that WebLogic passes to the application, an IdentityMapper class is needed. The class implements a WebLogic interface and uses a custom principal class that implements a WebLogic interface as well. The implementation is pretty straightforward; I found an example that uses an extra bean for a Custom Principal. The IdentityMapper class is as follows:
package nl.darwin_it.saml_example;

import com.bea.security.saml2.providers.SAML2AttributeInfo;
import com.bea.security.saml2.providers.SAML2AttributeStatementInfo;
import com.bea.security.saml2.providers.SAML2IdentityAsserterAttributeMapper;
import com.bea.security.saml2.providers.SAML2IdentityAsserterNameMapper;
import com.bea.security.saml2.providers.SAML2NameMapperInfo;

import java.security.Principal;

import java.util.ArrayList;
import java.util.Collection;
import java.util.logging.Logger;

import weblogic.logging.LoggingHelper;

import weblogic.security.service.ContextHandler;


public class SurfConextSaml2IdentityMapper implements SAML2IdentityAsserterNameMapper,
                                                      SAML2IdentityAsserterAttributeMapper {
    public static final String ATTR_PRINCIPALS = "com.bea.contextelement.saml.AttributePrincipals";
    public static final String ATTR_USERNAME = "urn:mace:dir:attribute-def:uid";

    private Logger lgr = LoggingHelper.getServerLogger();
    private final String className = "SurfConextSaml2IdentityMapper";


    @Override
    public String mapNameInfo(SAML2NameMapperInfo saml2NameMapperInfo,
                              ContextHandler contextHandler) {
        final String methodName = className + ".mapNameInfo";
        debugStart(methodName);
        String user = null;

        debug(methodName, "saml2NameMapperInfo: " + saml2NameMapperInfo.toString());
        debug(methodName, "contextHandler: " + contextHandler.toString());
        debug(methodName, "contextHandler number of elements: " + contextHandler.size());

        // getNames gets a list of ContextElement names that can be requested.
        String[] names = contextHandler.getNames();

        // For each possible element
        for (String element : names) {
            debug(methodName, "ContextHandler element: " + element);
            // If one of those possible elements is the AttributePrincipals element
            if (element.equals(ATTR_PRINCIPALS)) {
                // Put the AttributePrincipals into an ArrayList of CustomPrincipals
                ArrayList<CustomPrincipal> customPrincipals =
                    (ArrayList<CustomPrincipal>) contextHandler.getValue(ATTR_PRINCIPALS);
                int i = 0;
                String attr;
                if (customPrincipals != null) {
                    // For each AttributePrincipal in the ArrayList
                    for (CustomPrincipal customPrincipal : customPrincipals) {
                        // Get the Attribute Name and the Attribute Value
                        attr = customPrincipal.toString();
                        debug(methodName, "Attribute " + i + " Name: " + attr);
                        debug(methodName,
                              "Attribute " + i + " Value: " + customPrincipal.getCollectionAsString());
                        // If the attribute is the uid attribute (ATTR_USERNAME)
                        if (attr.equals(ATTR_USERNAME)) {
                            user = customPrincipal.getCollectionAsString();
                            // Remove the domain suffix (case insensitive) and use the rest as the username
                            if (!user.equals("null")) {
                                user = user.replaceAll("(?i)\\@CLIENT\\.COMPANY\\.COM", "");
                                debug(methodName, "Username (from loginAccount): " + user);
                                break;
                            }
                        }
                        i++;
                    }
                }

                // For some reason the ArrayList of CustomPrincipals was blank - just set the username to the Subject
                if (user == null || "".equals(user)) {
                    user = saml2NameMapperInfo.getName(); // Subject = BRID
                    debug(methodName, "Username (from Subject): " + user);
                }

                return user;
            }
        }

        // Just in case AttributePrincipals does not exist
        user = saml2NameMapperInfo.getName(); // Subject = BRID
        debug(methodName, "Username (from Subject): " + user);

        debugEnd(methodName);

        // Set the username to the Subject
        return user;

        // debug(methodName,"com.bea.contextelement.saml.AttributePrincipals: " + arg1.getValue(ATTR_PRINCIPALS));
        // debug(methodName,"com.bea.contextelement.saml.AttributePrincipals CLASS: " + arg1.getValue(ATTR_PRINCIPALS).getClass().getName());
        // debug(methodName,"ArrayList toString: " + arr2.toString());
        // debug(methodName,"Initial size of arr2: " + arr2.size());
    }

    /* public Collection<Object> mapAttributeInfo0(Collection<SAML2AttributeStatementInfo> attrStmtInfos, ContextHandler contextHandler) {
        final String methodName = className + ".mapAttributeInfo0";
        if (attrStmtInfos == null || attrStmtInfos.size() == 0) {
            debug(methodName, "CustomIAAttributeMapperImpl: attrStmtInfos has no elements");
            return null;
        }

        Collection<Object> customAttrs = new ArrayList<Object>();

        for (SAML2AttributeStatementInfo stmtInfo : attrStmtInfos) {
            Collection<SAML2AttributeInfo> attrs = stmtInfo.getAttributeInfo();
            if (attrs == null || attrs.size() == 0) {
                debug(methodName, "CustomIAAttributeMapperImpl: no attribute in statement: " + stmtInfo.toString());
            } else {
                for (SAML2AttributeInfo attr : attrs) {
                    if (attr.getAttributeName().equals("AttributeWithSingleValue")) {
                        CustomPrincipal customAttr1 = new CustomPrincipal(attr.getAttributeName(), attr.getAttributeNameFormat(), attr.getAttributeValues());
                        customAttrs.add(customAttr1);
                    } else {
                        String customAttr = new StringBuffer().append(attr.getAttributeName()).append(",").append(attr.getAttributeValues()).toString();
                        customAttrs.add(customAttr);
                    }
                }
            }
        }
        return customAttrs;
    } */

    public Collection<Principal> mapAttributeInfo(Collection<SAML2AttributeStatementInfo> attrStmtInfos,
                                                  ContextHandler contextHandler) {
        final String methodName = className + ".mapAttributeInfo";
        Collection<Principal> principals = null;
        if (attrStmtInfos == null || attrStmtInfos.size() == 0) {
            debug(methodName, "AttrStmtInfos has no elements");
        } else {
            principals = new ArrayList<Principal>();
            for (SAML2AttributeStatementInfo stmtInfo : attrStmtInfos) {
                Collection<SAML2AttributeInfo> attrs = stmtInfo.getAttributeInfo();
                if (attrs == null || attrs.size() == 0) {
                    debug(methodName, "No attribute in statement: " + stmtInfo.toString());
                } else {
                    for (SAML2AttributeInfo attr : attrs) {
                        CustomPrincipal principal =
                            new CustomPrincipal(attr.getAttributeName(),
                                                attr.getAttributeValues());
                        /* new CustomPrincipal(attr.getAttributeName(),
                                               attr.getAttributeNameFormat(),
                                               attr.getAttributeValues()); */
                        debug(methodName, "Add principal: " + principal.toString());
                        principals.add(principal);
                    }
                }
            }
        }
        return principals;
    }

    private void debug(String methodName, String msg) {
        lgr.fine(methodName + ": " + msg);
    }

    private void debugStart(String methodName) {
        debug(methodName, "Start");
    }

    private void debugEnd(String methodName) {
        debug(methodName, "End");
    }

}
The commented method ‘public Collection<Object> mapAttributeInfo0’ is left in the source as an example method. The CustomPrincipal bean:
package nl.darwin_it.saml_example;

import java.util.Collection;
import java.util.Iterator;

import weblogic.security.principal.WLSAbstractPrincipal;
import weblogic.security.spi.WLSUser;


public class CustomPrincipal extends WLSAbstractPrincipal implements WLSUser {

    private String commonName;
    private Collection collection;

    public CustomPrincipal(String name, Collection collection) {
        super();
        // Feed the WLSAbstractPrincipal.name. Mandatory
        this.setName(name);
        this.setCommonName(name);
        this.setCollection(collection);
    }

    public CustomPrincipal() {
        super();
    }

    public CustomPrincipal(String commonName) {
        super();
        this.setName(commonName);
        this.setCommonName(commonName);
    }

    public void setCommonName(String commonName) {
        // Feed the WLSAbstractPrincipal.name. Mandatory
        super.setName(commonName);
        this.commonName = commonName;
        System.out.println("Attribute: " + this.getName());
        // System.out.println("Custom Principal commonName is " + this.commonName);
    }

    public Collection getCollection() {
        return collection;
    }

    public String getCollectionAsString() {
        String collasstr;
        if (collection != null && collection.size() > 0) {
            // Return the first value in the collection
            for (Iterator iterator = collection.iterator(); iterator.hasNext(); ) {
                collasstr = (String) iterator.next();
                return collasstr;
            }
        }
        return "null";
    }

    public void setCollection(Collection collection) {
        this.collection = collection;
        if (collection != null && collection.size() > 0) {
            for (Iterator iterator = collection.iterator(); iterator.hasNext(); ) {
                final String value = (String) iterator.next();
                System.out.println("Attribute Value: " + value);
            }
        }
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = super.hashCode();
        result = prime * result + ((collection == null) ? 0 : collection.hashCode());
        result = prime * result + ((commonName == null) ? 0 : commonName.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (!super.equals(obj))
            return false;
        if (getClass() != obj.getClass())
            return false;
        CustomPrincipal other = (CustomPrincipal) obj;
        if (collection == null) {
            if (other.collection != null)
                return false;
        } else if (!collection.equals(other.collection))
            return false;
        if (commonName == null) {
            if (other.commonName != null)
                return false;
        } else if (!commonName.equals(other.commonName))
            return false;
        return true;
    }

}
Package the classes as a Java archive (jar) and place it in a folder on the WebLogic server, for instance $DOMAIN_HOME/lib. Although $DOMAIN_HOME/lib is on the classpath for many purposes, for this purpose the jar file is not picked up by the class loaders, probably due to the class-loader hierarchy. To have the jar file (SurfConextSamlIdentityMapper.jar) on the system classpath, add the complete path to the jar file to the classpath on the Startup tab of both the AdminServer and the managed server(s). The AdminServer is needed here because the class is configured through the Realm, and during the configuration the existence of the class is checked. Apparently it is also required to add weblogic.jar before SurfConextSamlIdentityMapper.jar in the startup classpath. Then restart the AdminServer as well as the managed servers.
Configure the Identity Name Mapper
Now the Identity Name Mapper class can be configured:
1. In the Admin Console navigate to the myrealm Security Realm and select the “Providers–>Authentication” tab.
2. Select the SurfConextIdentityAsserter SAML2IdentityAsserter and navigate to the “Management” tab:
3. Edit the created SSO Identity Provider Partner “SAML_SSO_SurfConext”.

   Provide the following setting:
   Identity Provider Name Mapper Class Name: nl.darwin_it.saml_example.SurfConextSaml2IdentityMapper
Test the application
At this point the application can be tested. Browse to the application from the externally connected PC using the remote URL, for instance https://www.hatseflats.com:7777/YourApplication-URI. If all is well, the browser is redirected to SurfConext’s Where Are You From page. Choose the following provider:

Connect as ‘student1’ with password ‘student1’ (or one of the other test credentials like student2, student3; see https://wiki.surfnet.nl/display/surfconextdev/Test+and+Guest+Identity+Providers). After a successful logon, the browser should be redirected to the application. The chosen credential should of course be known as a userid in the application.
Conclusion
This is one of the bigger stories on this blog; I actually edited the configuration document into a blog entry. I hope you'll find it useful. With this blog you have a complete how-to for the Service Provider part of a Service Provider initiated SSO setup.

SAML 2.0 seemed complicated to me at first, and under the covers it still might be. But it turns out that WebLogic 11g has a great implementation for it that is neatly configurable. It's a little pity that you need a mapper class for the identity mapping; it would be nice if you could simply configure which attribute value is returned as the userid. But the mapper class is not that complicated.

        OBIEE Security: Repositories and Three Layers of Security

        This blog series reviewing OBIEE security has to this point identified how users are defined and authenticated within WebLogic, the major security concerns with WebLogic, and how application roles are defined and mapped to LDAP groups within Enterprise Manager. We will now review OBIEE authorization: how OBIEE determines what data users can see after they log in.

        The OBIEE Repository is comprised of three layers. A very simplistic summary is below:

        • Physical layer: Defines all database or data source connections (user ids and passwords are entered and stored here), the physical tables and columns, and primary and foreign key relationships.
        • Business Model and Mapping layer (BMM): Referencing the physical layer, this is where logical structures are built and aggregation rules are defined. The BMM is really the heart of an OBIEE application.
        • Presentation layer: Referencing the BMM, this layer presents the tables and columns to end users, for example removing unwanted columns or renaming awkwardly named columns.
        Object and Data Level Security

        Object (Physical layer) and Data (BMM) level security is defined within the identity manager in the Repository. Object security can be set to either allow or deny access to a physical table or column. Data security allows rules to be applied to logical tables or columns (BMM layer). These rules can use static values as well as session variables.
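        For example, a data filter on a logical table could be an expression such as "Sales"."Region" = VALUEOF(NQ_SESSION."USER_REGION"), where the session variable USER_REGION is populated when the user logs in; the column and variable names here are purely illustrative.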

        Navigation:  Open identity manager within the RPD -> select user or role -> click on permissions

        Identity Manager

        Data Filter

         

        Object Filter

        Presentation Layer Security Rule

        If you have questions, please contact us at info@integrigy.com

         -Michael Miller, CISSP-ISSMP

        Tags: Reference, Oracle Business Intelligence (OBIEE), Security Resource
        Categories: APPS Blogs, Security Blogs

        Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7

        Senthil Rajendran - Fri, 2014-04-11 04:12
        Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7

        Index
        Big Data Oracle NoSQL in No Time - Getting Started Part 1
        Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
        Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
        Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
        Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
        Big Data Oracle NoSQL in No Time - Smoke Testing Part 6

        Let us expand our environment.
        If your NoSQL store has a write bottleneck, then adding a storage node will help.
        If your NoSQL store has a read bottleneck, then increasing the replication factor will help.

        Steps to make 3x4 (to increase the read throughput, by raising the replication factor)

        kv-> plan deploy-sn -dc dc1 -port 5300 -wait -host server4
        kv-> plan change-parameters -service sn4 -wait -params capacity=3
        kv-> topology clone -current -name 3x4
        kv-> topology change-repfactor -name 3x4 -pool AllStorageNodes -rf 4 -dc dc1
        kv-> topology preview -name 3x4
        kv-> plan deploy-topology -name 3x4 -wait



        Steps to make 4x4 (to increase the write throughput, by redistributing over more shards)

        kv-> plan change-parameters -service sn1 -wait -params capacity=4
        kv-> plan change-parameters -service sn2 -wait -params capacity=4
        kv-> plan change-parameters -service sn3 -wait -params capacity=4
        kv-> plan change-parameters -service sn4 -wait -params capacity=4
        kv-> topology clone -current -name 4x4
        kv-> topology redistribute -name 4x4 -pool AllStorageNodes
        kv-> topology preview -name 4x4
        kv-> plan deploy-topology -name 4x4 -wait



        Install latest patch of APEX 4.2 (4.2.5)

        Dimitri Gielis - Fri, 2014-04-11 02:34
        A few days ago Oracle brought out a new patch for APEX 4.2. This will be the final patch set of this release; the next version of APEX will be 5.0.
        If you already have APEX 4.2.x installed, you can download the patch from support.oracle.com; the patch number is 17966818.
        If you have an earlier version of APEX, you can download the full version of APEX and install that.
        This patch set is no different from the others: it includes some bug fixes, updates to the packaged apps, and the introduction of some new apps. You can find the full patch set notes here.
        Installing the patch in my APEX 4.2.4 environment took less than 15 minutes and everything went fine. 

        I recommend that everybody move to this latest version, as it is the final build of APEX 4.2.

        Update 16-APR-2014: we actually hit one issue, which was fixed by Oracle today. So I would install this additional patch too. In support.oracle.com search for Patch 18609856: APEX_WEB_SERVICE.CLOBBASE642BLOB CONVERTS INCORRECTLY.
        Categories: Development

        Big Data Oracle NoSQL in No Time - Smoke Testing Part 6

        Senthil Rajendran - Fri, 2014-04-11 00:07
        Big Data Oracle NoSQL in No Time - Smoke Testing Part 6

        Index
        Big Data Oracle NoSQL in No Time - Getting Started Part 1
        Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
        Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
        Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
        Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5

        Oracle NoSQL can be smoke tested in different ways, but the most common ones are the ping command and a simple Java program.
        Customers can design their own smoke-testing programs as needed.

        Let us compile the example from the documentation:
        $ export KVBASE=/oraclenosql/lab
        $ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
        $ cd $KVHOME
        $ javac -cp lib/kvclient.jar:examples examples/hello/*.java
        $ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
        Hello Big Data World!
        $

        With all the three storage nodes up and running the below is the output of ping command and the java program

        $ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
        Pinging components of store mystore based upon topology sequence #67
        mystore comprises 30 partitions and 3 Storage Nodes
        Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 255 haPort: 5011
                Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5013
                Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5012
        Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg3-rn3]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5112
                Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5111
                Rep Node [rg2-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5110
        Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg3-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5210
                Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5212
                Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5211
        $
        $ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
        Hello Big Data World!
        $


        Let us take down the third storage node. You will see the ping command confirming that the third storage node is unreachable, while the Java program still works fine with the remaining storage nodes.

        $ export KVHOME=$KVBASE//server3/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE//server3/storage
        $
        $ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
        Pinging components of store mystore based upon topology sequence #67
        mystore comprises 30 partitions and 3 Storage Nodes
        Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 255 haPort: 5011
                Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 137 haPort: 5013
                Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5012
        Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 137 haPort: 5112
                Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5111
                Rep Node [rg2-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5110
        Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1] UNREACHABLE
                Rep Node [rg3-rn1]      Status: UNREACHABLE
                Rep Node [rg2-rn3]      Status: UNREACHABLE
                Rep Node [rg1-rn3]      Status: UNREACHABLE
        $
        $ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
        Hello Big Data World!
        $

        Let us take down the second storage node. With this, only one storage node is up and two are down.
        It is clear from the Java program that the NoSQL store can no longer accept writes, because the default commit policy is SIMPLE_MAJORITY, which requires two replicas.

        $ export KVHOME=$KVBASE//server2/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE//server2/storage
        $
        $ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
        Pinging components of store mystore based upon topology sequence #67
        mystore comprises 30 partitions and 3 Storage Nodes
        Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 257 haPort: 5011
                Rep Node [rg3-rn2]      Status: RUNNING,UNKNOWN at sequence number: 137 haPort: 5013
                Rep Node [rg2-rn2]      Status: RUNNING,UNKNOWN at sequence number: 135 haPort: 5012
        Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1] UNREACHABLE
                Rep Node [rg3-rn3]      Status: UNREACHABLE
                Rep Node [rg1-rn2]      Status: UNREACHABLE
                Rep Node [rg2-rn1]      Status: UNREACHABLE
        Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1] UNREACHABLE
                Rep Node [rg3-rn1]      Status: UNREACHABLE
                Rep Node [rg2-rn3]      Status: UNREACHABLE
                Rep Node [rg1-rn3]      Status: UNREACHABLE
        $
        $ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
        oracle.kv.DurabilityException: (JE 5.0.74) Commit policy: SIMPLE_MAJORITY required 2 replicas. But none were active with this master. (11.2.2.0.39)
        Fault class name: com.sleepycat.je.rep.InsufficientReplicasException
        Remote stack trace: com.sleepycat.je.rep.InsufficientReplicasException: (JE 5.0.74) Commit policy: SIMPLE_MAJORITY required 2 replicas. But none were active with this master.
        $

        By bringing up storage nodes 2 and 3, our store is operational again.

        $ export KVHOME=$KVBASE//server3/oraclesoftware/kv-2.0.39
        $ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE//server3/storage &
        $ export KVHOME=$KVBASE//server2/oraclesoftware/kv-2.0.39
        $ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE//server2/storage &

        $ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
        Pinging components of store mystore based upon topology sequence #67
        mystore comprises 30 partitions and 3 Storage Nodes
        Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg1-rn1]      Status: RUNNING,REPLICA at sequence number: 265 haPort: 5011
                Rep Node [rg3-rn2]      Status: RUNNING,MASTER at sequence number: 141 haPort: 5013
                Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5012
        Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg3-rn3]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5112
                Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 265 haPort: 5111
                Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5110
        Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
                Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5210
                Rep Node [rg2-rn3]      Status: RUNNING,MASTER at sequence number: 141 haPort: 5212
                Rep Node [rg1-rn3]      Status: RUNNING,MASTER at sequence number: 265 haPort: 5211
        $

        $ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
        $ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
        Hello Big Data World!
        $

        The Riley Family, Part III

        Chet Justice - Thu, 2014-04-10 20:44


        That's Mike and Lisa, hanging out at the hospital. Mike's in his awesome cookie monster pajamas and robe...must be nice, right? Oh wait, it's not. You probably remember why he's there, Stage 3 cancer. The joys.

        In October, we helped to send the entire family to Game 5 of the World Series (Cards lost, thanks Red Sox for ruining their night).

        In November I started a GoFundMe campaign, to date, with your help, we've raised $10,999. We've paid over 9 thousand dollars to the Riley family (another check to be cut shortly).

        In December, Mike had surgery. Details can be found here. Shorter: things went fairly well, then they didn't. Mike spent 22 days in the hospital and lost 40 lbs. He missed Christmas and New Years at home with his family. But, as I've learned over the last 6 months, the Riley family really knows how to take things in stride.

        About 6 weeks ago Mike started round 2 of chemo, he's halfway through that one now. He complains (daily, ugh) about numbness, dizziness, feeling cold (he lives in St. Louis, are you sure it's not the weather?), and priapism (that's a lie...I hope).

        Mike being Mike though, barely a complaint (I'll let you figure out where I'm telling a lie).

        Four weeks ago, a chilly (65) Saturday night, Mike and Lisa call. "Hey, I've got some news for you."

        "Sweet," I think to myself. Gotta be good news.

        "Lisa was just diagnosed with breast cancer."

        WTF?

        ARE YOU KIDDING ME? (Given Mike's gallows humor, it's possible).

        "Nope. Stage 1. Surgery on April 2nd."

        FFS

        (Surgery was last week. It went well. No news on that front yet.)

        Talking to them two of them that evening you would have no idea they BOTH have cancer. Actually, one of my favorite stories of the year...the hashtag for Riley Family campaign was #fmcuta. Fuck Mike's Cancer (up the ass). I thought that was hilarious, but I didn't think the Riley's would appreciate it. They did. They loved it. I still remember Lisa's laugh when I first suggested it. They've dropped the latest bad news and Lisa is like, "Oh, wait until you hear this. I have a hashtag for you."

        "What is it?" (I'm thinking something very...conservative. Not sure why, I should know better by now).

        #tna

        I think about that for about .06 seconds. Holy shit! Did you just say tna? Like "tits and ass?"

        (sounds of Lisa howling in the background).

        Awesome. See what I mean? Handling it in stride.

        "We're going to need a bigger boat." All I can think about now is, "what can we do now?"

        First, I raised the campaign goal to 50k. This might be ambitious, that's OK, cancer treatments are expensive enough for one person, and 10K (the original amount) was on the low side. So...50K.

        Second, Scott Spendolini created a very cool APEX app, ostensibly called the Riley Support Group (website? gah). It's a calendar/scheduling app that allows friends and family coordinate things like meals, young human (children) care and other things that most of us probably take for granted. Pretty cool stuff. For instance, Tim Gorman provides pizza on Monday nights (Dinner from pizza hut...1 - large hand-tossed cheese lovers, 1 - large thin-crispy pepperoni, 1 - 4xpepperoni rolls, 1 - cheesesticks).

        Third. There is no third.

        So many of you have donated your hard earned cash to the Riley family, they are incredibly humbled by, and grateful for, everyone's generosity. They aren't out of the woods yet. Donate more. Please. If you can't donate, see if there's something you can help out with (hit me up for details, Tim lives in CO, he's not really close). If you can't do either of those things, send them your prayers or your good thoughts. Any and all help will be greatly appreciated.
        Categories: BI & Warehousing

        How to install and manage a Kerberos Server

        Yann Neuhaus - Thu, 2014-04-10 19:04


        For some time now, I have been working on how to set up a Single Sign-On (SSO) solution in my company. As a big fan of Open Source solutions, I have obviously proposed the implementation of a solution based on Kerberos. What I mean by that is a solution based on the true Kerberos, i.e. MIT Kerberos. Indeed, Kerberos was originally a research project at the Massachusetts Institute of Technology (MIT) in the early 1980s.

        Before starting this kind of project, it's important to clearly define and have in mind the following points:

        • Architecture of the enterprise
        • Operating systems used by end users
        • Operating systems used by applications which must be kerberized
        • Is it difficult to kerberize these applications?

        The answers to these questions provide insight into which types of solutions are possible. For example, if there are no restrictions on which operating system an end user can use (Windows, a Linux distribution, ...), then the introduction of a solution based on a Linux Kerberos server could be a good idea. That's why in this blog I will first explain how to install a MIT Kerberos server. In the second part, I will focus on how to manage this server.

         

        1. Install MIT Kerberos

        It's important to note that the server and the client share the same release and that the MIT Kerberos server can only be installed on a Unix-like operating system. The Mac release is available as part of Mac OS X since version 10.3 (the current release is Mavericks: 10.9). The Key Distribution Center (KDC) is the Kerberos server where all identities (users, computers and kerberized applications) will be stored.

        For this installation, let's define the following properties/variables:

        • example.com = the DNS Domain
        • EXAMPLE.COM = the KDC REALM, which should be the DNS domain in UPPERCASE. In case there is more than one KDC, all names must be unique and self-descriptive
        • kdc01oel.example.com = the FQDN of the KDC
        • 192.168.1.2 = the IP of kdc01oel.example.com

        So let's begin the installation. Obviously, the first thing to do is to download the current release of the MIT Kerberos distribution for the target operating system. This can be done at the following URL: http://web.mit.edu/kerberos/dist/index.html. The current Linux release is krb5-1.12.1-signed.tar:

        [root@oel opt]# wget http://web.mit.edu/kerberos/dist/krb5/1.12/krb5-1.12.1-signed.tar
        --2014-04-01 14:00:28--  http://web.mit.edu/kerberos/dist/krb5/1.12/krb5-1.12.1-signed.tar
        Resolving web.mit.edu... 23.58.214.151
        Connecting to web.mit.edu|23.58.214.151|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 11950080 (11M) [application/x-tar]
        Saving to: “krb5-1.12.1-signed.tar”

        100%[===============================================>] 11,950,080  1.52M/s   in 7.3s
        2014-04-01 14:00:38 (1.56 MB/s) - “krb5-1.12.1-signed.tar” saved [11950080/11950080]
        [root@oel opt]# tar -xvf krb5-1.12.1-signed.tar
        krb5-1.12.1.tar.gz
        krb5-1.12.1.tar.gz.asc

        As you can see, this file is signed, and you can (should) verify the integrity and identity of the software. This can be done, for example, using GNU Privacy Guard (it needs another file that can be found on the MIT Kerberos download page):

        [root@oel opt]# gpg --verify krb5-1.12.1.tar.gz.asc

        After that, just extract the MIT Kerberos source code and build it:

        [root@oel opt]# tar -zxf krb5-1.12.1.tar.gz
        [root@oel opt]# cd krb5-1.12.1/src/
        [root@oel src]# ./configure
        ...
        [root@oel src]# yum install *yacc*
        ...
        [root@oel src]# make
        ...
        [root@oel src]# make install
        ...

        At this step, Kerberos should be installed properly and the binaries, libraries and the documentation should be under /usr/local. The default location is sufficient in almost all cases:

        [root@oel src]# krb5-config --all
        Version:     Kerberos 5 release 1.12.1
        Vendor:      Massachusetts Institute of Technology
        Prefix:      /usr/local
        Exec_prefix: /usr/local

        Now that Kerberos is installed properly, the next step is to configure it. This is done through a configuration file named krb5.conf:

        [root@oel src]# vi /etc/krb5.conf

        [libdefaults]
          default_realm = EXAMPLE.COM
          forwardable = true
          proxiable = true

        [realms]
          EXAMPLE.COM = {
            kdc = kdc01oel.example.com:88
            admin_server = kdc01oel.example.com:749
            default_domain = example.com
          }

        [domain_realm]
          .example.com = EXAMPLE.COM
          example.com = EXAMPLE.COM

        [logging]
          kdc = FILE:/var/log/krb5kdc.log
          admin_server = FILE:/var/log/kadmin.log
          default = FILE:/var/log/krb5lib.log

        To avoid hostname resolution issues, the file /etc/hosts must contain the fully qualified domain name of the server as well as the IP address:

        [root@oel src]# vi /etc/hosts

        127.0.0.1         localhost   kdc01oel
        192.168.1.2       kdc01oel.example.com   kdc01oel

        The next thing to do is to create the realm and the KDC database. Let's begin with the creation of the database parent folder:

        [root@oel src]# cd /usr/local
        [root@oel local]# mkdir /usr/local/var
        [root@oel local]# mkdir /usr/local/var/krb5kdc
        [root@oel local]# chmod 700 /usr/local/var/krb5kdc

        The file krb5.conf (just above) is the generic Kerberos configuration file, but the KDC also has its own configuration file (kdc.conf). Create this file and populate it as follows:

        [root@oel local]# cd /usr/local/var/krb5kdc/
        [root@oel krb5kdc]# vi kdc.conf

        [kdcdefaults]
          kdc_ports = 749,88

        [realms]
          EXAMPLE.COM = {
            database_name = /usr/local/var/krb5kdc/principal
            admin_keytab = /usr/local/var/krb5kdc/kadm5.keytab
            acl_file = /usr/local/var/krb5kdc/kadm5.acl
            key_stash_file = /usr/local/var/krb5kdc/.k5.EXAMPLE.COM
            kdc_ports = 749,88
            max_life = 10h 0m 0s
            max_renewable_life = 7d 0h 0m 0s
          }

        So let's create the Kerberos database using this configuration file:

        [root@oel krb5kdc]# /usr/local/sbin/kdb5_util create -s
        Loading random data
        Initializing database '/usr/local/var/krb5kdc/principal' for realm 'EXAMPLE.COM',
        master key name 'K/M@EXAMPLE.COM'
        You will be prompted for the database Master Password.
        It is important that you NOT FORGET this password.
        Enter KDC database master key:
        Re-enter KDC database master key to verify:
        [root@oel krb5kdc]#

        If there is any error at this point, it is most likely due to a misconfiguration of the /etc/krb5.conf file or because Kerberos can't resolve the hostname (the /etc/hosts file isn't configured properly).

        This finally concludes the first part, about the installation of the MIT Kerberos server.


        2. Manage the KDC

        For this part, I assume that the KDC is set up according to what I've explained above. In the previous part I just showed how to install the KDC, but the KDC isn't actually running yet. So the first thing to do is to configure who will be able to connect to the KDC (that means obtain a ticket) and with which permissions.

        To enter the KDC administration console, use kadmin.local (for the local machine only):

        [root@oel krb5kdc]# /usr/local/sbin/kadmin.local
        Authenticating as principal root/admin@EXAMPLE.COM with password.
        kadmin.local:

        Once in kadmin.local, several commands can be used to manage the KDC. The following command lists them all:

        kadmin.local:  ?
        Available kadmin.local requests:

        add_principal, addprinc, ank
                                 Add principal
        delete_principal, delprinc
                                 Delete principal
        modify_principal, modprinc
                                 Modify principal
        rename_principal, renprinc
                                 Rename principal
        change_password, cpw     Change password
        get_principal, getprinc  Get principal
        list_principals, listprincs, get_principals, getprincs
                                 List principals
        add_policy, addpol       Add policy
        modify_policy, modpol    Modify policy
        delete_policy, delpol    Delete policy
        get_policy, getpol       Get policy
        list_policies, listpols, get_policies, getpols
                                 List policies
        get_privs, getprivs      Get privileges
        ktadd, xst               Add entry(s) to a keytab
        ktremove, ktrem          Remove entry(s) from a keytab
        lock                     Lock database exclusively (use with extreme caution!)
        unlock                   Release exclusive database lock
        purgekeys                Purge previously retained old keys from a principal
        get_strings, getstrs     Show string attributes on a principal
        set_string, setstr       Set a string attribute on a principal
        del_string, delstr       Delete a string attribute on a principal
        list_requests, lr, ?     List available requests.
        quit, exit, q            Exit program.

        So, for example, let's create two principals: one with administrator capabilities (xxx/admin) and another one without:

        kadmin.local:  addprinc mpatou/admin
        WARNING: no policy specified for mpatou/admin@EXAMPLE.COM; defaulting to no policy
        Enter password for principal "mpatou/admin@EXAMPLE.COM":
        Re-enter password for principal "mpatou/admin@EXAMPLE.COM":
        Principal "mpatou/admin@EXAMPLE.COM" created.

        kadmin.local:  addprinc mpatou
        WARNING: no policy specified for mpatou@EXAMPLE.COM; defaulting to no policy
        Enter password for principal "mpatou@EXAMPLE.COM":
        Re-enter password for principal "mpatou@EXAMPLE.COM":
        Principal "mpatou@EXAMPLE.COM" created.

        With a new "listprincs", the two new principals should be displayed but for now, mpatou/admin has no administrative access because this account isn't declared in the access control list. In the ACL file, permissions can be defined by using the following characters:

        • A = Addition of users or policies into the KDC database
        • D = Deletion of users or policies from the KDC database
        • M = Modification of users or policies in the KDC database
        • C = Changing principals' passwords in the KDC database
        • I = Inquiries into the database, to list principal information
        • L = Listing of the principals contained in the database
        • * = Grants the user all of the above permissions

        Moreover, the wildcard (*) can be used to match more than one user. For example, */admin will match all administrative accounts. Let's create the ACL file as defined in the KDC configuration file (kdc.conf):

        kadmin.local:  exit
        [root@oel krb5kdc]# vi /usr/local/var/krb5kdc/kadm5.acl
        */admin@EXAMPLE.COM        ADMCIL

        So there is kadmin.local for local administration, but there is also a remote administration console, kadmin. This remote access can't be used yet because it needs a file that doesn't exist. This file (a "keytab") stores a principal and an encryption key derived from the principal's password. It can be used to log into Kerberos without being prompted for a password, which is why it is useful for all kerberized applications.

        [root@oel krb5kdc]# /usr/local/sbin/kadmin.local
        Authenticating as principal root/admin@EXAMPLE.COM with password.
        kadmin.local:  ktadd -k /usr/local/var/krb5kdc/kadm5.keytab kadmin/admin kadmin/changepw
        Entry for principal kadmin/admin with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/admin with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/admin with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/admin with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/changepw with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/changepw with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/changepw with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        Entry for principal kadmin/changepw with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
        kadmin.local:  exit

        The location of this kadm5.keytab is also defined in the kdc.conf file. Now the KDC server processes are ready to start:

        [root@oel krb5kdc]# /usr/local/sbin/krb5kdc
        [root@oel krb5kdc]# /usr/local/sbin/kadmind

        If there is no error, then the KDC should be running and ready to reply to any client with a valid principal. The easiest way to test it is to try to obtain a TGT (Ticket Granting Ticket) using the kinit command:

        [root@oel krb5kdc]# cd /usr/local/bin
        [root@oel bin]# klist
        klist: Credentials cache file '/tmp/krb5cc_0' not found
        [root@oel bin]# kinit mpatou
        Password for document.write(['mpatou','EXAMPLE.COM'].join('@')):
        [root@oel bin]# klist
        Ticket cache: FILE:/tmp/krb5cc_0
        Default principal: document.write(['mpatou','EXAMPLE.COM'].join('@'))

        Valid starting       Expires              Service principal
        04/03/2014 09:54:48  04/03/2014 19:54:48  krbtgt/EXAMPLE.COM@EXAMPLE.COM
            renew until 04/04/2014 09:54:47
        [root@oel bin]# kdestroy
        [root@oel bin]# klist
        klist: Credentials cache file '/tmp/krb5cc_0' not found

        The klist command can be used to list all existing tickets whereas kdestroy removes them. The KDC is now fully operational, and some optional additional steps can be done (e.g. setting up slave KDCs).

        This finally concludes this blog about how to install an MIT Kerberos server. If you need more information about Kerberos (the MIT, Heimdal or Active Directory implementation), I strongly suggest you read the book "Kerberos, The Definitive Guide" by Jason Garman. That book was the best source of knowledge on this subject for me.

        Oracle Priority Service Infogram for 10-APR-2014

        Oracle Infogram - Thu, 2014-04-10 16:35

        Security
        The Heartbleed vulnerability is causing a major stir. Here are a couple of articles to help clarify what you should do and when:
        From lifehacker: LastPass Now Tells You Which Heartbleed-Affected Passwords to Change.
        and another good article from Mashable: The Heartbleed Hit List: The Passwords You Need to Change Right Now.
        Hadoop
        Using Sqoop for Loading Oracle Data into Hadoop on the BigDataLite VM, from RittmanMead.
        RDBMS
        From The Oracle Instructor: Initialization Parameter Handling for Pluggable Databases in #Oracle 12c.
        12c New Feature: Limit the PGA, from Peter's DBA Blog.
        From The ORACLE-BASE Blog: Online Move Datafile in Oracle 12c
        Oracle Internals
        From the internals guru Tanel Poder: Oracle X$ tables – Part 1 – Where do they get their data from?
        High Availability
        Improving Performance via Parallelism in Oracle Event Processing Pipelines with High-Availability, from the Oracle A-TEAM Chronicles.
        Linux
        From the dbwhisperer: Multi-threaded Oracle 12c architecture on Linux.
        OIM
        OIM 11g R2 Self Registration with CAPTCHA, from Oracle A-TEAM Chronicles.
        Middleware
        From Proactive Support - WebCenter Content: Free Learning Sessions on Oracle Fusion Middleware.
        SOA
        SOA Governance Through Enterprise Architecture, from the SOA & BPM Partner Community Blog.
        Business
        Presentations are serious business, which is why you have to look serious, but relaxed: 10 Body Language Tips Every Speaker Must Know (Infographic), from Entrepreneur.
        EBS
        Over at the Oracle E-Business Suite Support Blog:
        Should You Apply The R12.1.3+ EBS Wide RPC or Wait for a Payables Specific RPC?
        Learn All About Channel Revenue Management Rebates
        Webcast: E-Business Tax Series, Session 2 – Basic Overview, Regime To Rate Setup & Transactional Flow (US Based Setup) From A Financials Perspective
        Webcast: E-Business Tax Series, Session 1 – Prerequisites for Regime to Rate Flow Creation
        Asset Tracking: How to Capitalize Serialized Normal Items Through Sales Order Shipment
        How Can One Disable Continuous Price Breaks in R12?

        ‘Heartbleed’ (CVE-2014-0160) Vulnerability in OpenSSL

        Oracle Security Team - Thu, 2014-04-10 12:44

        Hi, this is Eric Maurice.

        A vulnerability affecting certain versions of the OpenSSL libraries was recently publicly disclosed.  This vulnerability has received the nickname ‘Heartbleed’ and the CVE identifier CVE-2014-0160. 

        Oracle is investigating the use of the affected OpenSSL libraries in Oracle products and solutions, and will provide mitigation instructions when available for these affected Oracle products. 

        Oracle recommends that customers refer to the 'OpenSSL Security Bug - Heartbleed CVE-2014-0160' page on the Oracle Technology Network (OTN) for information about affected products, availability of fixes and other mitigation instructions.  This page will be periodically updated as Oracle continues its assessment of the situation.   Oracle customers can also open a support ticket with My Oracle Support if they have additional questions or concerns.

         

        For More Information:

        The CVE-2014-0160 page on OTN is located at http://www.oracle.com/technetwork/topics/security/opensslheartbleedcve-2014-0160-2188454.html

        The Heartbleed web site is located at http://heartbleed.com/.  This site is not affiliated with Oracle and provides a list of affected OpenSSL versions.

        The My Oracle Support portal can be accessed by visiting https://support.oracle.com

         

        Follow-up: Presentations from the DB Innovation Day event

        Jean-Philippe Pinte - Thu, 2014-04-10 09:52
        Find below the presentations given during the Database Innovation Day event:

        How to restrict data coming back from a SOAP Call

        Angelo Santagata - Thu, 2014-04-10 09:51

        In Sales Cloud, a big positive of the SOAP interface is that a lot of related data is returned by issuing a single query, including master-detail data (i.e. multiple email addresses in contacts). However, these payloads can be very large: in my system, querying a single person returns 305 lines(!), whereas I only want PersonFirstName, PersonLastName and PartyId, which is 3 lines per record.

        Solution

        For each findCriteria element you can add multiple <findAttribute> elements indicating which attributes you want returned. By default, if you provide <findAttribute> entries then only those attributes are returned; this behaviour can be reversed by setting <excludeAttribute> to true.


        Example 1: only retrieving PersonLastName, PersonFirstName and PartyId

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/cdm/foundation/parties/personService/applicationModule/types/" xmlns:typ1="http://xmlns.oracle.com/adf/svc/types/">
           <soapenv:Header/>
           <soapenv:Body>
              <typ:findPerson>
                 <typ:findCriteria xsi:type="typ1:FindCriteria" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                    <typ1:fetchStart>0</typ1:fetchStart>
                    <typ1:fetchSize>100</typ1:fetchSize>
                    <typ1:findAttribute>PersonLastName</typ1:findAttribute>
                    <typ1:findAttribute>PersonFirstName</typ1:findAttribute>
                    <typ1:findAttribute>PartyId</typ1:findAttribute>
                    <typ1:excludeAttribute>false</typ1:excludeAttribute>
                 </typ:findCriteria>
              </typ:findPerson>
           </soapenv:Body>
        </soapenv:Envelope>

        Notes

        findAttributes work on the level-1 attributes of that findCriteria; the value can be an attribute or an element.

        If you want to restrict sub-elements, you can use a childFindCriteria for that sub-element and then add findAttributes within it.

        Example 2: only retrieving PartyId, and within the Email element only EmailAddress

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/cdm/foundation/parties/personService/applicationModule/types/" xmlns:typ1="http://xmlns.oracle.com/adf/svc/types/">
           <soapenv:Header/>
           <soapenv:Body>
              <typ:findPerson>
                 <typ:findControl>
                    <typ1:retrieveAllTranslations/>
                 </typ:findControl>
                 <typ:findCriteria xsi:type="typ1:FindCriteria" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                    <typ1:fetchStart>0</typ1:fetchStart>
                    <typ1:fetchSize>100</typ1:fetchSize>
                    <typ1:findAttribute>PartyId</typ1:findAttribute>
                    <typ1:findAttribute>Email</typ1:findAttribute>
                    <typ1:excludeAttribute>false</typ1:excludeAttribute>
                    <typ1:childFindCriteria>
                       <typ1:fetchStart>0</typ1:fetchStart>
                       <typ1:fetchSize>10</typ1:fetchSize>
                       <typ1:findAttribute>EmailAddress</typ1:findAttribute>
                       <typ1:excludeAttribute>false</typ1:excludeAttribute>
                       <typ1:childAttrName>Email</typ1:childAttrName>
                    </typ1:childFindCriteria>
                 </typ:findCriteria>
              </typ:findPerson>
           </soapenv:Body>
        </soapenv:Envelope>

        Notes

        For a childFindCriteria to work you must also query the child element in the parent findCriteria, which is why "Email" is referenced in a findAttribute of the parent findCriteria in Example 2.

        What Happens in Vegas, Doesn’t Stay in Vegas – Collaborate 14

        Pythian Group - Thu, 2014-04-10 08:04

        IOUG’s Collaborate 14 is star-studded this year, with the Pythian team illuminating various tracks in the presentation rooms. It’s acting like a magnet for data lovers in the expo halls of The Venetian, a kind of rendezvous for those who love their data. So if you want your data to be loved, feel free to drop by Pythian booth 1535.

        Leading from the front is Paul Vallée with an eye-catching title, with real world gems. Then there is Michael Abbey’s rich experience, Marc Fielding’s in-depth technology coverage and Vasu’s forays into Apps Database Administration. There is my humble attempt at Exadata IORM, and Rene’s great helpful tips, and Alex Gorbachev’s mammoth coverage of mammoth data – it’s all there with much more to learn, share and know.

        Vegas Strip is buzzing with the commotion of Oracle. Even the big rollers are turning their necks to see what the fuss is about. Poker faces have broken into amazed grins, and even the weird, kerbside card distribution has stopped. Everybody is focused on the pleasures of Oracle technologies.

        Courtesy of social media, all of this fun isn’t confined to Vegas. You can follow @Pythian on Twitter to know it all, live, and in real time.

        Come Enjoy!

        Categories: DBA Blogs

        Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5

        Senthil Rajendran - Thu, 2014-04-10 04:23
        Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
        Index
        Big Data Oracle NoSQL in No Time - Getting Started Part 1
        Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
        Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
        Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4

        With the current 3x1 setup the NoSQL store is write efficient. In order to make it read efficient, the replication factor has to be increased, which creates more copies of the data to improve read performance.

        In the below scenario we are going to increase the replication factor of the existing topology from 1 to 3 to make it read friendly.


        export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
        java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
        kv-> show topology
        store=mystore  numPartitions=30 sequence=60
          dc=[dc1] name=datacenter1 repFactor=1

          sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
            [rg1-rn1] RUNNING
                  No performance info available
          sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
            [rg2-rn1] RUNNING
                  No performance info available
          sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING
            [rg3-rn1] RUNNING
                  No performance info available

          shard=[rg1] num partitions=10
            [rg1-rn1] sn=sn1
          shard=[rg2] num partitions=10
            [rg2-rn1] sn=sn2
          shard=[rg3] num partitions=10
            [rg3-rn1] sn=sn3

        kv-> plan change-parameters -service sn1 -wait -params capacity=3
        Executed plan 8, waiting for completion...
        Plan 8 ended successfully
        kv-> plan change-parameters -service sn2 -wait -params capacity=3
        Executed plan 9, waiting for completion...
        Plan 9 ended successfully
        kv-> plan change-parameters -service sn3 -wait -params capacity=3
        Executed plan 10, waiting for completion...
        Plan 10 ended successfully
        kv-> topology clone -current -name 3x3
        Created 3x3
        kv-> topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1
        Changed replication factor in 3x3
        kv-> topology preview -name 3x3
        Topology transformation from current deployed topology to 3x3:
        Create 6 RNs

        shard rg1
          2 new RNs : rg1-rn2 rg1-rn3
        shard rg2
          2 new RNs : rg2-rn2 rg2-rn3
        shard rg3
          2 new RNs : rg3-rn2 rg3-rn3

        kv-> plan deploy-topology -name 3x3 -wait
        Executed plan 11, waiting for completion...
        Plan 11 ended successfully
        kv-> show topology
        store=mystore  numPartitions=30 sequence=67
          dc=[dc1] name=datacenter1 repFactor=3

          sn=[sn1]  dc=dc1 server1:5000 capacity=3 RUNNING
            [rg1-rn1] RUNNING
                  No performance info available
            [rg2-rn2] RUNNING
                  No performance info available
            [rg3-rn2] RUNNING
                  No performance info available
          sn=[sn2]  dc=dc1 server2:5100 capacity=3 RUNNING
            [rg1-rn2] RUNNING
                  No performance info available
            [rg2-rn1] RUNNING
                  No performance info available
            [rg3-rn3] RUNNING
                  No performance info available
          sn=[sn3]  dc=dc1 server3:5200 capacity=3 RUNNING
            [rg1-rn3] RUNNING
                  No performance info available
            [rg2-rn3] RUNNING
                  No performance info available
            [rg3-rn1] RUNNING
                  No performance info available

          shard=[rg1] num partitions=10
            [rg1-rn1] sn=sn1
            [rg1-rn2] sn=sn2
            [rg1-rn3] sn=sn3
          shard=[rg2] num partitions=10
            [rg2-rn1] sn=sn2
            [rg2-rn2] sn=sn1
            [rg2-rn3] sn=sn3
          shard=[rg3] num partitions=10
            [rg3-rn1] sn=sn3
            [rg3-rn2] sn=sn1
            [rg3-rn3] sn=sn2

        kv->


        So what have we done?

        plan change-parameters -service sn1 -wait -params capacity=3
        plan change-parameters -service sn2 -wait -params capacity=3
        plan change-parameters -service sn3 -wait -params capacity=3
        We are increasing the capacity of each storage node from 1 to 3 with the change-parameters command.

        topology clone -current -name 3x3
        We are cloning the current topology under the new name 3x3.

        topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1
        We are using the change-repfactor method to modify the replication factor to 3. The replication factor cannot be changed for this topology after executing this command.

        You can use the show topology command to verify that the storage nodes are up and running. Alternatively, use the web interface to check the 3x3 distribution across the storage nodes.

        Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4

        Senthil Rajendran - Thu, 2014-04-10 02:39
        Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
        Index
        Big Data Oracle NoSQL in No Time - Getting Started Part 1
        Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
        Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3

        Previously we set up a 1x1 topology and now we are going to move to a 3x1 topology.
        By doing so we are going to increase how the data is distributed in the NoSQL store. The main advantage is increased write throughput, and this is achieved using the redistribute command. During the redistribution, partitions are distributed across the new shards, and the end result is that you have more replication nodes to help your write operations.

        In the below scenario we are going to add two replication nodes to the existing topology to make it write friendly.

        $ export KVBASE=/oraclenosql/lab
        $ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
        $ java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
        kv-> plan deploy-sn -dc dc1 -port 5100 -wait -host server2
        Executed plan 5, waiting for completion...
        Plan 5 ended successfully
        kv-> plan deploy-sn -dc dc1 -port 5200 -wait -host server3
        Executed plan 6, waiting for completion...
        Plan 6 ended successfully
        kv-> show topology
        store=mystore  numPartitions=30 sequence=36
          dc=[dc1] name=datacenter1 repFactor=1

          sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
            [rg1-rn1] RUNNING
                  No performance info available
          sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
          sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING

          shard=[rg1] num partitions=30
            [rg1-rn1] sn=sn1

        kv->
        kv-> topology clone -current -name 3x1
        Created 3x1
        kv-> topology redistribute -name 3x1 -pool AllStorageNodes
        Redistributed: 3x1
        kv-> topology preview -name 3x1
        Topology transformation from current deployed topology to 3x1:
        Create 2 shards
        Create 2 RNs
        Migrate 20 partitions

        shard rg2
          1 new RN : rg2-rn1
          10 partition migrations
        shard rg3
          1 new RN : rg3-rn1
          10 partition migrations

        kv-> plan deploy-topology -name 3x1 -wait
        Executed plan 7, waiting for completion...
        Plan 7 ended successfully
        kv-> show topology
        store=mystore  numPartitions=30 sequence=60
          dc=[dc1] name=datacenter1 repFactor=1

          sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
            [rg1-rn1] RUNNING
                  No performance info available
          sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
            [rg2-rn1] RUNNING
                  No performance info available
          sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING
            [rg3-rn1] RUNNING
                  No performance info available

          shard=[rg1] num partitions=10
            [rg1-rn1] sn=sn1
          shard=[rg2] num partitions=10
            [rg2-rn1] sn=sn2
          shard=[rg3] num partitions=10
            [rg3-rn1] sn=sn3

        kv->


        So what have we done?

        plan deploy-sn -dc dc1 -port 5100 -wait -host server2
        We are adding the second storage node into the datacenter dc1 which already has one storage node.

        plan deploy-sn -dc dc1 -port 5200 -wait -host server3
        We are adding one more storage node into the datacenter dc1 making it three storage nodes.

        topology clone -current -name 3x1
        We are cloning the existing 1x1 topology to a new candidate topology, 3x1. This topology will be used for the change operations that we plan to perform.

        topology redistribute -name 3x1 -pool AllStorageNodes
        We are redistributing the partitions onto the 3x1 topology.

        topology preview -name 3x1
        We can preview the topology before deploying it to the store.

        plan deploy-topology -name 3x1 -wait
        We are approving the deployment plan 3x1 and the deployment will take time to complete as it depends on the store size.

        You can use the show topology command to verify that the storage nodes are up and running. Alternatively, use the web interface to check the 3x1 distribution across the storage nodes.

        JDeveloper XSL Mapper tip

        Darwin IT - Thu, 2014-04-10 01:48
        Of course you already know that in JDeveloper you can create XSL maps just by drawing lines between source and target elements. In many cases you need functions or complex expressions in between. Those are "drag-and-droppable" as well. I found that you can even drop a function on a line and the function will be added to the expression. So with a little thought about the sequence of drag-and-drops of functions, you can assemble pretty complex expressions just by using the mouse. Although I'm not afraid to hack the XSL source code directly for quickness, I found that this saved me a few switches between the Design and the Source tab. That is convenient, since hacking the source and switching back to the Design tab causes the Designer to initialize again, forcing you to expand all the nodes you were working on again. Understandable, but inconvenient with large XSDs.

        What I did not know until recently is how to assign a variable to a target element. So what I did before was to hack a piece of code like this into the source:
        <ns0:countryCode>
        <xsl:value-of select="$landCodeNL" />
        </ns0:countryCode>


        It turns out that you can do that by "drag-and-drop" as well. In the component-palette you need to select the "Advanced" functions:
        At the bottom you find an xpath-expression element. Drag and drop it into the design area and connect it to the target element.





        When you edit it you can just type in your expression, for instance just a variable. When you start with a dollar sign, it even gives you a drop-down list with the available variables. Just pick the right one and you're done.

        I admit, not a high-end tip, but convenient enough, for me at least.

        _direct_read_decision_statistics_driven, _small_table_threshold and direct path reads on partitioned tables in 11.2.0.3 (Part 2)

        Mihajlo Tekic - Thu, 2014-04-10 00:30
        This is a continuation of my last post regarding direct path reads on partitioned tables in Oracle 11.2.0.3.

        To recap, the behavior I observed is that direct path reads will be performed if the total number of blocks of all partitions that will be accessed exceeds the _small_table_threshold value. That is, if a table consists of 10 partitions, each having 100 blocks, and a query goes after two of the partitions, direct path reads will be performed if _small_table_threshold is lower than 200.

        Also, regardless of how much data has been cached (in the buffer cache) for each of the partitions, if direct path reads are to be performed, all partition segments will be directly scanned. So it is an all-or-nothing situation.

        I also indicated that _direct_read_decision_statistics_driven parameter was set to TRUE (default) for the tests done in my earlier post.

        What is _direct_read_decision_statistics_driven anyway? According to the parameter description, it enables direct path read decision to be based on optimizer statistics. If the parameter is set to FALSE Oracle will use segment headers to determine how many blocks the segment has. (read Tanel Poder’s blogpost for more information)

        Let’s see how queries that access table partitions (full scan) behave if the _direct_read_decision_statistics_driven parameter is set to FALSE in 11.2.0.3. My expectation was that it should be the same as when it is set to TRUE. I thought that once Oracle gets the information about the number of blocks in each of the partitions, it would use the same calculation as if the parameter was set to TRUE. Let’s see.

        But before moving forward, a small disclaimer: do not perform these tests in production or any other important environment. Changing undocumented parameters should be done only under the guidance of Oracle Support. The information presented here is for demonstration purposes only.

        I will use the same table, TEST_PART, that I used in my earlier post.

        I started with flushing the buffer cache (to make sure none of the partitions has blocks in the cache).
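
        For reference, flushing the cache is just the usual command (it also appears later in this post):

        SQL> alter system flush buffer_cache;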

        I set the _direct_read_decision_statistics_driven parameter to FALSE and ran a query that selects data from partition PART_1 only. Each of the partitions contains 4000 rows stored in 65 blocks, plus one segment header block.

        _small_table_threshold in my sandbox environment was set to 117.
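
        If you want to check these values in your own environment, one commonly used approach is to query the x$ksppi and x$ksppcv fixed tables while connected as SYS; the query below is a generic sketch of that technique, not something taken from the original post:

        select i.ksppinm parameter, v.ksppstvl value
          from x$ksppi i, x$ksppcv v
         where i.indx = v.indx
           and i.ksppinm in ('_small_table_threshold',
                             '_direct_read_decision_statistics_driven');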


        SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

        Session altered.

        SQL> SELECT count(1) FROM test_part WHERE col1 in (1);

        COUNT(1)
        ----------
        4000


        As expected, no direct path reads were performed (I used my sese.sql script that scans v$sesstat for statistics that match a given keyword).
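
        The sese.sql script itself isn't listed in this post. A minimal sketch of a script producing the same output shape (SID, ID, NAME and VALUE for statistics whose name matches the keyword passed as &1) could look like the following; the column aliases and the current-session filter are my own assumptions, not the author's actual script:

        -- sese.sql (sketch): non-zero statistics of the current session matching &1
        select s.sid,
               n.statistic# id,
               'STAT.' || n.name name,
               s.value
          from v$sesstat s, v$statname n
         where s.statistic# = n.statistic#
           and s.sid = sys_context('userenv', 'sid')
           and s.value > 0
           and lower(n.name) like '%' || lower('&1') || '%'
         order by n.statistic#;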


        SQL> @sese direct

        no rows selected


        Now let’s see what happens with a query that accesses the first two partitions. Remember, if the _direct_read_decision_statistics_driven parameter is set to TRUE, this query would perform direct path reads because the number of blocks in both partitions, 130 (2x65), exceeds the _small_table_threshold value (117).


        SQL> select count(1) from test_part where col1 in (1,2);

        COUNT(1)
        ----------
        8000

        SQL> @sese direct

        no rows selected


        No direct reads. Definitely different compared to when _direct_read_decision_statistics_driven was set to TRUE.

        How about a query that accesses three partitions:


        SQL> select count(1) from test_part where col1 in (1,2,3);

        COUNT(1)
        ----------
        12000

        SQL> @sese direct

        no rows selected


        Still no direct path reads.

        How about if we access all 7 partitions:


        SQL> select count(1) from test_part where col1 in (1,2,3,4,5,6,7);

        COUNT(1)
        ----------
        28000

        SQL> @sese direct

        no rows selected


        No direct path reads.

        So what is going on? It seems that when _direct_read_decision_statistics_driven is set to FALSE, Oracle makes the decision on a partition-by-partition basis. If the number of blocks in the partition is less than or equal to _small_table_threshold, the buffer cache will be used; otherwise direct path reads are performed.

        What if some of the partitions were already cached in the buffer cache?

        In the next test I’ll:
        • Flush the buffer cache again
        • Set _direct_read_decision_statistics_driven to FALSE
        • Run a query that accesses the first two partitions
        • Decrease the value for _small_table_threshold to 60
        • Run a query that accesses the first three partitions.
        • Check if direct path reads were performed and how many
        With this test I’d like to see if Oracle will utilize the buffer cache when the segment data is cached and the number of blocks in the partition is greater than _small_table_threshold.

        SQL> alter system flush buffer_cache;

        System altered.

        SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

        Session altered.

        SQL> select count(1) from test_part where col1 in (1,2);

        COUNT(1)
        ----------
        8000

        SQL> @sese direct

        no rows selected


        At this point, PART_1 and PART_2 partitions should be entirely in the buffer cache. If you want, you could query X$KCBOQH to confirm this (from a different session logged in as SYS).


        SQL> conn /as sysdba
        Connected.
        SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
        2 from X$KCBOQH b, dba_objects o
        3 where b.obj#=o.data_object_id
        4 and o.object_name='TEST_PART'
        5 group by o.subobject_name, b.obj#
        6 order by 1;

        SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
        ------------------------------ ---------- --------------
        PART_1 146024 66
        PART_2 146025 66


        As expected, both partitions are in the buffer cache.

        Now let’s decrease _small_table_threshold to 60 and run a query that scans the first three partitions:


        SQL> alter session set "_small_table_threshold"=60;

        Session altered.

        SQL> alter session set events '10046 trace name context forever, level 8';

        Session altered.

        SQL> select count(1) from test_part where col1 in (1,2,3);

        COUNT(1)
        ----------
        12000

        alter session set events '10046 trace name context off';

        SQL> @sese direct

        SID ID NAME VALUE
        ---------- ---------- -------------------------------------------------- ----------
        9 76 STAT.consistent gets direct 65
        9 81 STAT.physical reads direct 65
        9 380 STAT.table scans (direct read) 1


        Here they are, 65 direct path reads, one table scan (direct read) which means one of the partitions was scanned using direct path reads. Which one? Yes, you are right, the one that is not in the buffer cache (PART_3 in this example).

        If you query X$KCBOQH again you can see that only one block of PART_3 is in the cache. That is the segment header block.


        SQL> conn /as sysdba
        Connected.
        SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
        2 from X$KCBOQH b, dba_objects o
        3 where b.obj#=o.data_object_id
        4 and o.object_name='TEST_PART'
        5 group by o.subobject_name, b.obj#
        6 order by 1;

        SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
        ------------------------------ ---------- --------------
        PART_1 146024 66
        PART_2 146025 66
        PART_3 146026 1 <===


        This means that when _direct_read_decision_statistics_driven is set to FALSE in 11.2.0.3, Oracle uses a totally different calculation compared to the one used when the parameter is set to TRUE (see my earlier post).

        Moreover, it seems Oracle examines each of the partitions separately (which I initially expected to be the case even when _direct_read_decision_statistics_driven is set to TRUE) and applies the rules described in Alex Fatkulin’s blog post. That is, if any of the following is true, Oracle will scan the data in the buffer cache; otherwise direct path reads will be performed:
        •  the number of blocks in the segment is lower than or equal to _small_table_threshold
        •  at least 50% of the segment data blocks are in the buffer cache
        •  at least 25% of the data blocks are dirty 
        The conclusion so far is that in 11.2.0.3 you may observe different behavior for queries that access table partitions using full table scans if you decide to change the _direct_read_decision_statistics_driven parameter.

        I will stop here. I ran the same tests against 11.2.0.4 and 12.1.0.1 and noticed some differences in the behavior compared to the one I just wrote about (11.2.0.3). I will post these results in the next few days.

        Stay tuned...



        Paul Vallée’s Interview with Oracle Profit Magazine

        Pythian Group - Wed, 2014-04-09 23:00

        Aaron Lazenby, Editor at Oracle’s Profit Magazine, interviewed Pythian Founder Paul Vallée this week to discuss the growing risk of internal threats to IT.

        “What we need to create is complete accountability for everything that happens around a data center, and that’s where our industry is not up to snuff right now. We tend to think that if you secure access to the perimeter of the data center, then what happens in the meeting inside can be unsupervised. But that’s not good enough,” says Paul.

        The interview, Inside Job, is a preview of Paul’s Collaborate ’14 session taking place later today in Las Vegas. If you’re at Collaborate, make sure you don’t miss Paul’s presentation Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden. The presentation begins at 4:15 PM Pacific at the Venetian, Level 3 – Murano 3306.

        What are your thoughts? How else can organizations mitigate the risk of internal threats? Comment below.

        Categories: DBA Blogs

        SQL Developer’s Interface for GIT: Interacting with a GitHub Repository Part 1

        Galo Balda's Blog - Wed, 2014-04-09 22:45

        In my previous post, I showed how to clone a GitHub repository using SQL Developer. In this post I’m going to show how to synchronize the remote and local repositories after the remote gets modified.

        Here I use GitHub to commit a file called sp_test_git.pls.  You can create files by clicking on the icon the red arrow is pointing to.

        [Screenshot: new_file]

        The content of the file is a PL/SQL procedure that prints a message.

        [Screenshot: file_content]
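
        The post doesn't show the source of sp_test_git.pls, but a procedure of that shape could be as simple as the sketch below (the body and message text are just an illustration, not the actual content of the repository file):

        create or replace procedure sp_test_git
        as
        begin
          -- print a simple message; enable output with: set serveroutput on
          dbms_output.put_line('Hello from the Git repository!');
        end sp_test_git;
        /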

        At this point, the remote repository and the local repository are out of sync. The first thing that you may want to do before modifying any repository is to make sure that you have the most current version of it, so that it includes the changes made by other developers. Let’s synchronize remote and local.

        Make sure you open the Versions window: go to the main menu and click on Team -> Versions.

        [Screenshot: versions]

        Open the Local branch and click on master, then go to the main menu and click on Team -> Git -> Fetch to open the “Fetch from Git” wizard. Fetching a repository copies changes from the remote repository into your local system, without modifying any of your current branches. Once you have fetched the changes, you can merge them into your branches or simply view them. We can see the changes in the Branch Compare window by going to the main menu and clicking on Team -> Git -> Branch Compare.

        [Screenshot: branch_compare]

        Branch Compare shows that sp_test_git.pls has been fetched from the remote master branch. We can right-click on this entry and select Compare to see the differences.

        [Screenshot: compare]

        The window on the left displays the content of the fetched file and the window on the right displays the content of the same file in the local repository. In this case the right window is empty because this is a brand new file that doesn’t exist locally. Let’s accept the changes and merge them into the local repository. We go to the Branch Compare window, right-click on the entry, select Merge and click on the “Ok” button.

        [Screenshot: merge]

        Now the changes should have been applied to the local repository.

        [Screenshot: local_update]

        We can go to the path where the local repository is located and confirm that sp_test_git.pls is there.

         

         


        Filed under: Source Control, SQL Developer Tagged: Source Control, SQL Developer
        Categories: DBA Blogs

        New Web Services Capabilities available

        Anthony Shorten - Wed, 2014-04-09 17:59

        As part of Oracle Utilities Application Framework V4.2.0.2.0, a new set of Web Services capabilities is now available to completely replace the Multi-Purpose Listener (MPL) and the XAI Servlet with new and more capable facilities.

        Here is a summary of the facilities:

        • There is a new Inbound Web Services (IWS) capability to replace the XAI Inbound Services and XAI Servlet (which will be deprecated in a future release). This capability combines the metadata within the Oracle Utilities Application Framework with the power of the native Web Services capability of the J2EE Web Application Server to give the following advantages:
          • It is possible to define individual Web Services to be deployed on the J2EE Web Application Server. Web based and command line utilities have been provided to allow developers to design, deploy and manage individual Inbound Web Services.
          • It is now possible to define multiple operations per Web Service. XAI was restricted to a single operation with multiple transaction types. IWS supports multiple operations separated by transaction type. Operations can even extend to different objects within the same Web Service. This will aid in rationalizing Web Services.
          • IWS makes it possible to monitor and manage individual Web Services from the J2EE Web Application Server console (or Oracle Enterprise Manager). These metrics are also available from Oracle Enterprise Manager to provide SLA and trend tracking capabilities. These metrics can also be fine-grained to the operation level within a Web Service.
          • IWS allows greater flexibility in security. Individual Services can now support standards such as WS-Policy, WS-ReliableMessaging etc as dictated by the capabilities of the J2EE Web Application Server. This includes message and transport based security, such as SAML, X.509 etc and data encryption.
          • For customers lucky enough to be on Oracle WebLogic and/or Oracle SOA Suite, IWS now allows full support for Oracle Web Services Manager (OWSM) on individual Web Services. This also allows the Web Services to enjoy additional WS-Policy support, as well as, for the first time, Web Service access rules. These access rules allow you to control when and who can run the individual service using simple or complex criteria ranging from system settings (such as dates and times), security (the user and roles) or individual data elements in the payload.
          • Customers migrating from XAI to IWS will be able to reuse a vast majority of their existing definitions. The only change is that each IWS service has to be registered and redeployed to the server, using the provided tools, and the URL for invoking the service will be altered. XAI can be used in parallel to allow for flexibility in migration.
        • The IWS capability and the migration path for customers using XAI Inbound Services are described in a new whitepaper, Migrating from XAI to IWS (Doc Id: 1644914.1), available from My Oracle Support.

        Over the next few weeks I will be publishing articles highlighting capabilities for both IWS and the OSB to help implementations upgrade to these new capabilities.