
Feed aggregator

Pro-active AWR Data Mining to Find Change in SQL Execution Plan

Pythian Group - Mon, 2014-07-07 11:11

Many times we have been called in for poor performance of a database, and the problem has been narrowed down to a SQL statement. Subsequent analysis has shown that the execution plan had changed and a wrong execution plan was being used.

The resolution, normally, is to fix the execution plan; in 11g this is done by running:

variable x number
begin
:x :=
    dbms_spm.load_plans_from_cursor_cache(
    sql_id=>'&sql_id',
    plan_hash_value=>&plan_hash,
    fixed=>'YES');
end;
/

or, for 10g, a SQL_PROFILE is created as described in Carlos Sierra’s blog.
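
Once created, the baseline can be verified with a quick query (a hedged sketch, not part of the original resolution):

select sql_handle, plan_name, origin, enabled, accepted, fixed
from   dba_sql_plan_baselines
order  by created desc;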

A pro-active approach can be to mine AWR data for any SQL execution plan changes.

The following query against dba_hist_sqlstat retrieves the list of SQL IDs whose plans have changed. It orders the SQL IDs so that those for which the greatest gain can be achieved by fixing the plan are listed first.

 
spool sql_with_more_than_1plan.txt
set lines 220 pages 9999 trimspool on
set numformat 999,999,999
column plan_hash_value format 99999999999999
column min_snap format 999999
column max_snap format 999999
column min_avg_ela format 999,999,999,999,999
column avg_ela format 999,999,999,999,999
column ela_gain format 999,999,999,999,999
select sql_id,
       min(min_snap_id) min_snap,
       max(max_snap_id) max_snap,
       max(decode(rw_num,1,plan_hash_value)) plan_hash_value,
       max(decode(rw_num,1,avg_ela)) min_avg_ela,
       avg(avg_ela) avg_ela,
       avg(avg_ela) - max(decode(rw_num,1,avg_ela)) ela_gain,
       -- max(decode(rw_num,1,avg_buffer_gets)) min_avg_buf_gets,
       -- avg(avg_buffer_gets) avg_buf_gets,
       max(decode(rw_num,1,sum_exec))-1 min_exec,
       avg(sum_exec)-1 avg_exec
from (
  select sql_id, plan_hash_value, avg_buffer_gets, avg_ela, sum_exec,
         row_number() over (partition by sql_id order by avg_ela) rw_num , min_snap_id, max_snap_id
  from
  (
    select sql_id, plan_hash_value , sum(BUFFER_GETS_DELTA)/(sum(executions_delta)+1) avg_buffer_gets,
    sum(elapsed_time_delta)/(sum(executions_delta)+1) avg_ela, sum(executions_delta)+1 sum_exec,
    min(snap_id) min_snap_id, max(snap_id) max_snap_id
    from dba_hist_sqlstat a
    where exists  (
       select sql_id from dba_hist_sqlstat b where a.sql_id = b.sql_id
         and  a.plan_hash_value != b.plan_hash_value
         and  b.plan_hash_value > 0)
    and plan_hash_value > 0
    group by sql_id, plan_hash_value
    order by sql_id, avg_ela
  )
  order by sql_id, avg_ela
  )
group by sql_id
having max(decode(rw_num,1,sum_exec)) > 1
order by 7 desc
/
spool off
clear columns
set numformat 9999999999

The sample output of this query looks like this:

SQL_ID        MIN_SNAP MAX_SNAP PLAN_HASH_VALUE          MIN_AVG_ELA              AVG_ELA             ELA_GAIN     MIN_EXEC     AVG_EXEC
------------- -------- -------- --------------- -------------------- -------------------- -------------------- ------------ ------------
ba42qdzhu5jb0    65017    67129      2819751536       11,055,899,019       90,136,403,552       79,080,504,532           12            4
2zm7y3tvqygx5    65024    67132       362220407       14,438,575,143       34,350,482,006       19,911,906,864            1            3
74j7px7k16p6q    65029    67134      1695658241       24,049,644,247       30,035,372,306        5,985,728,059           14            7
dz243qq1wft49    65030    67134      3498253836        1,703,657,774        7,249,309,870        5,545,652,097            1            2

MIN_SNAP and MAX_SNAP are the minimum and maximum snap IDs in which the SQL statement occurs.

PLAN_HASH_VALUE is the hash value of the plan with the best elapsed time.

ELA_GAIN is the estimated improvement in elapsed time from using this plan compared to the average execution time.

Using the output of the above query, SQL execution plans can be fixed after proper testing. This method can help DBAs pinpoint and resolve problems with SQL execution plans faster.
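
As a follow-up, here is a minimal sketch of fixing one of the reported plans when it has already aged out of the cursor cache: stage it from AWR into a SQL tuning set, then load it as a fixed baseline. The tuning set name is an illustrative placeholder; the SQL ID, plan hash value and snapshot range are taken from the first row of the sample output above.

variable cnt number

begin
    -- Create an empty SQL tuning set to stage the plan
    dbms_sqltune.create_sqlset(sqlset_name => 'STS_PLAN_FIX');
end;
/

declare
    cur dbms_sqltune.sqlset_cursor;
begin
    -- Pull the statement, restricted to the desired plan, out of the AWR snapshots
    open cur for
        select value(p)
        from   table(dbms_sqltune.select_workload_repository(
                   begin_snap     => 65017,
                   end_snap       => 67129,
                   basic_filter   => 'sql_id = ''ba42qdzhu5jb0'' and plan_hash_value = 2819751536',
                   attribute_list => 'ALL')) p;
    dbms_sqltune.load_sqlset(sqlset_name => 'STS_PLAN_FIX', populate_cursor => cur);
end;
/

begin
    -- Load the staged plan as a FIXED SQL plan baseline
    :cnt := dbms_spm.load_plans_from_sqlset(
                sqlset_name => 'STS_PLAN_FIX',
                fixed       => 'YES');
end;
/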

Categories: DBA Blogs

Salt Stack for Remote Parallel Execution of Commands

Pythian Group - Mon, 2014-07-07 11:08

There are many scenarios when a SysAdmin has to do a “box walk” of the entire infrastructure to execute a command across many servers. This is universally accepted as one of the less glamorous parts of our job. The larger the infrastructure, the longer these box walks take, and the greater chance that human error will occur.

Even giving this task to a junior resource, as is often the case, is not sustainable as the infrastructure grows, and does not represent the best value to the business in terms of resource utilization. Additionally, too much of this type of “grind” work can demoralize even the most enthusiastic team member.

Thankfully, the days of having to do these box walks are over. Thanks to configuration management and infrastructure automation tools, the task has been automated and no longer requires the investment of time by a human SysAdmin that it once did. These tools allow you, at a very high level, to offload this repetitive work, letting the computer do the heavy lifting for you.

 

Introducing Salt Stack

Salt Stack is a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria. Salt Stack is also a configuration management system in its own right, but this post will focus on Salt from a “Command and Control” point of view.

Salt has two main components: the “salt master” (server) and the “salt minions” (clients). Once the minions are accepted by the master, commands can be executed directly from the central salt master server.

Once you have installed your packages, the minion needs to be configured to know where its master is. This can be accomplished through a DNS or hosts-file entry, or by setting the master variable in the /etc/salt/minion config:


master: XXX.XXX.XXX.XXX

Where “XXX.XXX.XXX.XXX” is the IP address of your master server. Once that is done and the salt-minion service has been started, the minion will generate and ship an SSL key back to the master to ensure all communication is secure.

The master must accept the key from the minion before any control can begin.


# Listing the Keys

[root@ip-10-154-193-216 ~]# salt-key -L
Accepted Keys:
Unaccepted Keys:
ip-10-136-76-163.ec2.internal
Rejected Keys:

# Adding The Key

[root@ip-10-154-193-216 ~]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
ip-10-136-76-163.ec2.internal
Proceed? [n/Y] y
Key for minion ip-10-136-76-163.ec2.internal accepted.

# Nailed It! Now the Master can control the Minion!

[root@ip-10-154-193-216 ~]# salt-key -L
Accepted Keys:
ip-10-136-76-163.ec2.internal
Unaccepted Keys:
Rejected Keys:

Note: Not Shown – I added a 2nd Minion

Now that your master has minions, the fun begins. From your master you can now query information from your minions, such as disk space:


[root@ip-10-154-193-216 ~]# salt '*' disk.percent

ip-10-136-76-163.ec2.internal:
----------
/:
15%
/dev/shm:
0%
ip-10-147-240-208.ec2.internal:
----------
/:
14%
/dev/shm:
0%

And you can also execute remote commands, such as finding out service status and restarting services.


[root@ip-10-154-193-216 ~]# salt '*' cmd.run "service crond status"

ip-10-136-76-163.ec2.internal:
crond (pid 1440) is running...
ip-10-147-240-208.ec2.internal:
crond (pid 1198) is running...

[root@ip-10-154-193-216 ~]# salt '*' cmd.run "service crond restart"
ip-10-136-76-163.ec2.internal:
Stopping crond: [ OK ]
Starting crond: [ OK ]
ip-10-147-240-208.ec2.internal:
Stopping crond: [ OK ]
Starting crond: [ OK ]
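
The '*' in the examples above targets every minion, but the selection can be narrowed. As a hedged sketch (the glob and grain values below are illustrative), minions can also be matched by ID patterns, by grains such as the operating system, or by an explicit list:

# Target minions whose ID matches a glob
salt 'ip-10-136-*' cmd.run "uptime"

# Target by grain, e.g. every CentOS box
salt -G 'os:CentOS' test.ping

# Target an explicit list of minions
salt -L 'ip-10-136-76-163.ec2.internal,ip-10-147-240-208.ec2.internal' service.status crond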

These are only the most basic use cases for what Salt Stack can do, but even from these examples it is clear that Salt can become a powerful tool which reduces the potential for human error and increases the efficiency of your SysAdmin team.

By implementing configuration management and infrastructure automation tools such as Salt Stack, you can free up your team members' time for higher-quality work which delivers more business value.

Salt Stack (depending on your setup) can be deployed in minutes. On RHEL/CentOS/Amazon Linux using the EPEL repo, I was able to be up and running with Salt in about 5 minutes on the 3 nodes I used for the examples in this post. Salt can be deployed using another configuration management tool, it can be baked into your provisioning environment, or into base images. If all else fails, you can (ironically) do a box walk to install the package on your existing servers.

Even if you have another configuration management solution deployed, depending on what you are trying to accomplish, using Salt for parallel command execution rather than the config management system can often prove a much simpler and more lightweight solution.

Salt is also a great choice of tool for giving other teams access to execute commands on a subset of boxes without requiring them to have shell access to all of the servers. This allows those teams to get their job done without the SysAdmin team becoming a bottleneck.

Categories: DBA Blogs

Recurring Conversations: AWR Intervals (Part 1)

Doug Burns - Mon, 2014-07-07 07:36
I've seen plenty of blog posts and discussions over the years about the need to increase the default AWR retention period beyond the default value of 8 days. Experienced Oracle folk understand how useful it is to have a longer history of performance metrics to cover an entire workload period so that we can, for example, compare the acceptable performance of the last month end batch processes to the living hell of the current month end. You'll often hear a suggested minimum of 35-42 days and I could make good arguments for even more history for trending and capacity management.

That subject has been covered well enough, in my opinion. (To pick one example, this post and its comments are around 5 years old.)  Diagnostics Pack customers should almost always increase the default AWR retention period for important systems, even allowing for any additional space required in the SYSAUX tablespace.

However, I've found myself talking about the best default AWR snapshot *interval* several times over recent months and years and realising that I'm slightly out of step with the prevailing wisdom on the subject, so let's talk about intervals.

I'll kick off by saying that I think people should stick to the default 1 hour interval, rather than the 15 or 30 minute intervals that most of my peers seem to want. Let me explain why.

Initially I was influenced by some of the performance guys working in Oracle and I remember being surprised by their insistence that one hour is a good interval, which is why they picked it. Hold on, though - doesn't everyone know that a 1 hour AWR report smoothes out detail too much?

Then I got into some discussions about Adaptive Thresholds and it started to make more sense. If you want to compare performance metrics over time and trigger alerts automatically based on apparently unusual performance events or workload profiles, then comparing specific hours today to specific hours a month ago makes more sense than getting down to 15 minute intervals, which would be far too sensitive to subtle changes. Adaptive Thresholds would become barking mad if the interval granularity was too fine. But when Adaptive Thresholds didn't see much use, even though they seemed like a good idea (sorry JB ;-)), this argument started to make less sense to me.

However, I still think that there are very solid reasons to stick to 1 hour and they make more sense when you understand all of the metrics and analysis tools at your disposal and treat them as a box of tools appropriate to different problems.

Let's go back to why people think that a 1 hour interval is too long. The problem with AWR, Statspack and bstat/estat is that they are system-wide reporting tools that capture the difference (or deltas) between the values of various metrics over a given interval. There are at least a couple of problems with that that come to mind.

1) Although it is a bit of a simplification, almost all of the metrics are system-wide, which makes them a poor data source for analysing an individual user's performance experience or an individual batch job, because systems generally have a mixture of different activities running concurrently. (Benchmarks and load tests are notable exceptions.)

2) Problem 1 becomes worse when you are looking at *all* of the activity that occurred over a given period of time (the AWR Interval), condensed into a single data set or report. The longer the AWR period you report on, the more useless the data becomes. What use is an AWR report covering a one week period? So much has happened during that time and we might only be interested in what was happening at 2:13 am this morning.

In other words, AWR reports combine a wide activity scope (everything on the system) with a wide time scope (hours or days if generated without thought). Intelligent performance folks reduce the impact of the latter problem by narrowing the time scope and reducing the snapshot interval so that if a problem has just happened or is happening right now, they can focus on the right 15 minutes of activity [1].

Which makes complete sense in the Statspack world they grew up in, but makes a lot less sense since Oracle 10g was released in 2004! These days there are probably better tools for what you're trying to achieve.

But, as this post is already getting pretty long, I'll leave that for Part 2.

[1] The natural endpoint to this narrowing of time scope is when people use tools like Swingbench for load testing and select the option to generate AWR snapshots immediately before and after the test they're running. Any AWR report of that interval will only contain the relevant information if the test is the only thing running on the system. At last year's Openworld, Graham Wood and I also covered the narrowing of the Activity scope by, for example, running the AWR SQL report (awrrpt.sql) to limit the report to a single SQL statement of interest. It's easy for people to forget - it's a *suite* of tools and worth knowing the full range so that you pick the appropriate one for the problem at hand.

Adaptive Learning Market Acceleration Program (ALMAP) Summer Meeting Notes

Michael Feldstein - Mon, 2014-07-07 05:04

I recently attended the ALMAP Summer Meeting. ALMAP is a program funded by the Gates Foundation, with the goals described in this RFP webinar presentation from March 2013:

We believe that well implemented personalized & adaptive learning has the potential to dramatically improve student outcomes

Our strategy to accelerate the adoption of Adaptive Learning in higher education is to invest in market change drivers… …resulting in strong, healthy market growth

As the program is at its mid-stage (without real results to speak of yet), I’ll summarize it Tony Bates style, with a summary of the program and some notes at the end. Consider this my more-than-140-character response to Glenda Morgan:

@PhilOnEdTech was the agenda of the Gates Summit online at all?

— Glenda Morgan (@morganmundum) June 30, 2014

Although the program was originally planned for 10 institutions, the Gates Foundation funded 14 separate grantees at a level of ~$100,000 each. The courses must run for 3 sequential semesters with greater than 500 students total (per school), and the program will take 24 months total (starting June 2013). The awards were given to the following schools:

Gates has also funded SRI International to provide independent research on the results of each grant.

The concept of accelerator as used by the Gates Foundation is to push adaptive learning past the innovator’s adoption category into the majority category (see RFP webinar).

ALMAP accelerator

The meeting was organized around quick updates from most of the grantees along with panels of their partner software providers (Knewton, ALEKS, CogBooks, Cerego, OLI, ASSISTments, Smart Sparrow), faculty, and several local students. Here is a summary of the meeting agenda.

ALMAP Agenda

Notes

Adaptive Learning is becoming a hotter topic in higher education recently, and I expect that we will hear more from ALMAP as the results come in. In the meantime, here are some preliminary notes from the meeting (some are my own, some are group discussions that struck me as very important).

  • Despite the potential importance of this funding program, I can only find one full article (outside of Gates publications) about the program. Campus Technology had an article in April titled “The Great Adaptive Learning Experiment”. David Wiley referred to the program in his take on the risks of adaptive learning. Scientific American (among a few others) described ALMAP in one paragraph of a larger story on Adaptive Learning.
  • We really need a taxonomy to describe Adaptive Learning and Personalized Learning as both terms are moving into buzzword and marketing-speak territory. During the break out groups, it seemed there was unanimous agreement on this problem of a lack of precise terminology. While the Gates Foundation also funded two white papers on Adaptive Learning, I did not hear the ALMAP participants using the embedded taxonomy (see below) to improve language usage. I’m not sure why. I provided a short start in this post before EDUCAUSE, but I think Michael and I will do some more analysis on the field and terminology soon. Michael also has a post that was published in the American Federation of Teachers publication AFT On Campus, titled “What Faculty Should Know About Adaptive Learning”, that is worth reading.
  • The above problem (lack of accepted taxonomy, different meanings of adaptive), along with faculty flexibility in determining how to use the software, will make the research challenging, at least in terms of drawing conclusions across the full set of experiments. SRI has its work cut out for them.
  • There appears to be a divide in the vendor space between publisher models, where the content is embedded with the platform, and a platform-only model, where content is provided from external sources. Examples of the former include ALEKS, Adapt Courseware and OLI. Examples of the latter include ASSISTments, Smart Sparrow, CogBooks, Cerego. Cerego might be the only example where they provide “starter” content but also allow the user to provide or integrate their own content. Credit to Neil Heffernan from WPI and ASSISTments for this observation over drinks.
  • Programs of this type (pushing innovation and driving for changes in behavior) should not be judged by the first semester of implementation, when faculty are figuring out how to work out the new approach. Real results should be judged starting in the second semester, and one attendee even recommended to avoid results publication until the third semester. This is the primary reason I am choosing to not even describe the individual programs or early results yet.
  • Kudos to the Gates Foundation for including a student panel (like 20MM Evolve and upcoming WCET conference). Below are a few tweets I sent during this panel.

Student on panel: Profs matter a lot – could tell the ones who don't like teaching. Ones who love teaching are contagious, her best classes.

— Phil Hill (@PhilOnEdTech) June 27, 2014

Conversely, fac who use tech poorly – don't understand, no instructions, no effort to use well – have very negative impact on students

— Phil Hill (@PhilOnEdTech) June 27, 2014

Whether it's from prof or from adaptive sw (or both), student panel wants clear instructions on assignments, timely feedback

— Phil Hill (@PhilOnEdTech) June 27, 2014

Expect to hear more from e-Literate as well as e-Literate TV not only on the ALMAP awardees and their progress, but also from the general field of personalized and adaptive learning.

Below is the taxonomy provided as part of the Gates-funded white paper from Education Growth Advisors.

AL Whitepaper Taxonomy

 

Update: I did not mention the elephant in the room for adaptive learning – whether software will replace faculty – because it was not an elephant in this room; however, this is an important question in general.

@ricetopher Good point. Unclear if gates funded automation would eliminate teachers… Are we becoming the machine? @PhilOnEdTech

— Whitney Kilgore (@whitneykilgore) July 7, 2014

At the ALMAP meeting, I believe that most grantees had faculty members present. From these faculty members (including a panel specifically on faculty experiences), there were discussions about changing roles (“role is facilitator, coach, lifeguard in a sense”), the fact that faculty were requested to participate rather than initiate the change, and the challenge of getting students to come to class for hybrid models. One faculty member mentioned that the adaptive software allows more instruction on real writing and less on skill-and-drill activities.

But the way the grantees implemented adaptive learning software was not based on replacing faculty, at least for this program.

The post Adaptive Learning Market Acceleration Program (ALMAP) Summer Meeting Notes appeared first on e-Literate.

Benefits of Single Tenant Deployments

Asif Momen - Mon, 2014-07-07 04:54
While presenting at a database event, I had a question from one of the attendees on the benefits of running Oracle databases in Single Tenant Configuration. I thought it would be nice to post the answer on my blog as it would benefit others too.
From Oracle documentation, “The multitenant architecture enables an Oracle database to function as a multitenant container database (CDB) that includes zero, one, or many customer-created pluggable databases (PDBs). A PDB is a portable collection of schemas, schema objects, and non-schema objects that appears to an Oracle Net client as a non-CDB. All Oracle databases before Oracle Database 12c were non-CDBs”.
Following are the benefits of running databases in Single Tenant Configuration:
  1. Alignment with Oracle’s new multi-tenant architecture
  2. Cost saving. You save on the license fee, as single tenant deployments do not attract the multitenant option license fee. The license is applicable only when you have two or more PDBs.
  3. Upgrade/patch your single PDB from 12.1.0.1 to 12.x easily with reduced downtime
  4. Secure separation of duties (between CDBA & DBA)
  5. Easier PDB cloning (see the sketch after this list)
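
As an illustration of the cloning point, here is a minimal sketch; the PDB names and file paths are placeholders, and note that in 12.1 the source PDB must be opened read-only before cloning:

-- Reopen the source PDB read-only, clone it, then open the copy
alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 open read only;

create pluggable database pdb2 from pdb1
    file_name_convert = ('/u01/oradata/pdb1/', '/u01/oradata/pdb2/');

alter pluggable database pdb2 open;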

I would recommend running all your production and non-production databases in single-tenant configuration (if you are not planning for consolidation using the multitenant option) once you upgrade them to Oracle Database 12c. I expect single tenant deployments to become the default deployment model for customers.

UnifiedPush Server 0.11 is out!

Matthias Wessendorf - Mon, 2014-07-07 03:07

Today we are extremely happy to announce an all new AeroGear UnifiedPush Server!

UnifiedPush Server

The UnifiedPush Server comes with a completely rewritten Angular.js based UI and is now powered by Keycloak! Thanks to the Keycloak team for the great work they delivered in helping the AeroGear team make the Keycloak integration happen.

Getting started

Getting started w/ the new server is still very simple:

  • Setup a database (here is an example for the H2 Database engine. Copy into $JBOSS/standalone/deployments)
  • Download the two WAR files (core and auth) and copy into $JBOSS/standalone/deployments
  • Start the JBoss server

The 0.11.0 release contains a lot of new features, here is a more detailed list:

  • Keycloak Integration for user management
  • Angular.js based AdminUI
  • Metrics and Dashboard for some Analytics around Push Messages
  • Code snippet UI now supports Swift
  • and a lot of fixes and other improvements! See JIRA for all the items

Besides the improvements on the server, we also have some Quickstarts to help you get going with the Push Server

Hello World

The HelloWorld is a set of simple clients that show how to register a device with the UnifiedPush Server. On the Admin UI of the server, you can use the “Send Push” menu to send a message to the different applications running on your phone.
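
Messages do not have to come from the Admin UI alone; they can also be submitted to the server's sender REST endpoint. A hedged sketch (the server address is a placeholder, and the push application ID and master secret come from the application's page in the Admin UI):

curl -u "{PushApplicationID}:{MasterSecret}" \
    -H "Content-Type: application/json" \
    -X POST \
    -d '{"message": {"alert": "Hello from the UnifiedPush Server!"}}' \
    http://SERVER:8080/ag-push/rest/sender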

Mobile Contacts Quickstart

The Mobile Contacts Quickstart is a Push-enabled CRUD example, containing several client applications (Android, Apache Cordova and iOS) and a JavaEE-based backend. The backend app is a secured (Picketlink) JAX-RS application which sends out push messages when a new contact has been created. Sometimes the backend (for a mobile application) has to run behind the firewall. For that, the quickstart contains a Fabric8-based proxy server as well.

Thanks again to the Keycloak team for their assistance.

Now, get your hands dirty and send some push messages! We hope you like the new server!

Next?

We are now polishing the server for the 1.0.0 push release this summer. See the roadmap for details.


Introduction to BatchEdit

Anthony Shorten - Sun, 2014-07-06 21:14

BatchEdit is a new wizard-style utility to help you build a batch architecture quickly, with little fuss and technical knowledge. Customers familiar with the WLST tool that is shipped with Oracle WebLogic will recognize the style of utility I am talking about. The idea behind BatchEdit is simple: it provides a simpler method of configuring batch by boiling the process down to its simplest form. The power lies in the utility itself and the set of pre-optimized templates shipped with it, which generate as much of the configuration as possible while still allowing a flexible approach to configuration.

First of all, the BatchEdit utility, shipped with OUAF 4.2.0.2.0 and above, is disabled by default for backward compatibility. To enable it, you must execute the configureEnv[.sh] -a utility and, in option 50, set Enable Batch Edit Functionality to true and save the changes. The facility is now available to use.

Once enabled, the BatchEdit facility can be executed using the bedit[.sh] <options> utility, where <options> are the options you want to use with the command. The most useful are -h and --h, which display the help for the command options and the extended help. You will find lots of online help in the utility; just type help <topic> to get an explanation and further advice on a specific topic.

The next step is using the utility. The best approach is to think of the configuration as various layers. The first layer is the cluster. The next layer is the definition of threadpools in that cluster, and then the submitters (or jobs) that are submitted to those threadpools. Each of those layers has configuration files associated with it.

Concepts

Before understanding the utility, lets discuss a few basic concepts:

  • BatchEdit allows "labels" to be assigned to each layer. This means you can group like-configured components together. For example, say you wanted to set up a specific threadpoolworker for a specific set of processes, and that threadpoolworker had unique characteristics such as unique JVM settings. You can create a label template for that set of jobs and build it dynamically. At runtime you would tell the threadpoolworker[.sh] command to use that template (using the -l option). For submitters, the label is the Batch Code itself.
  • BatchEdit tracks whether changes are made during a session. If you try to exit without saving, a warning is displayed to remind you of unsaved changes. Customers of the Oracle Enterprise Manager pack for Oracle Utilities can track configuration file version changes within Oracle Enterprise Manager, if desired.
  • BatchEdit essentially edits existing configuration files (e.g. tangosol-coherence-override.xml for the cluster, threadpoolworker.properties for threadpoolworkers, etc.). To ascertain which particular file is being configured during a session, use the what command.
  • BatchEdit will only show the valid options for the scope of the command and the template used. This also applies to the online help, which is context sensitive.
Using the utility

The BatchEdit utility has two distinct modes to build and maintain various configuration files.

  • Initiation Mode - The first mode of the utility is to invoke it with the scope or configuration file to create and/or manage. This is done by specifying the valid options at the command line. This mode is recorded in a preferences file to remember specific settings across invocations. For example, once you decide which cluster type you want to adopt, the utility will remember this preference and show the options for that preference only. It is possible to switch preferences by re-invoking the command with the appropriate options.
  • Edit Mode - Once you have invoked the command, a list of valid options is presented which can be altered using the set command. For example, the set port 42020 command will set the port parameter to 42020. You can add new sections using the add command, and so forth. Online help will show the valid commands. The most important is the save command, which saves all changes.
Process for configuration

To use the command effectively here is a summary of the process you need to follow:

  • Decide your cluster type first. Oracle Utilities Application Framework supports multi-cast, uni-cast and single server clusters. Use the bedit[.sh] -c [-t wka|mc|ss] command to set and manage the cluster parameters. For example:
$ bedit.sh -c
Editing file /oracle/FW42020/splapp/standalone/config/tangosol-coherence-override.xml using template /oracle/FW42020/etc/tangosol-coherence-override.ss.be

Batch Configuration Editor 1.0 [tangosol-coherence-override.xml]
----------------------------------------------------------------

Current Settings

  cluster (DEMO_SPLADM)
  address (127.0.0.1)
  port (42020)
  loglevel (1)
  mode (dev)

> help loglevel

loglevel
--------
Specifies which logged messages will be output to the log destination.

Legal values are:

  0    - only output without a logging severity level specified will be logged
  1    - all the above plus errors
  2    - all the above plus warnings
  3    - all the above plus informational messages
  4-9  - all the above plus internal debugging messages (the higher the number, the more the messages)
  -1   - no messages

> set loglevel 2

Batch Configuration Editor 1.0 [tangosol-coherence-override.xml]
----------------------------------------------------------------

Current Settings

  cluster (DEMO_SPLADM)
  address (127.0.0.1)
  port (42020)
  loglevel (2)
  mode (dev)

> save
Changes saved
> exit
  • Set up your threadpoolworkers. For each group of threadpoolworkers use the bedit[.sh] -w [-l <label>] command, where <label> is the group name. We supply default (no label) and cache threadpool templates. For example:
$ bedit.sh -w
Editing file /oracle/FW42020/splapp/standalone/config/threadpoolworker.properties using template /oracle/FW42020/etc/threadpoolworker.be

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)

> set pool.2 poolname FRED

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)

> add pool

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (DEFAULT)
      threads (5)

> set pool.3 poolname LOCAL

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (LOCAL)
      threads (5)

> set pool.3 threads 0

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (LOCAL)
      threads (0)

>
  • Set up your global submitter settings using the bedit[.sh] -s command, or batch job specific settings using the bedit[.sh] -b <batchcode> command, where <batchcode> is the Batch Control Id for the job. For example:
$ bedit.sh -b F1-LDAP
File /oracle/FW42020/splapp/standalone/config/job.F1-LDAP.properties does not exist - create? (y/n) y
Editing file /oracle/FW42020/splapp/standalone/config/job.F1-LDAP.properties using template /oracle/FW42020/etc/job.be

Batch Configuration Editor 1.0 [job.F1-LDAP.properties]
-------------------------------------------------------

Current Settings

  poolname (DEFAULT)
  threads (1)
  commit (10)
  user (SYSUSER)
  lang (ENG)
  soft.1
      parm (maxErrors)
      value (500)
>

The BatchEdit facility is an easier way of creating and maintaining the configuration files with little effort. More examples, and how to migrate to this new facility, are documented in the Batch Best Practices for Oracle Utilities Application Framework based products whitepaper (Doc Id: 836362.1) available from My Oracle Support.

SQL Plan Baselines

Jonathan Lewis - Sun, 2014-07-06 11:34

Here’s a thread from Oracle-L that reminded me of an important reason why you still have to hint SQL sometimes (rather than following the mantra “if you can hint it, baseline it”).

I have a query that takes 77 seconds to optimize (it’s not a production query, fortunately, but one I engineered to make a point). I can enable SQL plan baseline capture and create a baseline for it, and given the nature of the query I can be confident that the resulting plan will always be exactly the plan I want. If I have to re-optimize the query at any time (because it runs once per hour, say, and is constantly being flushed from the library cache), how much time will the SQL plan baseline save for me?

The answer is NONE.

The first thing that the optimizer does for a query with a stored sql plan baseline is to optimize it as if the baseline did not exist.

If I want to get rid of that 77 seconds I’ll have to extract (most of) the hints from the SQL Plan Baseline and write them into the query.  (Or, maybe, create a Stored Outline – except that they’re deprecated in the latest version of Oracle, and I’d have to check whether the optimizer used the same strategy with stored outlines or whether it applied the outline before doing any optimisation). Maybe we could do with a hint which forces the optimizer to attempt to use an existing, accepted SQL Baseline without attempting the initial optimisation pass.
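
For reference, a hedged sketch of pulling the stored hints out of a baseline; the sql_handle is an illustrative placeholder (look it up in dba_sql_plan_baselines), and I am assuming the +OUTLINE format modifier behaves here as it does elsewhere in the dbms_xplan family:

set long 1000000

select  *
from    table(dbms_xplan.display_sql_plan_baseline(
            sql_handle => 'SQL_abcdef0123456789',
            format     => 'basic +outline'
        ));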

 


Adjusting Histograms

Jonathan Lewis - Fri, 2014-07-04 13:32

This is a quick response to a question on an old blog post asking how you can adjust the high value if you’ve already got a height-balanced histogram in place. It’s possible that someone will come up with a tidier method, but this was just a quick sample I created and tested on 11.2.0.4 in a few minutes.  (Note – this is specifically for height-balanced histograms,  and it’s not appropriate for 12c which has introduced hybrid histograms that will require me to modify my “histogram faking” code a little).

rem
rem	Script:		adjust_histogram.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Jun 2014
rem	Purpose:
rem
rem	Last tested
rem		11.2.0.4
rem	Not tested
rem		12.1.0.1
rem		11.1.0.7
rem		10.2.0.5
rem	Outdated
rem		 9.2.0.8
rem		 8.1.7.4	no WITH subquery
rem
rem	Notes:
rem	Follow-on from a query on my blog about setting the high value
rem	when you have a histogram.  We could do this by hacking, or by
rem	reading the user_tab_histogram values and doing a proper prepare
rem

start setenv
set timing off

execute dbms_random.seed(0)

drop table t1;

begin
	begin		execute immediate 'purge recyclebin';
	exception	when others then null;
	end;

	begin
		dbms_stats.set_system_stats('MBRC',16);
		dbms_stats.set_system_stats('MREADTIM',10);
		dbms_stats.set_system_stats('SREADTIM',5);
		dbms_stats.set_system_stats('CPUSPEED',1000);
	exception
		when others then null;
	end;
/*
	begin		execute immediate 'begin dbms_stats.delete_system_stats; end;';
	exception	when others then null;
	end;

	begin		execute immediate 'alter session set "_optimizer_cost_model"=io';
	exception	when others then null;
	end;

	begin		execute immediate 'alter session set "_optimizer_gather_stats_on_load" = false';
	exception	when others then null;
	end;
*/

	begin		execute immediate  'begin dbms_space_admin.materialize_deferred_segments(''TEST_USER''); end;';
	exception	when others then null;
	end;

end;
/

create table t1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(sysdate,'YYYY') + trunc(dbms_random.normal * 100,1)	d1
from
	generator	v1,
	generator	v2
where
	rownum <= 1e4
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt 	 => 'for all columns size 32'
	);

end;
/

spool adjust_histogram.lst

prompt	==================
prompt	Current High Value
prompt	==================

select to_char(max(d1),'dd-Mon-yyyy hh24:mi:ss') from t1;

prompt	==============================
prompt	Initial Histogram distribution
prompt	==============================

select
	endpoint_number,
	to_date(to_char(trunc(endpoint_value)),'J') + mod(endpoint_value,1) d_val,
	endpoint_value,
	lag(endpoint_value,1) over(order by endpoint_number) lagged_epv,
	endpoint_value -
		lag(endpoint_value,1) over(order by endpoint_number)  delta
from	user_tab_histograms
where
	table_name = 'T1'
and	column_name = 'D1'
;

rem
rem	Note - we can't simply overwrite the last srec.novals
rem	because that doesn't adjust the stored high_value.
rem	We have to make a call to prepare_column_values,
rem	which means we have to turn the stored histogram
rem	endpoint values into their equivalent date types.
rem

prompt	==================
prompt	Hacking the values
prompt	==================

declare

	m_distcnt		number;
	m_density		number;
	m_nullcnt		number;
	srec			dbms_stats.statrec;
	m_avgclen		number;

	d_array			dbms_stats.datearray := dbms_stats.datearray();
	ct			number;

begin

	dbms_stats.get_column_stats(
		ownname		=> user,
		tabname		=> 't1',
		colname		=> 'd1',
		distcnt		=> m_distcnt,
		density		=> m_density,
		nullcnt		=> m_nullcnt,
		srec		=> srec,
		avgclen		=> m_avgclen
	); 

	ct := 0;
	for r in (
		select	to_date(to_char(trunc(endpoint_value)),'J') + mod(endpoint_value,1) d_val
		from	user_tab_histograms
		where	table_name = 'T1'
		and	column_name = 'D1'
		order by endpoint_number
	) loop

		ct := ct + 1;
		d_array.extend;
		d_array(ct) := r.d_val;
		if ct = 1 then
			srec.bkvals(ct) := 0;
		else
			srec.bkvals(ct) := 1;
		end if;

	end loop;

	d_array(ct) := to_date('30-Jun-2015','dd-mon-yyyy');

	dbms_stats.prepare_column_values(srec, d_array);

	dbms_stats.set_column_stats(
		ownname		=> user,
		tabname		=> 't1',
		colname		=> 'd1',
		distcnt		=> m_distcnt,
		density		=> m_density,
		nullcnt		=> m_nullcnt,
		srec		=> srec,
		avgclen		=> m_avgclen
	);
end;
/

prompt	============================
prompt	Final Histogram distribution
prompt	============================

select
	endpoint_number,
	to_date(to_char(trunc(endpoint_value)),'J') + mod(endpoint_value,1) d_val,
	endpoint_value,
	lag(endpoint_value,1) over(order by endpoint_number) lagged_epv,
	endpoint_value -
		lag(endpoint_value,1) over(order by endpoint_number)  delta
from	user_tab_histograms
where
	table_name = 'T1'
and	column_name = 'D1'
;

spool off

doc

#


Best of OTN - Week of June 29th

OTN TechBlog - Fri, 2014-07-04 11:00
Java -

Congratulations to the Winners #IoTDevchallenge -
Oracle Technology Network and Oracle Academy are proud to announce the winners of the IoT Developer Challenge. All of them making the Internet of Things come true. And, of course, built with the Java platform at the center of Things. See who the winners are in this blog post - https://blogs.oracle.com/java/entry/announcing_the_iot_developer_challenge.


JavaEE 8 Roadmap? It's right here.

Forum discussion: Would you use an IDE on a tablet? Join in now!

Systems Community -

OS Tips and Tricks for Sysadmins - This three-session track, part of the Global OTN Virtual Technology Summits (Americas July 9th, EMEA July 10th and APAC July 16th), will show you how to configure Oracle Linux to run Oracle Database 11g and 12c, how to use the latest networking capabilities in Oracle Solaris 11, and how to troubleshoot networking problems in Unix and Linux systems. Experts will be on hand to answer your questions live. Register now.

Database -

Disaster Recovery with Oracle Data Guard and Oracle GoldenGate -
The best part about preparing for the upcoming OTN Virtual Technology Summit is reading up on the technology we'll be presenting. Today's reading: Disaster recovery with Oracle Data Guard... it's an essential capability that every Oracle DBA should master.

Architect Community

Community blogs and social networks have been buzzing about the recent release of Oracle SOA Suite 12c, Oracle Mobile Application Foundation, and other new stuff. I've shared links to several such posts over the past several days on the OTN ArchBeat Facebook page. The three items below drew the most attention.

SOA Suite 12c: Exploring Dependencies - Visualizing dependencies between SOA artifacts | Lucas Jellema
Oracle ACE Director Lucas Jellema explores the use of the Dependency Explorer in JDeveloper 12c for tracking and visualizing dependencies in artifacts in SOA composites or Service Bus projects.

Managing Files for the Hybrid Cloud Use Cases, Challenges and Requirements | Dave Berry
This paper by Dave Berry, Vikas Anand, and Mala Ramakrishnan discusses Oracle Managed File transfer and best practices for sharing files within your enterprise and externally for partners and cloud services.

Say hello to the new Oracle Mobile Application Framework | Shay Shmeltzer
What's the Oracle Mobile Application Framework (MAF)? Oracle MAF, available as an extension to both JDeveloper and Eclipse, lets you develop a single application that will run on both iOS and Android devices. MAF is based on Oracle ADF Mobile, but adds many new features. Want more information? Click the link to read a post by product manager Shay Shmeltzer.

Funny Stuff

On July 4th Americans will celebrate the US victory over the British in the Revolutionary War by grilling mountains of meat, consuming mass quantities of beer, and making trips to the emergency room to reattach fingers blown off with poorly-handled fireworks. This hilarious video featuring comic actor Stephen Merchant offers a UK perspective on the outcome of that war.

A tip of a three-cornered hat to Oracle ACE Director Mark Rittman and Oracle Enterprise Architect Andrew Bond for bringing this video to my attention.

Log Buffer #378, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-07-04 08:43

New technologies, new ideas, and new tips are forthcoming in abundance in numerous blog posts across Oracle, SQL Server, and MySQL. This Log Buffer Edition covers many of the salient ones.

Oracle:

Whether you use a single OEM and are migrating to a new OEM, or have multiple OEMs, the need to move templates between environments will arise.

Oracle Coherence is the industry’s leading in-memory data grid solution that enables applications to predictably scale by providing fast, reliable and scalable access to frequently used data.

Needless to say, some ATG applications are more complex than others.  Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model.

The purpose of this article is to describe some of the important foundational concepts of ATG.

You can use Ops Center to perform some very complex tasks. For instance, you might use it to provision several operating systems across your environment, with multiple configurations for each OS.

SQL Server:

SSRS In a Flash – Level 1 in the Stairway to Reporting Services.

The “Numbers” or “Tally” Table: What it is and how it replaces a loop.

Arshad Ali demonstrates granular level encryption in detail and explains how it differs from Transparent Data Encryption (TDE).

There were many new DMVs added in SQL Server 2012, and some that have changed since SQL Server 2008 R2.

There are some aspects of tables in SQL Server that a lot of people get wrong, purely because they seem so obvious that one feels embarrassed about asking questions.

MySQL:

A much awaited release from the MariaDB project is now stable (GA) – MariaDB Galera Cluster 10.0.12.

Failover with the MySQL Utilities: Part 2 – mysqlfailover.

HowTo: Integrating MySQL for Visual Studio with Connector/Net.

Single database backup and restore with MEB.

Externally Stored Fields in InnoDB.

Categories: DBA Blogs

Speedy #em12c template export

DBASolved - Thu, 2014-07-03 20:50

Whether you use a single OEM and are migrating to a new OEM, or have multiple OEMs, the need to move templates between environments will arise.  I had this exact problem come up recently at a customer site between an OEM 11g and OEM 12c.  In order to move the templates, I needed to export the multiple monitoring templates using EMCLI.  The command that I used to do individual exports was the following:


./emcli export_template -name="<template name>" -target_type="<target_type>" -output_file="/tmp/<template name>.xml"

If you have only one template to move, the EMCLI command above will work.  If you have more than one template to move, the easiest thing to do is to have the EMCLI command run in a script.  This is the beauty of EMCLI; the ability to interact with OEM at the command line and use it in scripts for repeated executions.  Below is a script that I wrote to export templates based on target_types.

Note: If you need to identify the target_types that are supported by OEM, they can be found in SYSMAN.EM_TARGET_TYPES in the repository.
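
For example, a hedged query (the column name is an assumption; describe the table in your repository to confirm it):

select distinct target_type from sysman.em_target_types;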


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#
use strict;
use warnings;

#Parameters
my $oem_home_bin = "/opt/oracle/app/product/12.1.0.4/middleware/oms/bin";
my @columns = ("", 0, 0, 0, 0);
my @buf;
my $target_type = $ARGV[0];

#Program

if (scalar @ARGV != 1)
{
 print "\nUsage:\n";
 print "perl ./emcli_export_templates.pl <target_type>\n\n";
 print "<target_type> = target type for template being exported\n";
 print "refer to sysman.em_target_types in repository for more info.";
 print "\n";
 exit;
}

system($oem_home_bin.'/emcli login -username=<userid> -password=<password>');
system($oem_home_bin.'/emcli sync');

@buf = `$oem_home_bin/emcli list_templates`;

foreach (@buf)
{
 @columns = split (/ {2,}/, $_);

 if ($columns[2] eq $target_type )
 {
 my $cmd = 'emcli export_template -name="'.$columns[0].'" -target_type="'.$columns[2].'" -output_file="/tmp/'.$columns[0].'.xml"';
 system($oem_home_bin.'/'.$cmd);
 print "Finished export of: $columns[0] template\n";
 }
}

system($oem_home_bin.'/emcli logout');
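
Usage is then a single argument. For example, to export all templates for database targets (oracle_database being one of the standard target types):

$ perl ./emcli_export_templates.pl oracle_database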

If you would like to learn more about EMCLI and other ways to use it have a look at these other blogs:

Ray Smith: https://oramanageability.wordpress.com/
Kellyn Pot’Vin: http://dbakevlar.com/
Seth Miller: http://sethmiller.org/

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

New ConfigTools Training available on Youtube

Anthony Shorten - Thu, 2014-07-03 18:12

The Oracle Public Sector Revenue Management product team have released a series of training videos for the Oracle Utilities Application Framework ConfigTools component. This component allows customers to use meta data and scripting to enhance and customize Oracle Utilities Application Framework based solutions without the need for Java programming.

The series uses examples and each recording is around 30-40 minutes in duration.

The channel for the videos is Oracle PSRM Training. The videos are not a substitute for the training courses available, through Oracle University, on ConfigTools, but are useful for people trying to grasp individual concepts while getting an appreciation for the power of this functionality.

 At time of publication, the recordings currently available are:


Partner Webcast - Oracle Coherence & Weblogic Server: Close Integration of Application & Data Grid Tier

Oracle Coherence is the industry’s leading in-memory data grid solution that enables applications to predictably scale by providing fast, reliable and scalable access to frequently used data. The key...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Malware stirs database security concerns for banks

Chris Foot - Thu, 2014-07-03 13:40

In an effort to keep up with the times, many financial institutions have implemented e-banking applications that allow customers to access and manage their finances on the Web or through their smartphones.

Although electronic solutions may boost satisfaction rates and make it easier for account holders to transfer funds, they can cause major database security woes if proper protective measures aren't taken. As of late, there have been two kinds of malware banks have had to contend with.

Attacking the mobile arena
Because it's easy for consumers to get caught up in the luxury of viewing checking information on their smartphones, many forget to follow necessary, defensive protocols. According to ITPro, a new remote access Trojan, named com.II, is targeting Android devices and zeroing in on users with mobile banking applications.

The source noted that the malware abides by the following process:

1. Undermines any security software that's installed
2. Scans the device for eBanking programs
3. Replaces any such tools with fraudulent ones
4. Implements fabricated application updates
5. Steals and delivers short message service notifications to access contact lists.

Combating surveillance
Paco Hope, principal consultant with Cigital, a firm based in the United Kingdom, surmised that the malicious software could infect global banking populations, as it's capable of being manipulated to abide by different languages.

To prevent the program from entering bank accounts and stealing funds, active database monitoring should be employed by enterprises offering e-banking apps. Com.II has the ability to conduct thorough surveillance of individual checking and savings records, allowing the malware's administrators to potentially carry out transactions.

Under the radar
Many programmers harboring ill intentions have found a way to make malicious software basically unrecognizable. MarketWatch acknowledged a new breed of malware, dubbed Emotet, that tricks people into giving it access to bank accounts. The news source outlined the deployment's protocol.

1. Spam messages are sent to victims' emails
2. The contents of those notices detail financial transactions and include links
3. Upon clicking the link, the malware activates code that sits in browsers
4. Once a person visits a bank website, the program can monitor all activity

Trend Micro Vice President of Technology and Solutions JD Sherry asserted that the language used within the encoded messages appears authentic. This makes it easy for individuals to fall victim to the scam.

The administrator's side of the equation
Although it's important for e-banking customers to install adequate malware protection programs, the enterprises administering electronic solutions must find a way to defend their accounts. Constant database surveillance needs to be employed so that security breaches don't get out of hand in the event they occur.

The post Malware stirs database security concerns for banks appeared first on Remote DBA Experts.

Oracle Priority Support Infogram for 03-JUL-2014

Oracle Infogram - Thu, 2014-07-03 11:38

New Releases
Lots of big ones recently:
Announcing Oracle VM 3.3 - Delivers Enterprise-Scale Performance Enhancements, from Oracle's Virtualization Blog.
From ZDNet: Oracle VM 3.3 - another salvo in the virtual machine battle.
From BPM for Government: BPM 12c is Now Available!!
New Oracle Framework Targets Cross-Platform Mobile Developers, from Application Development Trends on MAF.
BPM
Using Oracle BPM Object Methods in Script Tasks (OBPM 12.1.3), from Venugopal Mangipudi's Blog.
From AMIS TECHNOLOGY BLOG: BPM Suite 12c: Quick Start installation – 20 minutes and good to go.
Also from AMIS, on SOA: SOA Suite 12c: Weekend Roundup.
Fusion
From Fusion Applications Developer Relations: June in Review.
WebCenter
How Oracle WebCenter Customers Build Digital Businesses: Contending with Digital Disruption, from the Oracle WebCenter Blog.
RDBMS
Restore datafile from service: A cool #Oracle 12c Feature, from The Oracle Instructor.
OTN
Another great month out there on OTN. Check out the Top 10 ArchBeat Videos for June 2014 on ArchBeat.
MySQL
From MySQL on Windows: HowTo: Integrating MySQL for Visual Studio with Connector/Net.
SQL Developer
From that JEFF SMITH: Clearing the Script Output Buffer in Oracle SQL Developer.
Java
Reza Rahman's Blog shared some material on the Java EE 8 roadmap here: Java Day Tokyo Trip Report.
Programming
Concurrent Crawling with Go and Python, from Venkata Mahalingam.
And on the lighter side of coding: How to interpret human programming terms.
PeopleSoft Turbocharged
From Oracle Applications on Engineered Systems: PeopleSoft on Engineered Systems Documentation.
EBS
From the Oracle E-Business Suite Support Blog:
New Release of the PO Approval Analyzer!
Posting Performance Issues Reported After 11.2.0.4 Database Upgrade
New Troubleshooting Help for Holds Applied at Invoice Validation
New OM 12.1 Cumulative Patch Released!
During R12.2.3 Upgrade QP_UTIL_PUB Is Invalid
Announcing RapidSR for Oracle Payables: Automated Troubleshooting (AT)
New Service Contracts Functionality - Contracts Merge
The Other

Greg Pavlik - Thu, 2014-07-03 11:33

It is the nature of short essays or speeches that they can at best explore the surface of an idea. This is a surprisingly difficult task, since ideas worth exploring usually need to be approached with some rigor. The easy use of the speech form is to promote an idea to listeners or readers who already share a common view - that is one reason speeches are effective forms for political persuasion for rallying true believers. It's much more difficult to create new vantage points or vistas into a new world - a sense of something grander that calls for further exploration.

Yet this is exactly what Ryszard Kapuscinski accomplishes in his series of talks published as The Other. Here, the Polish journalist builds on his experience and most importantly on the reflections of the Lithuanian-Jewish philosopher Emmanuel Levinas to reflect on how the encounter with the Other in a broad, cross-cultural sense is the defining event - and opportunity - in late (or post) modernity. For Kapuscinski, the Other is specifically the non-European cultures in which he spent most of his career as a journalist. For another reader it might be someone very much like Kapuscinski himself.

There are three simple points that Kapuscinski raises that bear attention:

1) The era we live in provides a unique, interpersonal opportunity for encounter with the Other - which is to say that we are neither in the area of relative isolation from the Other that dominated much of human history nor are we any longer in the phase of violent domination that marked the period of European colonial expansion. We have a chance to make space for encounter to be consistently about engagement and exchange, rather than conflict.

    2) This encounter cannot be primarily technical; it must be interpersonal. Technical means are not only anonymous but more conducive to inculcating mass culture than to creating space for authentic personal engagement. The current period of human history - post-industrial, urbanized, technological - is given to mass culture and mass movements as a rule; this is accelerated by globalization and advances in communications. And while it is clear that the early "psychological" literature of the crowd - and I am thinking not only of the trajectory set by Gustave LeBon, but of the later and more mature reflections of Ortega y Gasset - was primarily reactionary, it nonetheless points consistently to the fact that the crowd involves not just a loss of identity, but a loss of the individual: it leaves little room for real encounter and exchange.

    While the increasing ability to encounter different cultures offers the possibility of real engagement, at the same time modern mass culture is the number one threat to the Other, in that it subordinates the value of whatever is unique to whatever is both common and, most importantly, sellable. In visiting Ukraine over the last few years, what fascinated me most were the things that made the country uniquely Ukrainian. Following a recent trip, I noted the following in a piece by New York Times columnist Nicholas Kristof on a visit to Karapchiv: "The kids here learn English and flirt in low-cut bluejeans. They listen to Rihanna, AC/DC and Taylor Swift. They have crushes on George Clooney and Angelina Jolie, watch “The Simpsons” and “Family Guy,” and play Grand Theft Auto. The school here has computers and an Internet connection, which kids use to watch YouTube and join Facebook. Many expect to get jobs in Italy or Spain — perhaps even America."

    What here makes the Other both unique and beautiful is being obliterated by mass culture. Kristof is, of course, a cheerleader for this tragedy, but the true opportunity is the one Kapuscinski asks us to seek: ways to build up the Other and offer support in encounter.

    3) Lastly and most importantly, for the encounter with the Other to be one of mutual recognition and sharing, the personal encounter must have an ethical basis. Kapuscinski observes that the first half of the last century was dominated by Husserl and Heidegger - in other words, by epistemic and ontological models. It is no accident, I think, that the same century was marred by enormities wrought by totalizing ideologies - where ethics is subordinated entirely, ideology can rage out of control. Kapuscinski follows Levinas in response: ultimately, seeing the Other as a source of ethical responsibility is an imperative of the first order.

    The diversity of human cultures is, as Solzhenitsyn rightly noted, the "wealth of mankind, its collective personalities; the very least of them wears its own special colors and bears within itself a special facet of God's design." And yet it is only if we can encounter the Other in terms of mutual respect and self-confidence, in terms of exchange and recognition of value in the Other, that we can actually see the Other as a treasure - one that helps ground who I am as much as it reveals the treasure for what it is. And this is our main challenge - the other paths, conflict and exclusion, are paths we cannot afford to tread.

    Vishal Sikka's Appointment as Infosys CEO

    Abhinav Agarwal - Thu, 2014-07-03 09:21


    My article in the DNA on Vishal Sikka's appointment as CEO of Infosys was published on June 25, 2014.

    This is the full text of the article:


    Vishal Sikka's appointment as CEO of Infosys was by far the biggest news event for the Indian technology sector in some time. Sikka was most recently the Chief Technology Officer at the German software giant SAP, where he led the development of HANA - an in-memory analytics appliance that has proven, since its launch in 2010, to be the biggest challenger to Oracle's venerable flagship product, the Oracle Database. With the launch of Oracle Exalytics in 2012 and Oracle Database In-Memory this month, the final word in that battle between SAP and Oracle remains to be written. Vishal will watch that battle from the sidelines.

    By all accounts, Vishal Sikka is an extraordinary person, and in hiring him Infosys has made what could well be the turning-point move for the iconic Indian software services company. If the transition is well executed, five years from now people will refer to this event as the one that catapulted Infosys into a different league altogether. However, there are several open questions, challenges, and opportunities confronting Infosys the company, Infoscians, and shareholders, which Sikka will need to resolve.

    First off, is Sikka a "trophy CEO?" There will be more than one voice heard whispering that Sikka's appointment is more of a publicity gimmick meant to save face for the company's iconic co-founder, Narayan Murthy, who has been unable to right the floundering ship of the software services giant. Infosys has seen a steady stream of top-level attrition for some time, which only accelerated after Murthy's return. The presence of his son, Rohan Murthy, was seen to grate on several senior executives, and also did not go down well with corporate governance experts. Infosys has also lagged behind its peers in earnings growth. The hiring of a high-profile executive like Sikka has certainly restored much of the lost sheen for Infosys. To sustain that lustre, however, he will need to get some quick wins under his belt.

    The single biggest question on most people's minds is how well the new CEO will adapt to the challenge of running a services organisation - assuming that he sees Infosys' long-term future in services at all. Other key issues include reconciling the "people versus products" dilemma. Infosys lives and grows on the back of its ability to hire more people, place them on billable offshored projects, and keep its salary expenses low - i.e. a volume business with wafer-thin margins that are constantly under pressure. This is different from the hiring philosophy adopted by leading software companies and startups around the world, which is to hire the best, from the best colleges, and provide them with a challenging yet flexible work environment. It should be clear that a single company cannot sustain two diametrically opposite work cultures for any extended length of time. This, of course, assumes that Sikka sees a future for Infosys beyond labor-cost-arbitraged services. Infosys' CEO at the time, in an interview with the New York Times in 2005, stated that he did not see the company aspiring beyond that narrow focus. Whether Sikka subscribes to that view is a different question.

    In diversifying, it can be argued that IBM could serve as a model: it has developed excellence in the three areas of hardware, software, and services. But Infosys has neither a presence in hardware - and it is hard to imagine it getting into the hardware business, for several reasons - nor a particularly strong software products line of business. There is Finacle, but that too has not been performing particularly well. Sikka may see himself as the ideal person to incubate several successful products within Infosys. But there are several challenges here.

    Firstly, there is no company, with the arguable exception of IBM, that has achieved excellence in both services and products. Not Microsoft, not Oracle, not SAP. Sikka will have to decide where to focus: stabilize the services business and develop niche but world-class products that are augmented by services, or build a small but strong products portfolio as a separate business that is hived off from the parent company - de facto if not in reality. One cannot hunt with the hound and run with the hare. If he decides to focus on nurturing a products line of business, he leaves the company vulnerable to cut-throat competition on one hand and the exit of talented people looking for greener pastures on the other.

    Secondly, if Infosys under Sikka does get into products, it will need to decide what products to build. He cannot expect to build yet another database, or yet another operating system, or even yet another enterprise application and hope for stellar results. To use a much-used phrase, he will need to creatively disrupt the market. Here again, Sikka's pedigree points to one area - information and analytics. This is a hot area of innovation that sits at the intersection of multiple technology trends - cloud, in-memory computing, predictive analytics and data mining, unstructured data, social media, data visualization, spatial analytics and location intelligence, and of course the mother of all buzzwords - big data. A huge opportunity awaits at the intersection of analytics, the cloud, and specialized solutions. Should Infosys choose to walk down this path, the probability of success is more than fair given Sikka's background. His name alone will attract the best talent from across the technology world. Also remember that the adoption of technology in India, despite a mobile subscriber base of close to one billion, is still abysmally low. There is a crying need for innovative technology solutions that can be adopted widely and replicated across the country. The several new cities planned by the government present Sikka and Infosys, and of course many other companies, with a staggering opportunity.

    Thirdly, the new CEO will have the benefit of an indulgent investor community, but not for long. Given the high hopes that everyone has for him, Sikka's honeymoon period with Dalal Street may last a couple of quarters, or perhaps even a year, but not much more. The clock is ticking. The world of technology, the world over, is waiting and watching.

    (The opinions expressed in this article are the author's own, and do not necessarily reflect the views of dna)

    Philosophy 22

    Jonathan Lewis - Thu, 2014-07-03 02:59

    Make sure you agree on the meaning of the jargon.

    If you had to vote, would you say that the expressions “more selective” and “higher selectivity” are different ways of expressing the same idea, or are they exact opposites of each other? I think I can safely say that I have seen people waste a ludicrous amount of time arguing past each other and confusing each other because they didn't clarify their terms (and one, or both, parties actually misunderstood the terms anyway).

    Selectivity is a value between 0 and 1 that represents the fraction of data that will be selected – the higher the selectivity, the more data you select.

    If a test is “more selective” then it is a harsher, more stringent test, and returns less data (e.g. Oxford University is more selective than Rutland College of Further Education): more selective means lower selectivity.

    If there’s any doubt when you’re in the middle of a discussion – drop the jargon and explain the intention.

    Footnote

    If I ask: “When you say ‘more selective’ do you mean ….”

    The one answer which is absolutely, definitely, unquestionably the wrong reply is: “No, I mean it’s more selective.”