
Feed aggregator

Free Apache Cassandra Training Event in Cambridge, MA March 23

Pythian Group - Fri, 2015-03-20 14:24

I’ll be speaking, along with DataStax and Microsoft representatives, at Cassandra Essentials Day this coming Monday (March 23) in Cambridge, MA. This free training event will cover the basics of Apache Cassandra and show you how to try it out quickly, easily, and free of charge on the Azure cloud. Expect to learn about the unique aspects of Cassandra and DataStax Enterprise and to dive into real-world use cases.

Space is limited, so register online to reserve a spot.

Categories: DBA Blogs

My Co-op Experience at Pythian

Pythian Group - Fri, 2015-03-20 06:30

That’s me in front of our office. I promise there is a bigger Pythian logo!

Unlike most other engineering physics students at Carleton, who prefer to remain within the limits of engineering, I chose to apply for a software developer co-op position at Pythian in 2014. For those of you who do not know much about the engineering physics program (I get that a lot, so I will save you the trip to Google and tell you), this is how Stanford University describes theirs: “Engineering Physics prepares students to apply physics to tackle 21st century engineering challenges and to apply engineering to address 21st century questions in physics.” As you can imagine, it has very little to do with software development. You might ask, then, why apply to Pythian?

Programming is changing the way our world functions. Look at the finance sector: companies rely on complicated algorithms to determine where they should be investing their resources, which in turn determines the course of growth for the company. In science and technology, algorithms help us make sense of huge amounts of unstructured data which would otherwise take us years to process, and help us understand and solve many of our 21st century problems. Clearly, learning how to write these algorithms, or code, cannot be a bad idea; rather, it is a skill that will be invaluable. A wise (or a not so wise) man once said (you will know what I mean if you have seen the movie I, Robot): “If you cannot solve a problem, make a program that can.” In a way, maybe I intend to apply physics to tackle all of our 21st century problems by writing programs. (That totally made sense in my head.)

Whatever it was, my interest in programming or my mission to somehow tie physics, engineering, and programming together, I found myself looking forward to an interview with Pythian. I remember having to call in for a Skype interview. While waiting for my interviewers to join the call, I remember thinking about all the horror co-op stories I had heard: how you will be given piles of books to read over your work term (as you might have guessed from this blog so far, not much of a reader, this one; if I hit 500 words, first round’s on me!), and how students are usually labelled as just “the co-op” and given no meaningful work at all.

Just as I was drifting away in my thoughts, my interviewers joined the call. Much to my surprise, they were not the traditional hiring managers in formal dress making you feel like just another interviewee in a long list of interviewees. Instead they were warm and friendly people who were genuinely interested in what I could offer to the company as a co-op student: the programming languages I knew, which one was my favourite, the kind of programs I had written, and more. They clearly stated the kind of work I could expect as a co-op student, which was exactly the same kind of work that the team was going to be doing. And most importantly, my interviewers seemed to be enjoying the kind of work they do and the place they work at.

So, when I was offered the co-op position at Pythian, I knew I had to say yes!

My pleasant experience with Pythian has continued ever since. The most enjoyable aspect of my work has been the fact that I am involved in a lot of the team projects which means I am always learning something new and gaining more knowledge each day, after each project. I feel that in an industry like this, the best way to learn is by experience and exposure. At Pythian that is exactly what I am getting.

And if those are not good enough reasons to enjoy working for this company, I also have the privilege of working with some extremely experienced and knowledgeable people in the web development industry. Bill Gates once suggested that he wanted to hire the smartest people at Microsoft and surround himself with them, because that would create an environment where everyone would learn from each other and excel in their work. I agree with that. Well, if you are the next Bill Gates, go ahead: create your multibillion dollar company, hire the best of the best, and immerse yourself in the presence of all that knowledge and intelligence. But I feel I have found a great alternative, a poor man's approach, a student-budget approach, or whatever you want to call it: take full advantage of working with some really talented people and learn as much as you can.

Today, five months into my yearlong placement with Pythian, I could not be more sure or more proud of becoming a part of this exciting company, of becoming a Pythianite. And I feel my time spent at this company has put me well on course to complete my goal of tying physics, engineering and programming together.

Categories: DBA Blogs

Log Buffer #415, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-03-20 06:25

This Log Buffer Edition covers Oracle, SQL Server and MySQL with a keen eye on novel ideas.

Oracle:

The case was to roll forward a physical standby with an RMAN SCN incremental backup taken from primary.

Oracle Database 12c: Smart upgrade

This blog covers how to specify query parameters using the REST Service Editor.

Production workloads blend Cloud and On-Premise Capabilities

ALTER DATABASE BEGIN BACKUP and ALTER DATABASE END BACKUP

SQL Server:

Mail Fails with SQLCMD Error

How to get Database Design Horribly Wrong

Using the ROLLUP, CUBE, and GROUPING SETS Operators

The Right and Wrong of T-SQL DML TRIGGERs (SQL Spackle)

How converting extensive, repetitive code to a data-driven approach resolved a maintenance headache and helped identify bugs

MySQL:

Distributing innodb tables made simpler!

Choosing a good sharding key in MongoDB (and MySQL)

Update a grails project from version 2.3.8 to version 2.4.4

MySQL Enterprise Backup 3.12.0 has been released

If a table is partitioned, it is easier to maintain. If a table has grown huge and the backups just keep running longer, you probably need to think about archiving or purging.

Categories: DBA Blogs

Using strace to debug application errors in linux

Pythian Group - Fri, 2015-03-20 06:24

strace is a very useful tool which traces system calls and signals for a running process. This helps a lot while debugging application-level performance issues and bugs. The aim of this post is to demonstrate the power of strace in pinning down an application bug.

I came across an issue in which nagios was sending the following alerts for a RHEL6 system.

***** Nagios ***** Notification Type: PROBLEM Service: NTP Host: xxxxx Address: xx.xx.xx.xx State: UNKNOWN Date/Time: Tue Feb 17 10:08:36 EST 2015 Additional Info: cant create socket connection

On manually executing the nagios plugin on the affected system, we can see that the command is not running correctly.

# /usr/lib64/nagios/plugins/check_ntp_time -H localhost -w 1 -c 2
can't create socket connection

I ran strace on the command. This would create a file /tmp/strace.out with strace output.

# strace -xvtto /tmp/strace.out /usr/lib64/nagios/plugins/check_ntp_time -H localhost -w 1 -c 2

Following are the options which I passed.

-x          Print all non-ASCII strings in hexadecimal string format.
-v          Print unabbreviated versions of environment, stat, termios, etc. calls. These structures
            are very common in calls and so the default behavior displays a reasonable subset of
            structure members. Use this option to get all of the gory details.
-tt         If given twice, the time printed will include the microseconds.
-o filename Write the trace output to the file filename rather than to stderr. Use filename.pid if -ff
            is used. If the argument begins with `|' or with `!' then the rest of the argument is
            treated as a command and all output is piped to it. This is convenient for piping the
            debugging output to a program without affecting the redirections of executed programs.

Timestamps displayed with the -tt option are not very useful in this example, but they are very useful while debugging application performance issues. -T, which shows the time spent in each system call, is also useful for those issues.

From strace output,

10:26:11.901173 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
10:26:11.901279 connect(3, {sa_family=AF_INET, sin_port=htons(123), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
10:26:11.901413 getsockname(3, {sa_family=AF_INET, sin_port=htons(38673), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
10:26:11.901513 close(3) = 0
10:26:11.901621 socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
10:26:11.901722 connect(3, {sa_family=AF_INET6, sin6_port=htons(123), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable) <—————-
10:26:11.901830 close(3) = 0
10:26:11.901933 socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 3
10:26:11.902033 connect(3, {sa_family=AF_INET, sin_port=htons(123), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
10:26:11.902130 socket(PF_INET6, SOCK_DGRAM, IPPROTO_UDP) = 4
10:26:11.902238 connect(4, {sa_family=AF_INET6, sin6_port=htons(123), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable) <—————-
10:26:11.902355 fstat(1, {st_dev=makedev(0, 11), st_ino=3, st_mode=S_IFCHR|0620, st_nlink=1, st_uid=528, st_gid=5, st_blksize=1024, st_blocks=0, st_rdev=makedev(136, 0), st_atime=2015/02/17-10:26:11, st_mtime=2015/02/17-10:26:11, st_ctime=2015/02/17-10:16:32}) = 0
10:26:11.902490 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc5a8752000
10:26:11.902608 write(1, "can't create socket connection", 30) = 30

Let us have a deeper look,

You can see that socket() opens a datagram socket in the PF_INET (IPv4) domain with IPPROTO_IP (protocol 0, which lets the kernel pick the default protocol for a SOCK_DGRAM socket, i.e. UDP). This returns file descriptor 3. Then connect() connects the socket on that file descriptor to the NTP port (123) on localhost. The plugin then calls getsockname() and closes the file descriptor for the socket.

10:26:11.901173 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
10:26:11.901279 connect(3, {sa_family=AF_INET, sin_port=htons(123), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
10:26:11.901413 getsockname(3, {sa_family=AF_INET, sin_port=htons(38673), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
10:26:11.901513 close(3) = 0

Next it does the same, but in the PF_INET6 (IPv6) domain. This time you can see that connect() fails with ENETUNREACH.

10:26:11.901621 socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
10:26:11.901722 connect(3, {sa_family=AF_INET6, sin6_port=htons(123), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable) <—————-
10:26:11.901830 close(3)

From connect man page,

ENETUNREACH
Network is unreachable.

The same sequence is then repeated with IPPROTO_UDP (UDP) specified explicitly as the protocol.

On checking the system, I see that only IPv4 is enabled; the ‘inet6 addr’ line is missing from the ifconfig output.

[root@pbsftp ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:50:56:90:2E:31
inet addr:xx.xx.xx.xx Bcast:xx.xx.xx.xx Mask:xx.xx.xx.xx <——————–
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5494691 errors:0 dropped:0 overruns:0 frame:0
TX packets:4014672 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5877759230 (5.4 GiB) TX bytes:5608605924 (5.2 GiB)

IPv6 is disabled on the system using the following /etc/sysctl.conf entries.

net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.all.disable_ipv6 = 1

This behavior of the Nagios plugin is wrong, as it should not die when only one of the connect calls fails.
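For comparison, here is a minimal sketch (in Python, purely illustrative; the real plugin is C code and this is not it) of the fallback behaviour a checker like this should implement: try every address family returned for the host and only give up if none of the connect attempts succeeds.

import socket

def can_reach_ntp(host="localhost", port=123):
    """Return True if a UDP socket can be 'connected' to the NTP port
    for at least one address family (IPv4 or IPv6)."""
    last_error = None
    # getaddrinfo returns one entry per usable family/protocol combination
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_DGRAM):
        try:
            s = socket.socket(family, socktype, proto)
            try:
                s.connect(sockaddr)  # for UDP this only records the peer address
                return True          # one working address family is enough
            finally:
                s.close()
        except OSError as exc:       # e.g. ENETUNREACH when IPv6 is disabled
            last_error = exc
    print("can't create socket connection:", last_error)
    return False

print(can_reach_ntp())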

The issue is fixed in an upstream patch.

Enabling IPv6, by removing the following entries from /etc/sysctl.conf and running ‘sysctl -p’, would act as a workaround.

net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.all.disable_ipv6 = 1

To fix the issue properly, the upstream patch needs to be either backported manually into a rebuilt RPM, or a support ticket needs to be opened with the operating system vendor so that they backport the patch in their product release.

Categories: DBA Blogs

Using OEM System Targets with Oracle Utilities Products

Anthony Shorten - Thu, 2015-03-19 17:43

One of the common requirements of an IT group is to track the availability of a system. A system is the total end-to-end architecture for a particular product. All of the components need to be tracked to truly determine whether the system is available or not.

Within Oracle Enterprise Manager it is possible to track each component individually, and it is also possible to create a definition which groups the targets in an architecture together with some availability rules. This is known as a Generic System target within Oracle Enterprise Manager. Using the Oracle Application Management Pack for Oracle Utilities in combination with other targets, it is possible to define such a target to track the overall availability of the Oracle Utilities product.

Here is how you can set it up. Note: The example below uses Oracle Utilities Customer Care And Billing, but this technique applies to any Oracle Utilities product supported by the Oracle Application Management Pack for Oracle Utilities.

  • Logon to Oracle Enterprise Manager using an appropriate administration user.
  • Navigate to the Setup --> Add Target --> Generic System menu option to initiate the maintenance of the Generic System target. For example:

Setup --> Add Target --> Generic System

  • Fill in the appropriate settings for your Generic System.
    • Name the system appropriately for your site.
    • Add comments which are useful for other administrators to understand the target. This might sound trivial, but some comments can help other administrators understand the system represented by this target.
    • The Privilege Propagating System option allows administrators to grant privileges to other administrators in such a way that they receive the same privileges on the member targets. In other words, granting a privilege on a system target may grant the same privileges on the member targets, if Privilege Propagating System is enabled.
    • Time Zone is set for the target just like any other target, for scheduling etc.
    • System Properties allows you to add additional tagging information.
Generic System Information
  • Select the targets to be added to the System definition. At a minimum consider the following:
    • Add the Oracle Utilities Batch Server, Oracle Utilities Web Application and Oracle Utilities Web Services Application to the Generic System. Avoid adding the Oracle Utilities Home and Oracle Utilities System Environment targets as they do not have any availability status.
    • Add the base Fusion Middleware targets including the Oracle WebLogic Domain, Oracle WebLogic Server and Application Deployment targets associated with Oracle Utilities products. Customers using Inbound Web Services should also add that Web Services Application Deployment to track Web Services availability. Oracle WebLogic Cluster targets can be also added if desired.
    • Add the database targets associated with the environment (if you want to track them as well), whether that is the Database Instance target (for non-PDB or Oracle 11 and below targets) or Pluggable Database targets (for PDB implementations in Oracle 12c).
    • Avoid duplication of targets. For example, avoid adding Host targets as the child targets are already covered.
    • If you have other targets in your architecture, such as SOA, then consider adding them as well. Remember that you may only use targets that are included in base Oracle Enterprise Manager or that you have licensed.
Example Target selection
  • Once all the targets are selected, you can proceed to the next stages. For example:

Selecting Targets

  • Confirm the relationships between the targets. Oracle Enterprise Manager will typically default all of it for you but you can add or modify your own relationships as necessary. For example:

Confirming Relationships

  • The next step is to identify the Key Members used to track availability. Targets that are not Key Members are also tracked but do not contribute to the overall system definition of availability. You might ask why we should add them to the definition at all. Well, Generic System targets are used for other purposes (that will be covered in another article). Here are a few guidelines:
    • Select targets that are key to the architecture that will represent the critical points of your architecture.
    • Use parent objects to cover as many child objects as possible. For example, use Oracle WebLogic Server or Oracle WebLogic Cluster targets as Key Members to cover the Applications deployed within those servers/clusters. This keeps the definition compact by minimizing the number of Key Members.
    • Ensure you cover each component of the architecture at least once in your Key Member definition to cover the end to end architecture.
    • Decide whether all Key Member targets, or at least one of them, must be available for the system to be considered available.
    • At a minimum, use the Oracle WebLogic Server target associated with the Oracle Utilities product deployments to cover online and Web Services, at least one Oracle Utilities Batch Server to cover batch, and the database as Key Members for Oracle Utilities products. For example:

Key Targets example

  • The next step allows you to define the metrics and columns to be displayed on the home screen and dashboard for the Generic System target. For the example, the defaults are taken. For example:

Portal Definition

  • Review the definition for completeness. You can alter the definition at any time after saving it or now using the breadcrumbs at the top of the page to navigate to the definition you want to change. For example:

Review the setup

  • The Generic System target is now available for your tracking within Oracle Enterprise Manager. For example:


  • The Generic System target is now also available from the targets page like all other targets. For example:

Example Target on target screen

These targets are very useful and can be defined at a low or high level (and any level in between those extremes). They can be modelled on your IT group or just generically. This technique can combine targets from many different components of the architecture.

    Oracle Priority Support Infogram for 19-MAR-2015

    Oracle Infogram - Thu, 2015-03-19 15:38

    RDBMS
    A new blog I’ve never seen before, Emre Baransel, Support Engineer's Blog, looks good and hands-on. Here are some links:
    TIMESTAMP to DATE Conversion with Online Redefinition
    Standby Database SCN - x$kcvfh
    Auditing on Oracle Database in a Nutshell (11gR2)
    MySQL
    Two from MySQL Enterprise Backup:
    Distributing innodb tables made simpler!
    MySQL Enterprise Backup 3.12.0 has been released
    Java
    WebSocket Client API – Java 8, from Pavel Bucek’s Blog.
    From The Java Source: Java 9 and Beyond
    ADF
    Uploading and showing image file from absolute server path ADF, from WebLogic Partner Community EMEA.
    EPM and BI
    From Business Analytics - Proactive Support:
    Documentation for OBIA 11.1.1.9.1 - Important Information Please Read.
    Patch Set Update: Oracle Hyperion Essbase Family 11.1.2.4
    Patch Set Update: Oracle Hyperion Disclosure Management 11.1.2.3.587
    Oracle BI Applications 11.1.1.9.1 now available for download!
    Tons of good videos on these channels:
    Oracle EPM Webcasts Channel, Oracle EPM & BI Tutorials YouTube Channel, Oracle Learning Library, Cloud Help Center
    From BI & Analytics Pulse: What’s New in Oracle BI Applications 11.1.1.9.1?
    SOA
    Service Bus 12c – Exposing a Pipeline as a REST Service, from the SOA & BPM Partner Community Blog.
    And from New Generation Database Access: OAuth 2.0 Security Concepts: Part I "Two Key Use Cases De-Mystified"
    Eclipse
    REST Service Editor Query Parameters, from the Oracle Enterprise Pack for Eclipse Blog
    Studio C++
    Boost with Oracle Solaris Studio C++ 12.4, from Studio C++ stories.
    EBS
    From the Oracle E-Business Suite Technology blog:

    March 2015 Updates to AD and TXK for EBS 12.2

    QlikView Tips & Tricks - part II

    Yann Neuhaus - Thu, 2015-03-19 15:00

    In a previous blog entry, I started to share some Tips & Tricks about the installation and configuration of some QlikView components/software. I will try to complete that list with some other interesting things to know. In this blog entry, I will explain how to enable the Task Performance Summary, how to debug (or at least enable) the communication with Salesforce, and how to install the QlikView Management Console Connector (QlikView Management API).


    For the whole blog entry, let's define the following values:

    • %SYS_PROFILE% = C:/Windows/System32/config/systemprofile
    • %QV_SRV_HOME% = D:/Apps/QlikView Server (defined during the QlikView Server installation)
    • %DS_DATA_HOME% = D:/QlikView/DistributionService (defined in the QMC: System ˃ Setup ˃ Distribution Services ˃ QDS@hostname ˃ General ˃ Settings for QDS ˃ Application Data Folder)
    • %PROXY_HOST% = your-proxy-url.domain.com
    • %PROXY_PORT% = 11210
    • %PROXY_PORT_S% = 11211
    • %HOST_FQDN% = your-qlikview-hostname.domain.com
    • %HOST_IP% = 160.160.20.20
    • %HOST_ALIAS% = qlikview-alias.domain.com
    • %QV_GRPS% = QV_GRP_SUP + QV_GRP_DEV + QV_GRP_ADM (some groups for QlikView, all under the domain "DOMAIN")
    • %QV_SYS_USER% = QV_SYS_USER (the "DOMAIN" user under which QlikView is running)


    Each time you see one of these parameters or values in the text below, don't forget to replace it with YOUR OWN value. I only list these associations as an example and to help you find a match in your environment.

    I. Task Performance Summary


    Introduced in QlikView 11.2 SR7 and disabled by default, the Task Performance Summary is a new feature of QlikView to analyse the performance of the Tasks (I'm sure you already understood that from the title!). This new feature simply launches a little benchmark during the execution of each task to record some useful data like:

    • Name of the process that ran the task with its PID
    • CPU used by the process (Average, Peak)
    • CPU used by the Windows Server (Peak)
    • Virtual RAM used by the process (Average, Peak)
    • Virtual RAM used by the Windows Server (Peak)
    • Physical RAM used by the process (Average, Peak)
    • Physical RAM used by the Windows Server (Peak)
    • Duration of the reload task


    For debugging, performance analysis and performance improvements, it's pretty cool to have this kind of information. Enabling the Task Performance Summary is quite simple:

    1. Login to the Windows Server with any Administrator account
    2. Open the file: %SYS_PROFILE%/AppData/Roaming/QlikTech/QlikViewBatch/Settings.ini
      1. Add at the end: EnableQVBProcessSummary=1
      2. Add an empty line at the end of the file (VERY IMPORTANT: the last line MUST be an empty line)
    3. Open a command prompt as Administrator and execute the command: "%QV_SRV_HOME%/Distribution Service/qvb.exe"
    4. In the command prompt, execute another command: services.msc
    5. Restart all QlikView Services
    6. Open the folder: %DS_DATA_HOME%/TaskResults/
    7. Refresh the folder's content until there is a new file created (if nothing new appears, force the execution of a QlikView Task)
    8. Open the last created xml file and check that inside there is a line which includes CPU and RAM consumption

    [screenshot: Modification of the Settings.ini file to enable the Task Performance Summary]

    [screenshot: Line added in the TaskResults' xml files by the Task Performance Summary]


    That's it: the Task Performance Summary is now enabled and will record the performance of future task executions.
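    If you do not want to open the xml files by hand (steps 7 and 8 above), a small script can do the check for you. This is only a convenience sketch: it reuses the example %DS_DATA_HOME% value defined at the top of this blog entry, and it simply scans the newest file for the summary line by substring, without assuming anything else about the XML structure.

import glob
import os

# Example value of %DS_DATA_HOME%/TaskResults from the list above -- replace with your own
TASK_RESULTS = "D:/QlikView/DistributionService/TaskResults"

# Pick the most recently modified xml file in the TaskResults folder
latest = max(glob.glob(os.path.join(TASK_RESULTS, "*.xml")), key=os.path.getmtime)
print("Checking", latest)

# Print any line that looks like the Task Performance Summary entry
with open(latest, encoding="utf-8", errors="replace") as f:
    for line in f:
        if "CPU" in line or "RAM" in line:
            print(line.strip())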

    II. Communication with Salesforce


    If it's your first time with the QlikView Salesforce Connector, then there is one thing that is essential to understand: the QlikView Salesforce Connector requires internet access to work. To be more precise, the connector has to be able to access the Salesforce website to retrieve some data models that will be used by QlikView for any communication with a Salesforce Data Source. Therefore, if your enterprise network uses a proxy or anything else to prevent or restrict access to the internet, then this will need to be addressed.


    [screenshot: Log file generated during a task execution when the proxy configuration isn't done]

    [screenshot: Log file generated during a task execution when the proxy configuration is properly done]

    In this section, I will just describe how to configure your Windows Server to allow communication with the Salesforce website by configuring the proxy settings:

    1. The QlikView Salesforce Connector must be properly installed
    2. Login to the Windows Server with the account under which QlikView is running
    3. Open: Internet Explorer ˃ Internet Options ˃ Connections ˃ LAN settings
      1. Click on: Use a proxy server for your LAN
      2. Click on: Bypass proxy server for local addresses
      3. Click on: Advanced
      4. HTTP: Address = %PROXY_HOST%
      5. HTTP: Port = %PROXY_PORT%
      6. Secure: Address = %PROXY_HOST%
      7. Secure: Port = %PROXY_PORT_S%
      8. Exceptions = 127.0.0.1;localhost;%HOST_FQDN%;%HOST_IP%;%HOST_ALIAS%
    4. Click on: OK (3 times)

    [screenshot: Configuration of the proxy in the Internet Explorer's options]


    After this modification, the communication and therefore the reload of QlikView Documents using Salesforce as a Data Source should be successful.
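    Before re-running a reload task, you can also verify from the server itself that the proxy really lets you out to Salesforce. The check below is just a sketch using Python's standard library; it reuses the placeholder proxy values defined at the beginning of this blog entry, and login.salesforce.com is only used as a reachability target.

import urllib.error
import urllib.request

# Placeholder values from the list at the top of this post -- replace with your own
HTTP_PROXY  = "http://your-proxy-url.domain.com:11210"
HTTPS_PROXY = "http://your-proxy-url.domain.com:11211"

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": HTTP_PROXY, "https": HTTPS_PROXY}))

try:
    with opener.open("https://login.salesforce.com", timeout=15) as resp:
        print("Salesforce reachable through the proxy, HTTP status:", resp.status)
except urllib.error.HTTPError as exc:
    # An HTTP error page still proves that the proxy let the request out
    print("Salesforce reachable (HTTP error page):", exc.code)
except OSError as exc:
    print("Salesforce NOT reachable through the proxy:", exc)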

    III. QMC Connector


    In QlikView by default, the license management through the QlikView Management Console is quite difficult... That's why the QlikView Community is really active on this topic and some solutions have been developed to easily manage the QlikView licenses. The most common solution is to use the QMC Connector. This connector will use the QlikView Management API to manage a lot of things directly from a QlikView Document. The installation of this connector is quite easy:

    1. Download the QlikView Connector: QVSManager.zip
    2. Login to the Windows Server with any Administrator account
    3. Extract the QVSManager zip files into: C:/Program Files/Common Files/QlikTech/Custom Data/. This will create the folder "QVSManager" and in this folder, there should be 4 files
    4. Create a Windows local group:
      1. Open a command prompt as Administrator and execute: compmgmt.msc
      2. Open: System Tools ˃ Local Users and Groups ˃ Groups
      3. Create a new group with the following entries:
        1. Name = QlikView Management API
        2. Description = QlikView Management API
        3. Members = %QV_GRPS% + %QV_SYS_USER%
      4. Click on: Create
    5. Restart all QlikView Services
    6. Deploy the QlikView Document: QVSCALManager.qvw (I put it in a zip file for convenience)

    [screenshot: Deployment of the QVSManager.zip file in the "Custom Data" directory of QlikView]

    [screenshot: Creation and configuration of the Windows Server's local group for the QVSManager]

    Once this new QlikView Document is ready (document deployed, new task created and scheduled, and so on), you should be able to see it in the QlikView AccessPoint and manage the QlikView licenses directly from this new application. Believe me, it will greatly facilitate your work!


    Okay, that was my second blog entry about QlikView, I guess that's enough for now :). I hope some of you found these tips useful; if needed, don't hesitate to leave me a little comment below and I'll do my best to help you. See you soon!

    Might need to use Tunneling with Discoverer 11g

    Michael Armstrong-Smith - Thu, 2015-03-19 13:40
    I have noticed a few instances recently of Discoverer 11g Plus failing to open or taking an awfully long time to open. In both of the cases where this has been reported to me by my clients, changing the Plus communication protocol from Default to Tunneling did the trick.

    To enable tunneling for use with Discoverer Plus, use this workflow:

    1. Launch Enterprise Manager using something like: http://server.domain.com:7002/em
    2. Enter your Username and Password. Username is typically Weblogic
    3. Under Farm, on left-hand side, expand Discoverer and Discoverer(11.1.1.x.0)
    4. In the Components window, highlight Discoverer Plus then click the Configure button
    5. In the Communication Protocols window, click the Tunneling radio button (see below)
    6. Click the Apply button
    7. Shut Down the Discoverer service from the top link by clicking on Discoverer | Control | Shut Down - confirm the action
    8. Restart the Discoverer service from the top link by clicking on Discoverer | Control | Start Up - confirm the action (sometimes you have to do this twice)


     

    The Secret Feature of Bucketsets in Oracle Business Rules

    Jan Kettenis - Thu, 2015-03-19 12:42
    In Oracle Business Rules one can use so-called "Bucketsets". I never liked the term as it is not in the dictionary (did you mean bucketseat?), and never understood what is wrong with "list of values" (LoV) as that is what it is.

    See for example the following bucketset that defines a list of values to be used for some status field:


    Anyway, bucketsets are typically used in decision tables to define the set of values to be used in conditions. Unfortunately you cannot use them to define the set of values to be used in actions. At least, that is what the UI seems to suggest, as bucketsets do not appear in the values you can choose from.


    But don't get fooled, you are being misguided, as it works after all. Just type it in as [bucketset_name].[value_name] and be surprised that it works!


    Funny enough this is not the case for IF-THEN rules.


     Let's call it room for improvement.

    Time to Update the Feed

    Oracle AppsLab - Thu, 2015-03-19 11:40

    For those of you who enjoy our content via the feed (thank you), I have news.

    Next week, I’ll be changing the feed’s name, so if you want to continue to receive AppsLab goodness in your feed reader of choice or in your inbox, you’ll need to come back here and subscribe again.

    Or maybe it’s time to switch over to our Twitter (@theappslab) or Facebook Page, if that’s your thing. I did nuke the Google+ Page, but I doubt anyone will notice it’s gone.

    Nothing else has changed.

    Our Policy on Cookies and Tracking

    Michael Feldstein - Thu, 2015-03-19 10:00

    By Michael FeldsteinMore Posts (1024)

    In the wake of the Pearson social media monitoring controversy, edubloggers like Audrey Watters and D’arcy Norman have announced their policies regarding code that can potentially track users on their blogs. This is a good idea, so we are following their example.

    We use Google Analytics and WordPress analytics on both e-Literate and e-Literate TV. The main reason we do so is that we believe the information these packages provide helps us create more useful content. Even after a decade of blogging, we are still surprised sometimes by which posts earn your attention and which ones don’t. We look at our analytics results fairly regularly to see what we can learn about writing more content that you find to be worth your time. This is by no means the only or even the main way that we decide what we will write, but we think of it as one of relatively few clues we have to understand which posts and topics will have the most value to you. We do not run ads and have no intention of doing so in the future. In the case of e-Literate TV, where the content is expensive to make, we may also use information regarding the number of viewers of the episodes in the future to demonstrate to sponsors that our content is having an impact. We make no effort to track individuals and, in fact, have always had a policy of letting our readers comment on posts without registering on the site. But Google in particular is likely making more extensive use of the usage data that they gather.

    In addition to the two analytics packages mentioned above, we do embed YouTube videos and use social media buttons, which may carry their own tracking code with them from the companies that supply them. Unfortunately, this is just part of the deal with embedding YouTube videos or adding convenient “Tweet this” links. The tracking code (which usually, but not always, means the same thing as “cookies”) on our site is pretty typical for what you will find for any site that provides these sorts of conveniences.

    But that doesn’t mean that you have to allow yourself to be tracked if you prefer not to be. There are a number of excellent anti-tracking plugins available for the mainstream browsers, including Ghostery and Disconnect. If you are concerned about being tracked (here or anywhere), then we recommend installing one or more of these plugins, and we also recommend spending a little time to learn how they work and what sorts of tracking code are embedded on the different sites you visit so that you can make informed and fine-grained decisions about what information you do and do not want to share. These tools often let you make service-by-service and site-by-site decisions, but they generally start with the default of protecting your privacy by blocking everything.

    To sum up and clarify our privacy policies:

    • We do use Google Analytics and WordPress analytics.
    • We do embed social media tools that in some cases carry their own tracking code.
    • We do not make any effort to track individuals on our sites.
    • We do not use or plan to use analytics for ads or in any way sell the information from our analytics to third parties, including but not limited to ads.
    • We may in the future provide high-level summaries of site traffic and video views to e-Literate TV sponsors.
    • We do support commenting on blog posts without registration.[1]
    • We do provide our full posts in our RSS feed, which excludes most (but not all) tracking code.
    • We do provide CC-BY licensing on our content so that it can be used on other sites, including ones that do not have any tracking code.
    1. Note: We do require an email address from commenters for the sole purpose of providing us with a means of contacting the poster in the event that the person has written something uncivil or marginally inappropriate and we need to discuss the matter with that person privately before deciding what to do about moderation. In the 10-year history of e-Literate, this has happened about three or four times. There are two differences relevant to reader privacy between requiring the email address and requiring registration. First, we allow people to use multiple email addresses or even temporary email addresses if they do not wish that email to be personally identifiable. We only require that the email address be a working address. Second and probably more importantly, without registration, there is no mechanism to link comments to browsing behavior on the site.

    The post Our Policy on Cookies and Tracking appeared first on e-Literate.

    Which PeopleTools version goes with which Application?

    Duncan Davies - Thu, 2015-03-19 06:00

    If you’re ever in the situation where you’re wondering what your options are with PeopleTools versions, and which applications they can be used with, then we have just the document for you.

    It shows you:

    – the Applications, their release dates, and – more critically – the dates that Premier and Extended Support ends

    – the PeopleTools versions, and the date that support ends (yes, 8.52 is now out of support)

    – the combinations of the two, for instance, can you go to Tools 8.53 with a v8.9 application? do we believe that PeopleTools 8.55 will support v9.1 applications?

    – what this means for Fluid. For example, if you’re on v9.1 but upgrade to PeopleTools 8.54, what Fluid functionality do you receive?

    Check out the White Paper here:

    http://www.cedarconsulting.co.uk/news-details/March-2015-New-White-Paper-PeopleSoft-Version-Compatibility/index.html

     

     


    SQL Server: Change Data Capture for Oracle

    Yann Neuhaus - Thu, 2015-03-19 02:55

    This new feature was introduced in SQL Server 2012 and requires Enterprise Edition. It uses a Windows service which scans Oracle logs and tracks DML changes from Oracle tables into SQL Server change tables.
    In other words, Change Data Capture records the insert, update and delete activity that is applied to Oracle tables.
    Let's see how it works.

    Prerequisites

    The database where you want to capture changes must be in ARCHIVELOG mode and OPEN.
    The user which will be used to connect to the Oracle database must have DBA privileges.

    Change Data Capture services installation

    Installation is done via msi packages. Those packages are not installed automatically with SQL Server 2012 or 2014. You will find them on your installation media under Tools\AttunityCDCOracle\x64\1033.
    There are two msi packages, one for the CDC Service Configuration named AttunityOracleCdcService.msi and another for CDC Designer named AttunityOracleCdcDesigner.msi.

    Double click on AttunityOracleCdcService.msi and install the package, same for AttunityOracleCdcDesigner.msi.

    [screenshot]

    [screenshot]

    When both installations are done, go to the Application Panel and launch “Oracle CDC Service Configuration”.

    [screenshot]

    We will now have to prepare our SQL Server instance to use it. Each Oracle CDC Service instance relies on a single SQL Server instance which will be used to manage it.
    To create this instance, click on "Prepare SQL Server" or right-click on Local CDC Service and select "Prepare SQL Server":

    [screenshot]

    Once it is done select the SQL Server instance where you want to install the MSXDBCDC database and click on the Run button:

    [screenshot]

    The database is created:

    [screenshot]

    Let's check from SQL Server Management Studio the newly created database:

    [screenshot]

    At this point we can create a new CDC service:

    [screenshot]

    To create the windows service we have to provide the following information:

    • The Service Name
    • Use a local system account for the service account

    In addition, we must provide the following information for the associated SQL Server instance:

    • Server name
    • Authentication and login

    The next step consists of creating a master password for the CDC service, which will be used to create a symmetric key.

    [screenshot]

    After that we will create an instance from the CDC Designer.
    When I open it, I have to enter the SQL Server instance I created before, which is associated with the CDC service. Credentials are required to connect to the concerned SQL Server instance.

    [screenshot]

    After connecting to SQL Server, I can see all the related Oracle CDC Services. Next, after selecting OracleCDCService1, I will create a new instance for this service:

    [screenshot]

    Provide a name for the future CDC instance and then create the associated change database:

    [screenshot]

    After clicking on the "Create Database" button, the wizard will assist us in creating the new Oracle CDC instance and the change database.

    [screenshot]

    Click Next.
    We now have to fill out the form with the following information:

    • the Oracle connection string to our listener
    • user name
    • password

    [screenshot]

    Check if the connection to the source is successful:

    [screenshot]

    Click Next and then click on the Add button to select tables and columns for capturing changes.

    [screenshot]

    Select the schema and click on the Search button.
    Select the SCOTT.EMP table and click on the Add button.

    [screenshot]

    A message informs us that the SCOTT_EMP table has been added to the list:

    [screenshot]

    Now, you can see the table in the list:

    [screenshot]

    The Capture instance column, here SCOTT_EMP, will be used to name the capture instance specific to each table object in my SQL Server database.

    At this point, no CDC gating role is specified, which means that no gating role will be used to limit access to the change data.

    By default all the columns of the SCOTT.EMP table are selected for CDC. If I want to select just some columns, I can click on the Edit button and choose the columns I want in CDC.
    Click on OK when your selection is finished and then click on Next:

    [screenshot]

    To be able to capture Oracle changes, supplemental logging has to be set up for the Oracle database tables. For this purpose, a script has been generated automatically; it can be run immediately or later, but be aware that changes will not be captured until the script is executed.
    Click on the “Run Script” button:

    [screenshot]

    In order to run the script, some credentials must be provided; then click on the Run button:

    [screenshot]

    The script has been executed with success:

    [screenshot]

    Click on Next and in the last screen click on the “Run” button to finalize the CDC process:

    [screenshot]

    It looks like I have some errors...

    [screenshot]

    Let’s click on Details:

    [screenshot]

    Apparently I forgot to set up my Oracle database for supplemental logging, let’s do it:

    [screenshot]
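    (For reference: minimal database-level supplemental logging is enabled in Oracle with the standard statement ALTER DATABASE ADD SUPPLEMENTAL LOG DATA, run as a privileged user.)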

    Now, we can click on the Next button:

    [screenshot]

    We have successfully created an Oracle CDC instance. Let’s check on my SQL Server:

    [screenshot]

    Conceptually, the mirror table will always be empty; the generated deployment script denies all DML permissions on the mirror table.

    I have a new database named OracleCDCInstance1 with, for the moment, an empty table named SCOTT.EMP, which is the table I selected earlier from the wizard.

    Back in my CDC Designer, I see that I now have an OracleCDCInstance1, which for the moment is not started:

    [screenshot]

    Now let's start the instance:

    [screenshot]

    The Detailed status has changed from Initial to IDLE:

    [screenshot]

    It may take a few minutes (one or two) to start the change capture process. You may notice the detailed status change from IDLE to PROCESSING. Likewise, the Processing and Counters areas will also change as the Oracle logs are read.
    Here I performed an update of the salary column in my table SCOTT.EMP in my Oracle database:

    [screenshot]

    We can see that the twelve operations are reflected in the counters area as twelve reads:

    [screenshot]

    If the Detailed Status changes to LOGGER instead of PROCESSING, it means either that there is a temporary delay in mining the Oracle logs, in which case PROCESSING will come back quickly, or that there is a real problem mining the Oracle logs, in which case check the dbo.xdbcdc_trace table in the MSXDBCDC database for errors.

    We are also able to check information from the log traces by clicking on the "Collect diagnostics" link, which will generate diagnostics data from both the Oracle environment and the dbo.xdbcdc_trace table into a trace file:

    [screenshot]
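    Once changes are being captured, you can query them from the change database like any other SQL Server data. The example below is only a sketch: it assumes the change table follows the usual SQL Server CDC naming convention (cdc.<capture_instance>_CT, so cdc.SCOTT_EMP_CT for the capture instance created above); check your change database for the exact object name, and replace the hypothetical connection values with your own.

import pyodbc  # assumes pyodbc and a SQL Server ODBC driver are installed

# Hypothetical connection values -- replace with your own server and credentials
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=OracleCDCInstance1;Trusted_Connection=yes;")

# __$operation is the standard CDC metadata column:
# 1 = delete, 2 = insert, 3 = update (before image), 4 = update (after image)
rows = conn.execute(
    "SELECT __$operation, EMPNO, ENAME, SAL "
    "FROM cdc.SCOTT_EMP_CT ORDER BY __$start_lsn").fetchall()

for operation, empno, ename, sal in rows:
    print(operation, empno, ename, sal)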

    This feature gives us the opportunity to use Change Data Capture in a SQL Server database with an Oracle database as the source. A major advantage of using CDC for Oracle is certainly the reduction of data-level latency, and at the same time no Oracle ETL development is required. It is also a good way to feed a SQL Server database with Oracle data.
    Hope this article will help you, enjoy ;-)

    Back To The Future: Looking at LMS forecasts from 2011 – 2014

    Michael Feldstein - Wed, 2015-03-18 18:11

    By Phil HillMore Posts (303)

    At today’s Learning Analytics and Knowledge 2015 conference (#LAK15), Charles Severance (aka Dr. Chuck) gave the morning keynote organized around the theme of going back in time to see what people (myself and Richard Katz primarily) were forecasting for education. By looking at the reality of 2015, we can see which forecasts were on track and which were not. I like this concept, as it is useful to go back and see what we got right and wrong, so this post is meant to provide some additional context particularly for LMS market. Chuck’s keynote also gives cover for doing so without seeming too self-absorbed.

    But enough about me. What do you think about me?

    I use the term forecast since I tend to describe patterns and trends and then try to describe the implications. This is different than the Katz video which aimed to make specific predictions as a thought-provoking device.

    Pre-2011

    I introduced the LMS squid diagram in 2008 as a tool to help people see the LMS market holistically rather than focusing on detailed features. Too many campus evaluations then (and even now) missed the big picture that there were only a handful of vendors and some significant market dynamics at play.

    A 2009 presentation, by the way, was the basis for Michael and me connecting for the first time. Bromance.

    2011

    In early 2011 I wrote a post on Visigoths at the LMS Gates, noting:

    I am less inclined to rely on straight-line projections of market data to look ahead, and am more inclined to think the market changes we are seeing are driven by outside forces with potentially nonlinear effects. Rome may have been weakened from within, but when real change happened, the Visigoths made it happen. [snip]

    Today, there is a flood of new money into the educational technology market. In addition to the potential acquisition of Blackboard, Instructure just raised $8M in venture funding and vying for the role of Alaric in their marketing position, Pearson has been heavily investing in Learning Studio (eCollege for you old-timers), and Moodlerooms raised $7+M in venture funding. Publishing companies, ERP vendors, private equity, venture funding – these are major disruptive forces. And there is still significant moves being made by technology companies such as Google.

    In August I started blogging at e-Literate with this post on Emerging Trends in LMS / Ed Tech Market. The trends I described (summary here, see post for full description):

    From my viewpoint in 2011, the market has essentially moved beyond Blackboard as the dominant player driving most of the market dynamics.

    • The market is more competitive, with more options, than it has been for years.
    • Related to the above, there is a trend towards software as a service (SaaS) models for new LMS solutions.
    • Also related to the above, the market is demanding and getting real Web 2.0 and Web 3.0 advances in LMS user interfaces and functionality. We are starting to see some real improvements in usability in the LMS market.
    • The lines are blurring between content delivery systems (e.g. Cengage MindTap, Pearson MyLabs, etc) and LMS.
    • Along those same lines, it is also interesting in what is not being seen as a strategic blurring of lines – between LMS and student information systems.
    • Analytics and data reporting are not just aspirational goals for LMS deployments, but real requirements driven by real deadlines.

    Looking back at the 2011 posts, I would note the following:

    • I think all of the basic trends have proven to be accurate, although I over-stated the analytics importance of “real requirements driven by real deadlines”. Analytics are important and some schools have real requirements, but for most schools analytics is not far beyond “aspirational goals”.
    • Chuck over-interpreted the “it’s all about MyLabs”. The real point is the blurring of lines between previously distinct categories of delivery platforms and digital content. I would argue that the courseware movement as well as most CBE platforms shows this impact in 2015. MyLabs was just an example in the graphic.
    • My main message about outside forces was that the internal players (Blackboard, Desire2Learn, Moodle, etc) were not going to be the source of change, rather “new competitors and new dynamics” would force change. Through the graphic, I over-emphasized the ERP and big tech players (Oracle, Google, Pearson & eCollege, etc) while I under-emphasized Instructure, which has proven to be the biggest source of change (although driven by VC funding).
    • I still like the Rome / Visigoths / Alaric metaphor.
    2012

    In early 2012 I had a post Farewell to the Enterprise LMS, Greetings to the Learning Platform that formed the basis of the forecasts Chuck commented on in the LAK15 keynote.

    In my opinion, when we look back on market changes, 2011 will stand out as the year when the LMS market passed the point of no return and changed forever. What we are now seeing are some real signs of what the future market will look like, and the actual definition of the market is changing. We are going from an enterprise LMS market to a learning platform market.

    In a second post I defined the characteristics of a Learning Platform (or what I meant by the term):

    1. Learning Platforms are next-generation technology compared to legacy LMS solutions arising in the late 1990’s / early 2000’s. While many features are shared between legacy LMS and learning platforms, the core designs are not constrained by the course-centric, walled-garden approach pioneered by earlier generations.
    2. Learning Platforms tend to be SaaS (software as a service) offerings, based in a public or private cloud on multi-tenant designs. Rather than being viewed as an enterprise application to be set up as a customized instance for each institution, there is a shared platform that supports multiple customers, leveraging a shared technology stack, database, and application web services.
    3. Learning Platforms are intended to support and interoperate with multiple learning and social applications, and not just as extensions to the enterprise system, but as a core design consideration.
    4. Learning Platforms are designed around the learner, giving a sense of identity that is maintained throughout the learning lifecycle. Learners are not just pre-defined roles with access levels within each course, but central actors in the system design.
    5. Learning Platforms therefore are social in nature, supporting connections between learners and customization of content based on learner needs.
    6. Learning Platforms include built-in analytics based on the amalgamation of learner data across courses, across institutions, and even beyond institutions.
    7. Learning Platforms allow for the discovery of instructional content, user-generated content, and of other learners.

    Going back to the Farewell post, the forecast was:

    Another trend that is becoming apparent is that many of the new offerings are not attempting to fully replace the legacy LMS, at least all at once. Rather than competing with all of the possible features that are typical in enterprise LMS solutions, the new platforms appear to target specific institutional problems and offer only the features needed. Perhaps inspired by Apple’s success in offering elegant solutions at the expense of offering all the features, or perhaps inspired by Clayton Christensen’s disruptive innovation model, the new learning platform providers are perfectly willing to say ‘no – we just don’t offer this feature or that feature’.

    Looking back at the 2012 posts, I would note the following:

    • I still see the move from enterprise LMS to learning platform, but it is happening slower than I might have thought and more unevenly. The attributes of SaaS and fewer features have happened (witness Canvas in particular), and the interoperability capabilities are occurring (with special thanks to Chuck and his work with IMS developing LTI). However, the adoption and true usage of multiple learning and social applications connected through the platform is quite slow.
    • The attributes of learner-centric design and built-in analytics can be seen in many of the CBE platforms, but not really in the general LMS market itself.
    2013 / 2014

    In 2013 and 2014 I updated the LMS squid graphic.

     

    • Chuck was right to point out the revision that I no longer included the outside forces of ERP & big tech. The key point of 2011 forecasts was outside forces making changes, but by 2013 it was clear that ERP & big tech were not part of this change.
    • There is also a big addition of homegrown solutions, or alternative learning platforms, that is worth noting. The entrance of so many new CBE platforms designed from the ground up for specific purposes is an example of this trend.
    Overall Notes

    Thanks to Chuck, it has been informative (to me, at least) to go back and review forecasts and see what I got right and what I got wrong. Chuck's general point on my forecasts seems to be that I am over-emphasizing the emergence of learning platforms, at least as a distinct category from the enterprise LMS, and that we are still seeing an LMS market, although with changed internals (fewer features, more interoperability). I don't disagree with this point (if I am summarizing accurately). However, if you read the actual forecasts above, I don't think Chuck and I are too far apart. I may be more optimistic than he is and need to clarify my terminology somewhat, but we're in the same ball park.

    Now let’s turn the tables. My main critique with Dr. Chuck’s keynote is that he just didn’t commit on the song. We know he is willing to boldly sing, after all (skip ahead to 1:29).

    [embedded video]

    Update: Clarified language on LTI spec

    The post Back To The Future: Looking at LMS forecasts from 2011 – 2014 appeared first on e-Literate.

    SQL Developer 3 and Timestamp Conversion

    Dominic Brooks - Wed, 2015-03-18 11:58

    I shouldn’t be using an old version of SQL Developer like v3 (3.2.10.09) anyway but… cut a long story short … I am.
    I’ve also got v4, which doesn't display the same problem.

    Just pointing out an oddity whilst investigating something for someone:

    with x as 
    (select to_date('28-MAR-2015 01:30','DD-MON-YYYY HH24:MI') dt1
     ,      to_date('29-MAR-2015 01:30','DD-MON-YYYY HH24:MI') dt2
     from   dual)
    select sessiontimezone
    ,      dt2 "date"
    ,      cast(dt2 as timestamp) "timestamp"
    ,      to_Char(cast(dt2 as timestamp),'DD-MON-YYYY HH24:MI') "string"
    from   x;
    

    Run as Script (F5):

    SESSIONTIMEZONE date               timestamp                       string                   
    --------------- ------------------ ------------------------------- -----------------
    Europe/London   29-MAR-2015 01:30  29-MAR-2015 01.30.00.000000000 29-MAR-2015 01:30
    

    Run Statement (F9):

    Europe/London   29-MAR-2015 01:30   29-MAR-2015 02.30.00.000000000 29-MAR-2015 01:30
    

    Spot the oddity in column “timestamp” in the “Run Statement (F9)” output.

    There is something dodgy going on related to timestamp display.

    Colleagues in other regions have not had the same problem so I wonder if some environment variable is/isn’t getting passed along for one of these tool/driver code paths.

    I’ve seen dodgy timestamp behaviour before when trying to run dbms_xplan.display_cursor(null,null) in SQL Developer and getting the plan not of my target statement but of some timestamp lookup somewhere – either tool or driver.

    For example, if I run the statement above together with a dbms_xplan select but this time via “Run As Script (F5)”, e.g.

    with x as 
    (select to_date('28-MAR-2015 01:30','DD-MON-YYYY HH24:MI') dt1
     ,      to_date('29-MAR-2015 01:30','DD-MON-YYYY HH24:MI') dt2
     from   dual)
    select sessiontimezone
    ,      dt2 "date"
    ,      cast(dt2 as timestamp) "timestamp"
    ,      to_Char(cast(dt2 as timestamp),'DD-MON-YYYY HH24:MI') "string"
    from   x;
    
    select * from table(dbms_xplan.display_cursor(null,null));
    

    Then apparently the last statement run in my session (the meaning of the first null parameter to display_cursor) is:

    SQL_ID  0gzt83m5pxufx, child number 0 
    ------------------------------------- 
    /*+ NO_SQL_TRANSLATION */SELECT VALUE FROM V$NLS_PARAMETERS WHERE PARAMETER = 'NLS_TIMESTAMP_FORMAT' 
     
    Plan hash value: 1805486652 
     
    -------------------------------------------------------------------------- 
    | Id  | Operation        | Name             | Rows  | Bytes | Cost (%CPU)| 
    -------------------------------------------------------------------------- 
    |   0 | SELECT STATEMENT |                  |       |       |     1 (100)| 
    |*  1 |  FIXED TABLE FULL| X$NLS_PARAMETERS |     1 |    31 |     0   (0)| 
    -------------------------------------------------------------------------- 
     
    Predicate Information (identified by operation id): 
    --------------------------------------------------- 
     
       1 - filter(("PARAMETER"='NLS_TIMESTAMP_FORMAT' AND 
                  "PARAMETER"<>'NLS_SPECIAL_CHARS' AND "INST_ID"=USERENV('INSTANCE')))
    

    Unfortunately, this could all largely be irrelevant to the OP’s question which was why does the following code return the wrong result:

    select CAST(FROM_TZ( TO_TIMESTAMP('29-MAR-15 16:30:00','DD-MON-YY HH24:MI:SS') , TO_CHAR('Australia/Sydney')) AT TIME ZONE 'America/New_York' AS timestamp)
    from dual;
    
    29-MAR-2015 02.30.00.000000000
    

    But if we run this:

    select CAST(FROM_TZ( TO_TIMESTAMP('29-MAR-15 16:30:00','DD-MON-YY HH24:MI:SS') , TO_CHAR('Australia/Sydney')) AT TIME ZONE 'America/New_York' AS timestamp)
    ,      to_char(CAST(FROM_TZ( TO_TIMESTAMP('29-MAR-15 16:30:00','DD-MON-YY HH24:MI:SS') , TO_CHAR('Australia/Sydney')) AT TIME ZONE 'America/New_York' AS timestamp),'DD-MON-YYYY HH24:MI')
    from dual;
    
    29-MAR-2015 02.30.00.000000000	29-MAR-2015 01:30
    

    Then perhaps it suggests that this is just a tool or JDBC error and not something related to DST time zone data files, etc.?
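
    For completeness, a quick way to rule the server-side time zone data in or out (a sketch; V$TIMEZONE_FILE shows which DST data file the database is using):

    -- Sketch: check the database time zone and the time zone data file version
    select dbtimezone from dual;
    select * from v$timezone_file;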

    Also, there’s something about timestamps with (local) time zones which always causes trouble and which you’re forced to relearn every time… maybe it’s just me?


    OAUX Tidbits

    Oracle AppsLab - Wed, 2015-03-18 10:32

    Here come some rapid fire tidbits about upcoming and recently past Oracle Applications User Experience (@usableapps) events.

    Events of the Near Past

    Laurie Pattison’s (@lsptahoe) team (@InnovateOracle) has been organizing events focused on stimulating and fostering innovation for quite some time now.

    I’ve always been a big fan of group-think-and-work exercises, e.g. design jams, hackathons, ShipIts, code sprints, etc.

    Our team frequently participates in and supports these events, e.g. Tony O was on a team that won a couple of awards at the Future of Information design jam back in early February, and John and Julia served as mentors at the Visualizing Information design jam a few weeks ago.

    You may recall Julia’s visualization analysis and thinking; John has an equally informative presentation, not yet shared here, but we can hope.

    Watch Laurie’s blog for information about more innovation events.

    Events of the Near Future

    It’s conference season again, and we’ll be bouncing around the globe spreading our emerging technologies user experience goodness.

    Fresh off a hot session at UTOUG (h/t OG Friend of the ‘Lab Floyd) and gadget-hounding at SXSW Interactive, Noel (@noelportugal) will be in Eindhoven, the Netherlands for the Oracle Benelux User Group Experience Day, March 23 and 24.

    Our fearless leader, Jeremy Ashley (@jrwashley) will be there as well giving the opening keynote. Bob Rhubart (@OTNArchBeat) recorded a video to tell you all about that. Check it out here:

    While Noel enjoys Europe, I’ll be in Washington D.C. speaking at Oracle HCM World, along with Thao and Ben.

    After that, we’ll have boots on the ground at Oracle Modern CX and Collaborate 15 in Las Vegas. Stay tuned for more, or if you’ll be at any conferences during Conference Season 2015 and wonder if OAUX will be there, check out our Events page.

    Update: Here’s what OAUX will be doing at Collaborate 15. If you’re attending, come by and say hello.

    Parallel Execution -- 2c PX Servers

    Hemant K Chitale - Wed, 2015-03-18 09:40
    Adding to my two previous posts here and here about identifying usage of Parallel Execution -- exactly how many PX servers were used for a query -- here is a third method (the first two being V$PX_PROCESS/V$PX_SESSION and V$SQLSTATS.PX_SERVERS_EXECUTIONS). This method uses V$PQ_SESSTAT.

    However, the limitation of V$PQ_SESSTAT is that it can only be queried from the same session as that which ran the Parallel Query.  The other two methods can be used by a separate "monitoring" session.
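
    For reference, a rough sketch of what those two monitoring-session methods look like (run from a separate session while the parallel query is executing for V$PX_SESSION, or afterwards for V$SQLSTATS):

    -- Sketch: PX servers currently attached to each query coordinator
    select qcsid, qcinst_id, count(*) px_servers
    from   v$px_session
    where  sid <> qcsid
    group  by qcsid, qcinst_id;

    -- Sketch: PX server executions accumulated per statement
    select sql_id, executions, px_servers_executions
    from   v$sqlstats
    where  px_servers_executions > 0;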

    HEMANT>show parameter parallel_max

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    parallel_max_servers                 integer     8
    HEMANT>connect / as sysdba
    Connected.
    SYS>alter system set parallel_max_servers=64;

    System altered.

    SYS>show parameter cpu

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    cpu_count                            integer     4
    parallel_threads_per_cpu             integer     4
    resource_manager_cpu_allocation      integer     4
    SYS>show parameter parallel_degree_policy

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    parallel_degree_policy               string      MANUAL
    SYS>



    As has been identified earlier, the PARALLEL hint will use 16 PX servers (CPU_COUNT x PARALLEL_THREADS_PER_CPU = 4 x 4), limited by PARALLEL_MAX_SERVERS [see this post], since PARALLEL_DEGREE_POLICY is still MANUAL in this case.

    SYS>alter system flush shared_pool;

    System altered.

    SYS>connect hemant/hemant
    Connected.
    HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

    COUNT(*)
    ----------
    4802944

    HEMANT>select * from v$pq_sesstat;

    STATISTIC                      LAST_QUERY SESSION_TOTAL
    ------------------------------ ---------- -------------
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                         16             0
    Allocation Height                      16             0
    Allocation Width                        1             0
    Local Msgs Sent                       464           464
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                     464           464
    Distr Msgs Recv'd                       0             0

    11 rows selected.

    HEMANT>
    HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

    COUNT(*)
    ----------
    4802944

    HEMANT>select * from v$pq_sesstat;

    STATISTIC                      LAST_QUERY SESSION_TOTAL
    ------------------------------ ---------- -------------
    Queries Parallelized                    1             2
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             2
    Server Threads                         16             0
    Allocation Height                      16             0
    Allocation Width                        1             0
    Local Msgs Sent                       464           928
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                     464           928
    Distr Msgs Recv'd                       0             0

    11 rows selected.

    HEMANT>

    As we can see, the SESSION_TOTAL count of Server Threads does not get updated (although the count of Queries Parallelized is updated). This behaviour remains in 12c. (However, there are two additional statistics available in 12c.)
    .
    .
    .
    Categories: DBA Blogs

    A funny story with recompile hint at statement level

    Yann Neuhaus - Wed, 2015-03-18 09:31

    Last night, I had an interesting discussion with a French MVP friend of mine who faced a weird situation where a query using a statement-level RECOMPILE hint produced an execution plan that disappointed him. He told me we can simulate the same situation by using the AdventureWorks database and the Sales.SalesOrderHeader table.

    First, we have to add a nonclustered index on the orderdate column as follows:

     

    CREATE NONCLUSTERED INDEX [idx_sales_salesorderheader_orderdate]
    ON [Sales].[SalesOrderHeader] ( [OrderDate] ASC )

     

    Now, let me show you the query. In fact, we will compare two queries and two potential behaviours. The first query uses a local variable and an inequality operator, while the second query is pretty much the same except that we add the RECOMPILE hint option.

     

    declare @date datetime = getdate()

    select top 1 SalesOrderID
    from sales.salesorderheader
    where orderdate >= @date
    order by SalesOrderID
    go

    declare @date datetime = getdate()

    select top 1 SalesOrderID
    from sales.salesorderheader
    where orderdate >= @date
    order by SalesOrderID
    option(recompile)

     

    Let’s compare the estimated execution plans of both queries. In fact, the estimated plan produced by the query optimizer is the same in both cases.

     

    [Figure: estimated execution plan]

     

    Next, let’s compare their real execution plans.

    --> Concerning the first query:

    [Figure: real execution plan of query 1]

     

    --> Concerning the second query (with recompile hint):


    [Figure: real execution plan of query 2 (with recompile hint)]

     

    Interesting, isn’t it? For the same query, apart from the RECOMPILE option, the query optimizer has decided to use an index scan operator in the first case and an index seek in the second. At this point we suspect, of course, that the RECOMPILE hint affects the query optimizer’s decision, but how far? Well, the answer lies in the way the query optimizer handles the local variable @date.

    In the first query, the query optimizer is not aware of the value of the @date variable at compile time.

     

    [Figure: real execution plan of query 1 - predicate]

     

     

    It means that it will not be able to use the density information; instead it will use the standard guess of 30% selectivity for inequality comparisons. If we take a look at the statistics object of the primary key, we can see that 30% of the total rows is 31465 * 0.3 ≈ 9439.

     

    dbcc show_statistics ('sales.salesorderheader', 'PK_SalesOrderHeader_SalesOrderID') with stat_header;

     

    [Figure: statistics of the primary key (stat_header)]
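
    As a cross-check (a hedged sketch, not from the original post), the same total row count can be read straight from the catalog, which again gives 31465 * 0.3 ≈ 9439 estimated rows:

    -- Sketch: total rows of Sales.SalesOrderHeader from the catalog
    select sum(p.rows) as total_rows
    from sys.partitions as p
    where p.object_id = object_id('Sales.SalesOrderHeader')
      and p.index_id in (0, 1);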

     

    But at this point we may wonder why SQL Server is using the primary key rather than, for example, the index on the orderdate column. In fact, the primary key is a good candidate because this query includes an ORDER BY clause (order by SalesOrderID). It means that the data is already ordered by SalesOrderID, and the query optimizer doesn’t need to perform an extra step of sorting the data before applying the top operator, as shown below:

     

    declare @date datetime = getdate()

    select top 1 SalesOrderID
    from sales.salesorderheader
    where orderdate >= @date
    order by SalesOrderID

    select top 1 SalesOrderID
    from sales.salesorderheader with (index (idx_sales_salesorderheader_orderdate))
    where orderdate >= @date
    order by SalesOrderID
    go

     

    [Figure: query 1 - sort step when forcing the orderdate index]

     

    Let’s continue with the second query and notice how SQL Server handles @date this time: the local variable is turned into a parameter with a known value, as shown below:

     

    [Figure: real execution plan of query 2 - predicate]

     

    Hmm… does it mean that SQL Server is aware of the local variable’s value when the statement-level RECOMPILE hint is used? In fact, yes it does, and this detail completely changes the story because in this case SQL Server is able to use the histogram of the index idx_sales_salesorderheader_orderdate. However, we are in a situation where the predicate value is not represented in the histogram, and SQL Server uses an estimate of 1 row, which leads it to an index seek operator. Likewise, we may wonder why SQL Server has decided to seek the index idx_sales_salesorderheader_orderdate with a TopN Sort operator here. Once again, the query optimizer is cost-based and this strategy is surely the least costly. Let’s demonstrate by comparing both strategies:

     

    declare @date datetime = getdate()

    select top 1 SalesOrderID
    from sales.salesorderheader with (index (PK_SalesOrderHeader_SalesOrderID))
    where orderdate >= @date
    order by SalesOrderID
    option(recompile)

    select top 1 SalesOrderID
    from sales.salesorderheader
    where orderdate >= @date
    order by SalesOrderID
    option(recompile)

     

    [Figure: real execution plans - forcing the primary key vs. default]
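
    To see where the estimate of 1 row comes from, a quick look at the histogram of the orderdate index can help (a sketch; the index name is the one created above, and getdate() falls beyond the last RANGE_HI_KEY step):

    -- Sketch: the current date lies beyond the highest RANGE_HI_KEY, hence the 1-row estimate
    dbcc show_statistics ('sales.salesorderheader', 'idx_sales_salesorderheader_orderdate') with histogram;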

     

    Let’s take a look at the CPU and IO consumption statistics of both queries.
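
    (For reference, a sketch of how such figures can be captured in the session before running the two queries above:)

    -- Sketch: enable IO and time statistics, run the queries, then switch them off
    set statistics io on;
    set statistics time on;

    -- ... run the two queries here ...

    set statistics io off;
    set statistics time off;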

    --> Concerning the first query (clustered index scan operator)

    Table 'SalesOrderHeader'. Scan count 1, logical reads 689, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    (1 row(s) affected)

    SQL Server Execution Times:
       CPU time = 16 ms, elapsed time = 2 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 1 ms.

     

    --> Concerning the second query (index seek operator). Yes, in this case you have only to read 2 pages (the index root page and one data page at leaf level)

    Table 'SalesOrderHeader'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    (1 row(s) affected)

    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.

     

    The bottom line: in certain cases we may be surprised by the choices made by the query optimizer, but most of the time we can trust it, and this is what I wanted to point out in this blog post! The truth often lies elsewhere :-)

    Happy optimization!


    BEA-090898 during PlugIn activation in clusters

    Frank van Bortel - Wed, 2015-03-18 08:42
    Be Secure I did not mention it in my not so "OAM-in-a-day" entry, but when you run a clustered environment, make sure to set the "Secure" flag on the AdminServer and Managed Server configuration screens. It does have more impact that setting the "Use JSSE" flag on the SSL/Advanced section of the Weblogic console, but when you failed to do so, that's one place to correct it. Why? No particular Frankhttp://www.blogger.com/profile/07830428804236732019noreply@blogger.com0