Feed aggregator

Pillars of PowerShell #2: Commanding

Pythian Group - Tue, 2015-03-31 07:28
Introduction

This is the second blog post in the series on the Pillars of PowerShell. In the first post we went over the various interfaces that can be used to work with PowerShell. In this post I will start by going through a few terms you might find when you start reading up on PowerShell, and then go over three of the cmdlets you can use to discover and get documentation on the cmdlets available to you in PowerShell.

Pillar 2: Commanding

The following are a few terms I will use throughout this series, and ones you might find referenced in any reading material. I want to introduce them here so we start out on the same page:

  • Session
    When you open PowerShell.exe or PowerShell_ISE.exe it creates a session for you, essentially a blank slate for you to build and create in. You can think of this as a query window within SQL Server Management Studio.
  • Cmdlets
    Pronounced “command-lets”, these are the bread and butter of PowerShell that allow you to do everything from getting information to manipulating it. Microsoft has coined the Verb-Noun format for cmdlets and it has pretty much stuck. Each version of PowerShell comes with additional cmdlets, and product teams like SQL Server and Active Directory also release cmdlets that allow you to interact with their products through PowerShell. Each time you open a session with PowerShell there is a set of core cmdlets that is automatically loaded for you.
  • Module
    A module is basically just a set of cmdlets that can be added within your session of PowerShell. When you load a module into your session, the commands it contains are made available to you. If you close that session and open a new one, you will have to reload that module to access the commands again.
  • Objects
    PowerShell is built on .NET, which is what sits behind the scenes more or less; since .NET is an object-oriented framework, PowerShell treats the data that is returned as objects. So, if I use a cmdlet to return the processes running on a machine, each process that it returns is an object.
  • The Pipeline
    This is named after the symbol used to connect cmdlets together, “|” (the vertical bar on your keyboard). You can think of this like a train: each train car represents a set of objects, and each car you pass them through can do something to each object until you reach the end. There is a short sketch of this after the list.
  • PowerShell Profile
    This is basically the ability to customize your PowerShell session each time you open or start PowerShell. You can do things such as pre-load modules, create custom bits of PowerShell code for reuse or easy access, and many other things. I would compare this to your profile in Windows that keeps up with things like the icons or applications you have pinned to the taskbar or your default browser.
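
To make the module and pipeline terms concrete, here is a minimal sketch (Get-Process, Sort-Object, and Select-Object are core cmdlets; the module name SQLPS is only an example and assumes that module is installed on your machine):

#Load a module into the current session (SQLPS is just an example name)
Import-Module SQLPS

#Pipeline example: each process object flows down the pipe, gets sorted
#by memory usage, and only the top five objects reach the end of the train
Get-Process | Sort-Object WorkingSet -Descending | Select-Object -First 5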

Now I want to take you through a few core cmdlets that are used most commonly to discover what is available in the current version of PowerShell or the module you might be working with in it. I tend to use these commands almost every time I open PowerShell. I do not try to memorize everything, especially when I can look it up so quickly in PowerShell.

Get-Command

This cmdlet does exactly what you think it does: it gets a list of the commands that are available in your current PowerShell session. You can use the parameters of this cmdlet to filter the list down to what interests you, say all the “Get” cmdlets:

Get-Command -Verb Get -CommandType Cmdlet
[screenshot: Get-Command -Verb Get output]

Get-Help

Now you might be wondering where the documentation is for all of the cmdlets you saw using Get-Command. Where is the equivalent of SQL Server’s Books Online? Well, unlike SQL Server, you can actually get the documentation via the cmdlet Get-Help. This cmdlet can return the information to you in the console, or you can use a parameter to open the documentation in the browser, if it is available. So, for example, one of the best things to look up documentation on initially is the Get-Help cmdlet itself:

Get-Help

The output of this command is good to read through, but the main items I want to pull out are three particular parameters, each of which is shown in use after the list:

  1. Online: This will take you to the TechNet page of the documentation for the cmdlet. This may not work with every cmdlet you come across, but if Microsoft owns it there should be something.
  2. Examples: This will provide a few examples and descriptions of how you can use the cmdlet and its more common parameters.
  3. Full: This will show you pretty much the same document that is online; it just keeps you in PowerShell instead of viewing it in the browser.
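
For illustration, here is each of those parameters used against the Get-Command cmdlet:

#Open the online documentation page in your browser
Get-Help Get-Command -Online

#Show usage examples in the console
Get-Help Get-Command -Examples

#Show the full help document in the console
Get-Help Get-Command -Full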

So let me try bringing up the examples of the Get-Help cmdlet itself:

Get-Help Get-Help -Examples
[screenshot: Get-Help -Examples output with the Update-Help message]

If you are using Windows 7 or higher, you may receive something similar to this screenshot. This relates to something that was added in PowerShell version 3.0: the cmdlet Update-Help. This cmdlet is used to update the help files on a computer as needed. In the event Microsoft updates the help files, or the online TechNet page, you can use this to download the current version. Microsoft has moved to this method in place of trying to update the help files locally with cumulative updates or service packs. It does require Internet access to execute the cmdlet; if your machine is not on the Internet, you can download the help files from Microsoft’s download center. In order to fix the above message I just need to issue the command: Update-Help.

You should see a progress bar as shown below while it runs through updating all the files (that progress bar is itself generated using PowerShell):

[screenshot: Update-Help progress bar]

I ended up getting two errors for certain modules, and this is because I was not running the cmdlet with elevated privileges. If you open PowerShell.exe with the “Run As Administrator” option and execute the cmdlet again, it will be able to update all help files without error.
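
As a side note, if a machine has no Internet access at all, a common pattern (sketched below; the share path is a made-up example) is to save the help files once from a connected machine with Save-Help and then point Update-Help at that location:

#On a machine with Internet access, save the help files to a network share
Save-Help -DestinationPath \\FileServer\PSHelp

#On the offline machine, update help from that share (run elevated)
Update-Help -SourcePath \\FileServer\PSHelp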

Now if you run the Get-Help Get-Help -Examples command again you should see the actual examples, although you may notice it can be annoying to scroll back up to read all that information. A tidbit I did not know about right away is that Get-Help has a parameter called “-ShowWindow” that opens the help in a separate window, which makes it easier to read. It is basically the “-Full” output, but with the option to filter out the sections that do not interest you.

Get-Help Get-Help -ShowWindow
[screenshot: Get-Help -ShowWindow window]

You can actually use Get-Help to search for cmdlets as well. I tend to do this more than trying to use Get-Command just because it is a bit quicker. You can just issue something like this to find all the “Get” cmdlets:

Get-Help get*

One more thing about the help system in PowerShell: it also includes things called “about files”, which are basically concept topics that go deeper into certain areas. They offer a wealth of information, and you can also get to these online. Something for you to try on your own to see what is available is to issue this command:

Get-Help about*
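
Once you see the list, you can open any one of these topics by name, for example:

Get-Help about_Pipelines
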
Get-Member

This cmdlet is a little gem that you will use more than anything. If you pipe any cmdlet (or one-liner) to Get-Member, it will provide you with a list of the properties and methods available for the object(s) passed. This cmdlet also includes a “-MemberType” filter that I can use to return only the properties available to me. The properties are those that we can “select” to return as output or pass to other cmdlets down the pipe.

Get-Command | Get-Member -MemberType Property
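
To show what “selecting” those properties looks like in practice, here is a small example (Name and CommandType are two of the properties Get-Member lists for command objects):

#Return just the Name and CommandType properties of each command object
Get-Command | Select-Object Name, CommandType
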
Out-GridView

I am only going to touch on this cmdlet. It can be used to output objects into a table-like view that also offers some filtering capabilities. There are a few different Out-* cmdlets available to you for outputting information to various destinations, and you can find these using the Get-Command or Get-Help cmdlets. To use Out-GridView on a Windows Server OS you will have to add the PowerShell ISE feature; you will get an error stating as much if you do not.

Get-Command | Out-GridView

[screenshot: Get-Command piped to Out-GridView]

Summary

The three cmdlets Get-Command, Get-Help, and Get-Member that I spoke about above are ones I think you should become very familiar with and explore deeply. Once you master using these, you will be able to find out anything and everything about a cmdlet or module you are trying to use. When you start working with modules such as Azure or SQL Server PowerShell (SQLPS), these cmdlets are quite useful for discovering what is available.

Categories: DBA Blogs

Pillars of PowerShell #1: Interacting

Pythian Group - Tue, 2015-03-31 06:55
Introduction

PowerShell is a tool that, if adopted, can be used to help automate and standardize processes in your Windows and SQL Server environment (among other things). This blog series is intended to show you some of the basics (not all of them) that will get you up and running with PowerShell. I say not all of them, because there are areas in PowerShell that you can go pretty deep into, just like SQL Server. I want to give you the initial tools to get you on your way to discovering the awesomeness within PowerShell. I decided to go with a Greek theme and break this series up into pillars. In this first blog post I just want to show you the tools that are available to let you interact with PowerShell itself.

Pillar 1: Interacting

Interacting with PowerShell most commonly means issuing commands directly on the command line interface (CLI); the step above that would be building out a script that contains multiple commands. The first two options below are available “out-of-the-box” on a Windows machine that has PowerShell installed. Beyond those, you have a few third-party options available that I will point out.

  1. PowerShell.exe
    This is the command prompt (or console, as some may call it) where most folks will spend their day-to-day life entering what are referred to as “one-liners” (a sample one-liner appears after this list). This is the CLI for PowerShell. You can access this in Windows by going through the Start Menu, or just type it into the Run prompt.
    powershell.exe
  2. PowerShell_ISE.exe
    This is the PowerShell Integrated Scripting Environment and is included in PowerShell 2.0 and up. This tool gives you the ability to have a script editor and CLI in one place. You can find out more about this tool and the various features that come with each version here. You can access it in the same way you would PowerShell.exe. In Windows Server 2008 R2 and above, though, it is a Windows Feature that has to be added or activated before you can use it.
    [screenshot: PowerShell ISE]
  3. Visual Studio (VS) 2013 Community Edition + PowerShell Tools for Visual Studio 2013
    VS 2013 Community is the free version of Visual Studio that includes the equivalent functionality of Visual Studio Professional Edition. Microsoft opened the door for many things when they did this, the main one being that you can now develop PowerShell scripts alongside your C# or other .NET projects. Adam Driscoll (PowerShell MVP) developed and released an add-on specifically for VS 2013 Community that you can get from GitHub, here.
  4. Third party ISE/Editors
    The following are the main players among the third-party offerings for a PowerShell ISE or script editor. I have tried all of them before, but since they only exist on the machine you install them on, I tend to stick with what is in Windows. They have their place, and if you begin to develop PowerShell heavily (e.g. full project solutions) they can be very useful in the management of your scripts.
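
As promised in item 1 above, here is a taste of what a “one-liner” looks like (Get-Service and Where-Object are core cmdlets available in any of these tools):

#List all services that are currently running, sorted by display name
Get-Service | Where-Object { $_.Status -eq 'Running' } | Sort-Object DisplayName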

Summary

This was a fairly short post that showed you what your options are for working and interacting with PowerShell. PowerShell is a fun tool to work with as you discover new things it can do for you. In this series I will typically stick with using the CLI (PowerShell.exe) for examples.

One more thing to point out: the versions of PowerShell currently released (as of this blog post) are 2.0, 3.0, and 4.0. The basic commands I am going to go over will work in any version, but where specific nuances exist between versions I will try to point them out as needed.

Categories: DBA Blogs

Oracle OpenWorld 2015: Call for Proposals is Now Open

WebCenter Team - Tue, 2015-03-31 06:41
If you’re an Oracle technology expert, conference attendees want to hear it straight from you. So don’t wait—proposals must be submitted by April 29.

Oracle OpenWorld 2015

Wanted: Outstanding Oracle Experts

The Oracle OpenWorld 2015 Call for Proposals is now open. Attendees at the conference are eager to hear from experts on Oracle business and technology. They’re looking for insights and improvements they can put to use in their own jobs: exciting innovations, strategies to modernize their business, different or easier ways to implement, unique use cases, lessons learned, the best of best practices.

If you’ve got something special to share with other Oracle users and technologists, they want to hear from you, and so do we. Submit your proposal now for this opportunity to present at Oracle OpenWorld, the most important Oracle technology and business conference of the year.

We recommend you take the time to review the General Information, Submission Information, Content Program Policies, and Tips and Guidelines pages before you begin. We look forward to your submissions.

Attention HCM customers: HCM Central @ OpenWorld is designed to provide a single place for all things HCM. Have a story related to Oracle implementations around HCM? Submit here.

Attention CX customers: CX Central @ OpenWorld is designed to provide a single place for all things related to the customer lifecycle for all of Oracle's CX customers whose business requires them to definitively differentiate themselves across all channels, touch points, and interactions. Have a story related to Oracle implementations around the Customer Experience lifecycle? Submit here.

Submit Your Proposal

By submitting a session for consideration, you authorize Oracle to promote, publish, display, and disseminate the content submitted to Oracle, including your name and likeness, for use associated with the Oracle OpenWorld and JavaOne San Francisco 2015 conferences. Press, analysts, bloggers and social media users may be in attendance at OpenWorld or JavaOne sessions.

Submit Now.

PowerShell Script to Manipulate SQL Server Backup Files

Pythian Group - Tue, 2015-03-31 06:39
Scenario

I use Ola Hallengren’s famous backup solution to back up my SQL Server databases. The destination for full backups is a directory on local disk; let’s say D:\SQLBackup\

If you are familiar with Ola’s backup scripts, you know the full path for backup file looks something like:

D:\SQLBackup\InstanceName\DatabaseName\FULL\InstanceName_DatabaseName_FULL_yyyymmdd_hhmiss.bak

Where InstanceName is a placeholder for the name of the SQL Server instance and, similarly, DatabaseName is a placeholder for the database name.

Problem

Depending upon my retention period settings, I may have multiple copies of full backup files under the said directory. The directory structure is complicated too (the backup file for each database sits under two parent folders). I want to copy only the latest backup file for each database to a UNC share and rename the backup file, scrubbing everything but the database name.

Let’s say the UNC path is \\RemoteServer\UNCBackup. The end result would have the latest full backup file for all the databases copied over to \\RemoteServer\UNCBackup with files containing their respective database names only.

Solution

I wrote a PowerShell script to achieve the solution. This script can be run from a PowerShell console or PowerShell ISE. A more convenient way would be to use the PowerShell subsystem and schedule a SQL Server Agent job to run this script. As always, please run this on a test system first and use it at your own risk. You may want to tweak the script depending upon your requirements.

 

<#################################################################################

Script Name: CopyLatestBackupandRename.ps1
Author     : Prashant Kumar
Date       : March 29th, 2015

Description: The script is useful for those using Ola Hallengren's backup solution.
             It takes the SQL Server full backup parent folder as one input and
             a remote UNC path as another, copies the latest backup file for
             each database to the remote UNC path, and renames the copied file.

This Sample Code is provided for the purpose of illustration only and is not
intended to be used in a production environment. THIS SAMPLE CODE AND ANY
RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
##################################################################################>

#Clear screen
cls

#Specify the parent folder where full backup files are originally being taken
$SourcePath = 'D:\SQLBackup\InstanceName'

#Specify the UNC path of the network share where backup files have to be copied
$UNCpath = '\\RemoteServer\UNCBackup'

#Browse through subfolders (identical to database names) inside $SourcePath
$SubDirs = dir $SourcePath -Recurse | Where-Object {$_.PSIsContainer} | ForEach-Object -Process {$_.FullName}

#Browse through each sub-directory inside the parent folder
ForEach ($Dirs in $SubDirs)
{
    #List the most recent file (only one) within the sub-directory
    $RecentFile = dir $Dirs | Where-Object {!$_.PSIsContainer} | Sort-Object {$_.LastWriteTime} -Descending | Select-Object -First 1

    #Perform the operation on each file (listed above) one by one
    ForEach ($File in $RecentFile)
    {
        $FilePath = $File.DirectoryName
        $FileName = $File.Name
        $FileToCopy = $FilePath + '\' + $FileName
        $PathToCopy = ($FilePath -replace [regex]::Escape($SourcePath), $UNCpath) + '\'

        #Forcefully create the desired directory structure at the destination if one doesn't exist
        New-Item -ItemType Directory -Path $PathToCopy -Force

        #Copy the backup file
        Copy-Item $FileToCopy $PathToCopy

        #Trim the date and time from the copied file name, store in a variable
        $DestinationFile = $PathToCopy + $FileName
        $RenamedFile = ($DestinationFile.Substring(0, $DestinationFile.Length - 20)) + '.bak'

        #Rename the copied file
        Rename-Item $DestinationFile $RenamedFile
    }
}
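
To run the script manually from a PowerShell console, you would simply call it by path (the path below is just an example):

PS C:\> .\CopyLatestBackupandRename.ps1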


Categories: DBA Blogs

SQL Server 2012 SP2 Cumulative Update 5

Pythian Group - Tue, 2015-03-31 06:20

Hey folks,

Microsoft released the 5th Cumulative Update for SQL Server 2012 SP2. This update package contains fixes for 27 different issues, distributed as follows:

 

[chart: distribution of CU5 fixes]

 

One very important issue that was fixed in this CU release was KB3038943 – Error 4360 when you restore the backup of a secondary replica to another server in AlwaysOn Availability Groups.

If you use SQL Server 2012 SP2 AlwaysOn and you offload your log backups to the secondary node, it is recommended that you apply this patch!

The full Cumulative Update release notes and the download links can be found here: http://support.microsoft.com/en-us/kb/3037255/en-us

 

Categories: DBA Blogs

switch user in Oracle

Laurent Schneider - Tue, 2015-03-31 05:14

Almost a decade ago I wrote about su in sqlplus. This 10gR2 “new” feature allows delegation à la sudo.

By checking DBA_USERS in 12c I found the column PROXY_ONLY_CONNECT. According to Miguel Anjo, there is a secret syntax for allowing connections only through the proxy user.


SQL> ALTER USER app_user PROXY ONLY CONNECT;
SQL> CONNECT app_user/xyz
ERROR:ORA-28058: login is allowed only through a proxy

Rumors about Oracle Reports 12c

Gerd Volberg - Tue, 2015-03-31 00:53
For nearly six weeks, it has been rumored that Oracle Reports will not appear in version 12c.

Since then, I have tried to get information on this topic. After a few emails I got the info I needed.


The rumors are definitely wrong and no one at Oracle wants to cancel Reports 12c.

I hope that this cools down the rumor mill a little bit.

Gerd

Oracle Utilities Customer Self Service (CSS)

Anthony Shorten - Mon, 2015-03-30 22:02

In past releases of Oracle Utilities Customer Care And Billing, a sample Web Self Service (WSS) set of code was shipped for customers and partners to build their own Web Self Service applications.

The WSS sample code has been removed from the Oracle Utilities Customer Care And Billing product in Version 2.4.0.0.0 and above.

It is highly recommended that customers using WSS consider migrating to Oracle Utilities Customer Self Service for a fully integrated self service solution.

The Oracle Utilities Customer Self Service has superior functionality with the following advantages:

For more information about the features of Oracle Utilities Customer Self Service refer to the Oracle Utilities Customer Self Service brochure.

Business Intelligence Cloud Service – Data Modeler

Dylan's BI Notes - Mon, 2015-03-30 19:27
These videos show how the data is loaded into BI Cloud Service and modeled as dimensions and facts. We do not need to use the BI Admin tool to create the model. For BICS, we can create the model using the browser.
Categories: BI & Warehousing

About the Diverging Textbook Prices and Student Expenditures

Michael Feldstein - Mon, 2015-03-30 16:56

By Phil HillMore Posts (302)

This is part 3 in this series. Part 1 described the most reliable data on A) how much US college textbook prices are rising and B) how much students actually pay for textbooks, showing that the College Board data is not reliable for either measure. Part 2 provided additional detail on the data sources (College Board, NCES, NACS, Student Monitor) and their methodologies. Note that the textbook market is moving toward a required course materials market, and in this series I use both terms somewhat interchangeably based on which source I’m quoting. They are largely equivalent, but not identical.

Based on the most reliable data we have, average college textbook prices are rising at three times the rate of inflation while average student expenditures on textbooks are remaining flat or even falling, in either case below the rate of inflation. Average student expenditures of approximately $600 per year are about half of what gets commonly reported in the national media. The combined chart comes from this GAO Report (using CPI data) and this NPR report (using Student Monitor data).

Combined Chart

Does this indicate a functioning market, and does this indicate that we don’t have a textbook pricing problem? No, and no.

Why Are Student Expenditures Not Rising Along With Prices?

The answer to this question can be partly found in the financials of your major publishing company. If students were buying new textbooks at the same rate as they used to, publishing companies would be thriving instead of cutting thousands of employees or even resorting to bankruptcy to stay afloat. Students are increasingly choosing to not buy new textbooks.

Let’s look at the NACS data (this one from Fall 2013 data, new data coming out later this week):

NACS 2013 Did Not Acquire


A few notes to highlight:

  • 30% of surveyed students chose not to acquire at least one required course material. On average, these students skipped acquiring three textbooks in just one term.
  • The top reason in this report is not based on price: 38.5% chose not to acquire required course materials because they felt the materials were not needed or wanted, and 30.2% chose not to acquire based on price.
  • By combining answers, 38.5% chose to borrow the course materials or “it was available elsewhere without purchase”.
  • From the following page (not shown), when asked what students used to substitute for non-acquired course materials:
    • 57.1% just used notes from class;
    • 46.5% borrowed material from friends or libraries; and
    • 19.1% got the chapter or material illegally.

Average expenditures don’t capture the full story, and later in the report it is noted that:

  • Students at two-year colleges spent 31% more than the average on required course materials;
  • Overall first year students spent 23% more than the average on required course materials; and
  • Overall second year students spent 10% more than the average on required course materials.

In other words, the high enrollment courses in the first two years lead to the highest student expenditures on textbooks. Note that we’re still not talking about $1,200 per year spending as often reported based on College Board data, even for these first two years.

Student Monitor also captures some information of note.

  • They report identical data – 30% choosing not to acquire at least one textbook.
  • 29% of students report they bought ‘required course materials’ that ended up not being used. Of these students, 52% will be more likely to “wait longer before purchasing course materials”.
  • They categorize the reasons for not acquiring textbooks differently; professor not using the “required” material was listed by 22% of students, lower than affordability at 31%.
  • 73% of students who downloaded textbooks illegally did so to “save money”.
Negative Impact on Students

It is important to look at both types of data – textbook list prices and student expenditures – to see some of the important market dynamics at play. All in all, students are exercising their market power to keep their expenditures down – buying used, renting, borrowing, obtaining illegally, delaying purchase, or just not using at all. And textbook publishers are suffering, despite (or largely because of) their rising prices.

But there are downsides for students. An increasing number of students are just not using their required course materials, and students often delay purchasing until well into the academic term. Whether from perceived need or from rising prices, this is not a good situation for student retention and learning.

The post About the Diverging Textbook Prices and Student Expenditures appeared first on e-Literate.

Getting inaccessible URL when executing REST calls within JCSSX????

Angelo Santagata - Mon, 2015-03-30 16:16

Recently I was coding up a REST client for use with Oracle Documents Cloud, using the Jersey REST client, and it needed to be deployed to Oracle Java Cloud SX (aka JCSSX). The client code worked perfectly on a local WebLogic 11g, but when deployed to the JCSSX instance it would give the following error:

java.lang.RuntimeException: java.security.AccessControlException: access denied ("java.net.SocketPermission" "partners-pts.documents.us2.somecloud.com:443", "connect,resolve")

Initially I was convinced that this was some sort of networking issue in JCSSX, i.e. it couldn't connect to the Documents Cloud server via the network. I even tried manually setting the proxy in the Java code, all to no avail.

After quite a while looking I discovered the problem...

This is the detailed error message I got :

Caused by: java.security.AccessControlException: access denied ("java.net.SocketPermission" "partners-pts.documents.us2.somecloud.com:443" "connect,resolve")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
at java.security.AccessController.checkPermission(AccessController.java:559)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.SecurityManager.checkConnect(SecurityManager.java:1051)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:510)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:275)

at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:371)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1300)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
... 81 more

The bold bits hint at the issue. For some reason my code was using the Sun HTTP handler, which isn't supported on the JCSSX stack, but I hadn't configured it to use the Sun HTTP handler... You can get your code to use the Sun HTTP handler either by setting the system property "UseSunHttpHandler=true" in code or by using the Oracle Cloud SDK to set it as a system property.

To check if you have the UseSunHttpHandler set, issue the following command (changing your JCSSX details)

 javacloud list-system-properties -user <username> -p <password> -id <identityDomain>-si  <serverInstance> -httpproxy <httpProxy:port> -datacenter <dataCenterName>

If you have the UseSunHttpHandler set to true, or even present, then remove it! Someone had set it in my instance but none of my team members would admit to it.....

 javacloud delete-system-property -user <username> -p <password> -id <identityDomain>-si  <serverInstance> -httpproxy <httpProxy:port> -datacenter <dataCenterName> -name UseSunHttpHandler

Restart your instance and all should then be well.

We've logged an enhancement request to get JCSSX to ignore this specific system property, but I'm documenting the workaround here just in case you hit it before the ER reaches the JCSSX servers.

DOAG Expertenseminar "Parallel Execution Masterclass" (German)

Randolf Geist - Mon, 2015-03-30 15:34
In two weeks the expert seminar "Parallel Execution Masterclass", which I am running together with the DOAG, takes place in Berlin.

There are still a few places available - so if you have the time and inclination to come to Berlin and pick up exclusive knowledge (not only) about the Parallel Execution feature of the Oracle database, I would be delighted to welcome you there along with the other participants and to spend a good and productive time together!

If you are interested, please get in touch with the DOAG contacts given in the link - there you will also find a more detailed description of the seminar.

Weird issue with sys.fn_hadr_backup_is_preferred_replica() function

Yann Neuhaus - Mon, 2015-03-30 13:11

A couple of days ago, I faced a weird backup issue with SQL Server AlwaysOn and availability groups at one of my customers (thanks to him for pointing out this issue :-) ). After installing our DMK tool (Database Management Kit) for database maintenance with AlwaysOn, my customer noticed that their databases had not been backed up. Ouch … what's going on? I had never run into this issue before... Did the problem come from our tool?

In fact, our DMK uses the useful DMF sys.fn_hadr_backup_is_preferred_replica() to know which databases are candidates for backup operations on the replicas at a given time, and this is where our issue starts. Indeed, in a specific situation that includes both a case-sensitive server collation and replica names entered in lower case, we found that the result of this function is inconsistent. Let me show you with an example.

In my customer’s context, the replica names have been filled out from a PowerShell script form in lower case as follows:

 

[screenshot: PowerShell form with replica names entered in lower case]

 

Let’s take a look at the system view to check the availability group configuration:

 

SELECT
    replica_server_name,
    availability_mode_desc,
    failover_mode_desc
FROM sys.availability_replicas

 

[screenshot: replica server names in lower case]

 

Let’s verify that the collation of the SQL Server instance is case sensitive …

 

SELECT SERVERPROPERTY('Collation') AS ServerCollation;

 

[screenshot: case-sensitive server collation]

 

… and the backup preference policy is configured to “primary”

 

SELECT
    name AS group_name,
    automated_backup_preference_desc AS backup_preference
FROM sys.availability_groups

 

[screenshot: automated backup preference set to primary]

 

Finally, let’s verify the database inside the availability group:

 

SELECT
    g.name AS group_name,
    r.replica_server_name AS replica_name,
    dcs.database_name,
    drs.database_state_desc AS db_state
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS r
    ON drs.replica_id = r.replica_id
JOIN sys.availability_groups AS g
    ON g.group_id = drs.group_id
JOIN sys.dm_hadr_database_replica_cluster_states AS dcs
    ON dcs.group_database_id = drs.group_database_id
        AND dcs.replica_id = drs.replica_id

 

[screenshot: DUMMY database state on both replicas]

 

Ok, now let's take a look at the result of the DMF sys.fn_hadr_backup_is_preferred_replica() in this context. Here is a simplified sample portion of the T-SQL code used in our DMK:

 

USE master;
GO

DECLARE @db_name SYSNAME;

SELECT @db_name = name FROM sys.databases WHERE name = N'DUMMY';

SELECT
    @@SERVERNAME AS server_name,
    @db_name AS database_name,
    sys.fn_hadr_backup_is_preferred_replica(@db_name) AS fn_result;

 

Concerning the primary:

 

[screenshot: function result on the primary replica]

 

Concerning the secondary:

 

[screenshot: function result on the secondary replica]

 

 

If you perform the same test after configuring the replica names in upper case this time, you will notice that the issue disappears. When I think about this issue, it is true that in most cases customers prefer to use the wizard to configure availability groups, and in that case have you noticed that the replica names are always switched to upper case?

There is also a Microsoft Connect item about this problem, but unfortunately it seems that it will not be fixed by Microsoft … so be careful when you implement availability groups by script.

See you on the next availability group venture!


Oracle GoldenGate, MySQL and Flume

Rittman Mead Consulting - Mon, 2015-03-30 13:05

Back in September Mark blogged about Oracle GoldenGate (OGG) and HDFS . In this short followup post I’m going to look at configuring the OGG Big Data Adapter for Flume, to trickle feed blog posts and comments from our site to HDFS. If you haven’t done so already, I strongly recommend you read through Mark’s previous post, as it explains in detail how the OGG BD Adapter works.  Just like Hive and HDFS, Flume isn’t a fully-supported target so we will use Oracle GoldenGate for Java Adapter user exits to achieve what we want.

What we need to do now is

  1. Configure our MySQL database to be fit for duty for GoldenGate.
  2. Install and configure Oracle GoldenGate for MySQL on our DB server
  3. Create a new OGG Extract and Trail files for the database tables we want to feed to Flume
  4. Configure a Flume Agent on our Cloudera cluster to ‘sink’ to HDFS
  5. Create and configure the OGG Java adapter for Flume
  6. Create External Tables in Hive to expose the HDFS files to SQL access

OGG and Flume

Setting up the MySQL Database Source Capture

The MySQL database I will use for this example contains blog posts, comments etc from our website. We now want to use Oracle GoldenGate to capture new blog post and our readers’ comments and feed this information in to the Hadoop cluster we have running in the Rittman Mead Labs, along with other feeds, such as Twitter and activity logs.

The database has to be configured to use binary logging, and we also need to ensure that the socket file can be found in /tmp/mysql.socket. You can find the details for this in the documentation. We also need to make sure that the tables we want to extract from are using the InnoDB engine and not the default MyISAM one. The engine can easily be changed by issuing

alter table wp_mysql.wp_posts engine=InnoDB;

Assuming we have already installed OGG for MySQL under /opt/oracle/OGG/, we can now go ahead and configure the Manager process and the Extract for our tables. The tables we are interested in are

wp_mysql.wp_posts
wp_mysql.wp_comments
wp_mysql.wp_users
wp_mysql.wp_terms
wp_mysql.wp_term_taxonomy

First configure the manager

-bash-4.1$ cat dirprm/mgr.prm 
PORT 7809
PURGEOLDEXTRACTS /opt/oracle/OGG/dirdat/*, USECHECKPOINTS

Now configure the Extract to capture changes made to the tables we are interested in

-bash-4.1$ cat dirprm/mysql.prm 
EXTRACT mysql
SOURCEDB wp_mysql, USERID root, PASSWORD password
discardfile /opt/oracle/OGG/dirrpt/FLUME.dsc, purge
EXTTRAIL /opt/oracle/OGG/dirdat/et
GETUPDATEBEFORES
TRANLOGOPTIONS ALTLOGDEST /var/lib/mysql/localhost-bin.index
TABLE wp_mysql.wp_comments;
TABLE wp_mysql.wp_posts;
TABLE wp_mysql.wp_users;
TABLE wp_mysql.wp_terms;
TABLE wp_mysql.wp_term_taxonomy;

We should now be able to create the extract and start the process, as with a normal extract.

ggsci>add extract mysql, tranlog, begin now
ggsci>add exttrail ./dirdat/et, extract mysql
ggsci>start extract mysql
ggsci>info mysql
ggsci>view report mysql

We will also have to generate metadata to describe the table structures in the MySQL database. This file will be used by the Flume adapter to map columns and data types to the Avro format.

-bash-4.1$ cat dirprm/defgen.prm 
-- To generate trail source-definitions for GG v11.2 Adapters, use GG 11.2 defgen,
-- or use GG 12.1.x defgen with "format 11.2" definition format.
-- If using GG 12.1.x as a source for GG 11.2 adapters, also generate format 11.2 trails.

-- UserId logger, Password password
SOURCEDB wp_mysql, USERID root, PASSWORD password

DefsFile dirdef/wp.def

TABLE wp_mysql.wp_comments;
TABLE wp_mysql.wp_posts;
TABLE wp_mysql.wp_users;
TABLE wp_mysql.wp_terms;
TABLE wp_mysql.wp_term_taxonomy;
-bash-4.1$ ./defgen PARAMFILE dirprm/defgen.prm 

***********************************************************************
        Oracle GoldenGate Table Definition Generator for MySQL
      Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140920.0203
...

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
SOURCEDB wp_mysql, USERID root, PASSWORD ******
DefsFile dirdef/wp.def
TABLE wp_mysql.wp_comments;
Retrieving definition for wp_mysql.wp_comments.
TABLE wp_mysql.wp_posts;
Retrieving definition for wp_mysql.wp_posts.
TABLE wp_mysql.wp_users;
Retrieving definition for wp_mysql.wp_users.
TABLE wp_mysql.wp_terms;
Retrieving definition for wp_mysql.wp_terms.
TABLE wp_mysql.wp_term_taxonomy;
Retrieving definition for wp_mysql.wp_term_taxonomy.


Definitions generated for 5 tables in dirdef/wp.def.

Setting up the OGG Java Adapter for Flume

The OGG Java Adapter for Flume will use the EXTTRAIL created earlier as a source, pack the data up and feed to the cluster Flume Agent, using Avro and RPC. The Flume Adapter thus needs to know

  • Where is the OGG EXTTRAIL to read from
  • How to treat the incoming data and operations (e.g. Insert, Update, Delete)
  • Where to send the Avro messages to

First we create a parameter file for the Flume Adapter

-bash-4.1$ cat dirprm/flume.prm
EXTRACT flume
SETENV ( GGS_USEREXIT_CONF = "dirprm/flume.props")
CUSEREXIT libggjava_ue.so CUSEREXIT PASSTHRU INCLUDEUPDATEBEFORES
GETUPDATEBEFORES
NOCOMPRESSUPDATES
SOURCEDEFS ./dirdef/wp.def
DISCARDFILE ./dirrpt/flume.dsc, purge

TABLE wp_mysql.wp_comments;
TABLE wp_mysql.wp_posts;
TABLE wp_mysql.wp_users;
TABLE wp_mysql.wp_terms;
TABLE wp_mysql.wp_term_taxonomy;

There are two things to note here

  • The OGG Java Adapter User Exit is configured in a file called flume.props
  • The source tables’ structures are defined in wp.def

The flume.props file is a ‘standard’ User Exit config file

-bash-4.1$ cat dirprm/flume.props 
gg.handlerlist=ggflume

gg.handler.ggflume.type=com.goldengate.delivery.handler.flume.FlumeHandler
gg.handler.ggflume.host=bd5node1.rittmandev.com
gg.handler.ggflume.port=4545

gg.handler.ggflume.rpcType=avro
gg.handler.ggflume.delimiter=;
gg.handler.ggflume.mode=tx
gg.handler.ggflume.includeOpType=true
# Indicates if the operation timestamp should be included as part of output in the delimited separated values
# true - Operation timestamp will be included in the output
# false - Operation timestamp will not be included in the output
# Default :- true
gg.handler.ggflume.includeOpTimestamp=true

# Optional properties to use the transaction grouping functionality
#gg.handler.ggflume.maxGroupSize=1000
#gg.handler.ggflume.minGroupSize=1000

### native library config ###
goldengate.userexit.nochkpt=TRUE
goldengate.userexit.timestamp=utc
goldengate.log.logname=cuserexit
goldengate.log.level=INFO
goldengate.log.tofile=true
goldengate.userexit.writers=javawriter

gg.report.time=30sec
gg.classpath=AdapterExamples/big-data/flume/target/flume-lib/*

javawriter.stats.full=TRUE
javawriter.stats.display=TRUE
javawriter.bootoptions=-Xmx32m -Xms32m -Djava.class.path=ggjava/ggjava.jar -Dlog4j.configuration=log4j.properties

Some points of interest here are

  • The Flume agent we will send our data to is running on port 4545 on host bd5node1.rittmandev.com
  • We want each record to be prefixed with I(nsert), U(pdated) or D(delete)
  • We want each record to be postfixed with a timestamp of the transaction date
  • The Java class com.goldengate.delivery.handler.flume.FlumeHandler will do the actual work. (The curious reader can view the code in /opt/oracle/OGG/AdapterExamples/big-data/flume/src/main/java/com/goldengate/delivery/handler/flume/FlumeHandler.java)

Before starting up the OGG Flume extract, let's first make sure that the Flume agent on bd5node1 is configured to receive our Avro messages (Source) and knows what to do with the data (Sink):

a1.channels = c1
a1.sources = r1
a1.sinks = k2
a1.channels.c1.type = memory
a1.sources.r1.channels = c1 
a1.sources.r1.type = avro 
a1.sources.r1.bind = bda5node1
a1.sources.r1.port = 4545
a1.sinks.k2.type = hdfs
a1.sinks.k2.channel = c1
a1.sinks.k2.hdfs.path = /user/flume/gg/%{SCHEMA_NAME}/%{TABLE_NAME} 
a1.sinks.k2.hdfs.filePrefix = %{TABLE_NAME}_ 
a1.sinks.k2.hdfs.writeFormat=Writable 
a1.sinks.k2.hdfs.rollInterval=0
a1.sinks.k2.hdfs.hdfs.rollSize=1048576
a1.sinks.k2.hdfs.rollCount=0
a1.sinks.k2.hdfs.batchSize=100 
a1.sinks.k2.hdfs.fileType=DataStream

Here we note that

  • The agent’s source (inbound data stream) is to run on port 4545 and to use avro
  • The agent’s sink will write to HDFS and store the files  in /user/flume/gg/%{SCHEMA_NAME}/%{TABLE_NAME}
  • The HDFS files will be rolled over every 1Mb (1048576 bytes)

We are now ready to head back to the webserver that runs the MySQL database and start the Flume extract, which will feed all committed MySQL transactions against our selected tables to the Flume agent on the cluster, which in turn will write the data to HDFS

-bash-4.1$ export LD_LIBRARY_PATH=/usr/lib/jvm/jdk1.7.0_55/jre/lib/amd64/server
-bash-4.1$ export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_55/
-bash-4.1$ ./ggsci
ggsci>add extract flume, exttrailsource ./dirdat/et 
ggsci>start flume
ggsci>info flume
EXTRACT    FLUME     Last Started 2015-03-29 17:51   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Process ID           24331
Log Read Checkpoint  File /opt/oracle/OGG/dirdat/et000008
                     2015-03-29 17:51:45.000000  RBA 7742

If I now submit this blog post, I should see the results showing up on our Hadoop cluster in the Rittman Mead Labs.

[oracle@bda5node1 ~]$ hadoop fs -ls /user/flume/gg/wp_mysql/wp_posts
-rw-r--r--   3 flume  flume   3030 2015-03-30 16:40 /user/flume/gg/wp_mysql/wp_posts/wp_posts_.1427729981456

We can quickly create an external table in Hive to view the results with SQL

hive> CREATE EXTERNAL TABLE wp_posts(
     op string, 
 ID                     int,
 post_author            int,
 post_date              String,
 post_date_gmt          String,
 post_content           String,
 post_title             String,
 post_excerpt           String,
 post_status            String,
 comment_status         String,
 ping_status            String,
 post_password          String,
 post_name              String,
 to_ping                String,
 pinged                 String,
 post_modified          String,
 post_modified_gmt      String,
 post_content_filtered  String,
 post_parent            int,
 guid                   String,
 menu_order             int,
 post_type              String,
 post_mime_type         String,
 comment_count          int,
     op_timestamp timestamp
  )
 COMMENT 'External table ontop of GG Flume sink, landed in hdfs'
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ';'
 STORED AS TEXTFILE
 LOCATION '/user/flume/gg/wp_mysql/wp_posts/';

hive> select post_title from gg_flume.wp_posts where op='I' and id=22112;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1427647277272_0017, Tracking URL = http://bda5node1.rittmandev.com:8088/proxy/application_1427647277272_0017/
Kill Command = /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/bin/hadoop job  -kill job_1427647277272_0017
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 0
2015-03-30 16:51:17,715 Stage-1 map = 0%,  reduce = 0%
2015-03-30 16:51:32,363 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 1.88 sec
2015-03-30 16:51:33,422 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
MapReduce Total cumulative CPU time: 3 seconds 380 msec
Ended Job = job_1427647277272_0017
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2   Cumulative CPU: 3.38 sec   HDFS Read: 3207 HDFS Write: 35 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 380 msec
OK
Oracle GoldenGate, MySQL and Flume
Time taken: 55.613 seconds, Fetched: 1 row(s)

Please leave a comment and you’ll be contributing to an OGG Flume!

Categories: BI & Warehousing

Log Buffer #416, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-03-30 12:29

This log buffer edition sprouts from the beauty, glamour and intelligence of various blog posts from Oracle, SQL Server, and MySQL.

Oracle:

Oracle Exadata Performance: Latest Improvements and Less Known Features

Exadata Storage Index Min/Max Optimization

Oracle system V shared memory indicated deleted

12c Parallel Execution New Features: Concurrent UNION ALL

Why index monitoring makes Connor scratch his head and charge off to Google so many times.

SQL Server:

Learn how to begin unit testing with tSQLt and SQL Server.

‘Temporal’ tables contain facts that are valid for a period of time. When they are used for financial information they have to be very well constrained to prevent errors getting in and causing incorrect reporting.

As big data application success stories (and failures) have appeared in the news and technical publications, several myths have emerged about big data. This article explores a few of the more significant myths, and how they may negatively affect your own big data implementation.

When effective end dates don’t align properly with effective start dates for subsequent rows, what are you to do?

In order to automate the delivery of an application together with its database, you probably just need the extra database tools that allow you to continue with your current source control system and release management system by integrating the database into it.

MySQL:

Ronald Bradford on SQL, ANSI Standards, PostgreSQL and MySQL.

How to Manage the World’s Top Open Source Databases: ClusterControl 1.2.9 Features Webinar Replay

A few interesting findings on MariaDB and MySQL scalability, multi-table OLTP RO

MariaDB: The Differences, Expectations, and Future

How to Tell If It’s MySQL Swapping

Categories: DBA Blogs

New blog to handle the PJC/Bean articles

Francois Degrelle - Mon, 2015-03-30 12:06

Here is the link to another place that stores the PJC/Bean articles without ads.

http://forms.pjc.bean.blog.free.fr/

Francois

 

Dimensional Modeling

Dylan's BI Notes - Mon, 2015-03-30 11:06
Moved the content into a page – Dimensional Modeling
Categories: BI & Warehousing

A command-line alternative to PeopleSoft SendMaster

Javier Delgado - Mon, 2015-03-30 10:05
If you are familiar with PeopleSoft Integration Broker, I'm sure you have dealt with SendMaster to some degree. This is a very simple yet useful tool to perform unit tests of Integration Broker incoming service operations using plain XML (if I'm dealing with SOAP web services, I normally use SoapUI, for which there is a very good article on PeopleSoft Wiki).

Most of the time SendMaster is enough, but today I came across a problem that required an alternative. While testing an XML message with this tool against an HTTPS PeopleSoft installation, I got the following error message:

Error communicating with server: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

After checking in My Oracle Support, I've found the following resolution (doc 1634045.1):

The following steps will resolve the error:

1) Import the appropriate SSL certificate into the Java keystore PS_HOME\jre\lib\security\cacerts or Integration Broker's keystore location, i.e. the pskey file
2) Set SendMaster's preferences (via File - Preferences - HTTP tab) to point to the keystore with the appropriate SSL certificate
3) Test
Unfortunately, I didn't have access to the appropriate SSL certificate, so I decided to use curl, a pretty old (dating back to 1997 according to the all-knowing Wikipedia) but still useful command-line tool.



curl is a command-line tool that can be used to test HTTP and HTTPS operations, including GET, PUT, POST and so on. One of the features of this tool is that it can run in "insecure" mode, eliminating the need for a valid certificate chain when testing URLs over HTTPS. On both Linux and Mac OS, the option to run in insecure mode is -k. The command line to test my service operation then looked like:

curl -X POST -d @test.xml -k https://<server>/PSIGW/HttpListeningConnector 

Please note that the @ option actually requests curl to take the data from the file following it. Instead of doing so, you can specify the data in the command line, but it is a bit more cumbersome.

Also, keep in mind that curl is not delivered with Windows out of the box, but you can download similar tools from several sources (for instance, this one).
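
Alternatively, if you are on Windows with PowerShell 3.0 or later, a rough equivalent can be sketched with Invoke-RestMethod. The certificate-validation override below is a quick, test-only hack (the PowerShell analogue of curl's -k), and the URL is the same Integration Broker listening connector:

#Test-only: accept any server certificate for the rest of this session
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

#POST the contents of test.xml to the Integration Broker listening connector
Invoke-RestMethod -Method Post -Uri "https://<server>/PSIGW/HttpListeningConnector" -InFile test.xml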

PTS Sample code now available on GitHub

Angelo Santagata - Mon, 2015-03-30 08:43

Not sure many people know about this, but some time ago my team created a whole collection of sample code. This code is available on OTN at this location, but it is now also available on GitHub here!

We'll be updating this repository with some new code soon; when we do, I'll make sure to update this blog entry.

IBM Bluemix demo using IBM Watson Personality Insights service

Pas Apicella - Mon, 2015-03-30 04:31
The IBM Watson Personality Insights service uses linguistic analysis to extract cognitive and social characteristics from input text such as email, text messages, tweets, forum posts, and more. By deriving cognitive and social preferences, the service helps users to understand, connect to, and communicate with other people on a more personalized level.

1. Clone the GitHub repo as shown below.

pas@192-168-1-4:~/bluemix-apps/watson$ git clone https://github.com/watson-developer-cloud/personality-insights-nodejs.git
Cloning into 'personality-insights-nodejs'...
remote: Counting objects: 84, done.
remote: Total 84 (delta 0), reused 0 (delta 0), pack-reused 84
Unpacking objects: 100% (84/84), done.
Checking connectivity... done.

2. Create the service as shown below.

pas@192-168-1-4:~/bluemix-apps/watson/personality-insights-nodejs$ cf create-service personality_insights "IBM Watson Personality Insights Monthly Plan" personality-insights-service
Creating service personality-insights-service in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

3. Edit the manifest.yml to use a unique application name; I normally use {myname}-appname.

---
declared-services:
  personality-insights-service:
    label: personality_insights
    plan: 'IBM Watson Personality Insights Monthly Plan'

applications:
- name: pas-personality-insights-nodejs
  command: node app.js
  path: .
  memory: 256M
  services:
  - personality-insights-service

4. Push the application as shown below.

pas@192-168-1-4:~/bluemix-apps/watson/personality-insights-nodejs$ cf push
Using manifest file /Users/pas/ibm/bluemix/apps/watson/personality-insights-nodejs/manifest.yml

Creating app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Creating route pas-personality-insights-nodejs.mybluemix.net...
OK

Binding pas-personality-insights-nodejs.mybluemix.net to pas-personality-insights-nodejs...
OK

Uploading pas-personality-insights-nodejs...
Uploading app files from: /Users/pas/ibm/bluemix/apps/watson/personality-insights-nodejs
Uploading 188.5K, 30 files
Done uploading
OK
Binding service personality-insights-service to app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Starting app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
-----> Downloaded app package (192K)
-----> Node.js Buildpack Version: v1.14-20150309-1555
-----> Requested node range:  >=0.10
-----> Resolved node version: 0.10.36
-----> Installing IBM SDK for Node.js from cache
-----> Checking and configuring service extensions
-----> Installing dependencies
       errorhandler@1.3.5 node_modules/errorhandler
       ├── escape-html@1.0.1
       └── accepts@1.2.5 (negotiator@0.5.1, mime-types@2.0.10)
       body-parser@1.11.0 node_modules/body-parser
       ├── bytes@1.0.0
       ├── media-typer@0.3.0
       ├── raw-body@1.3.2
       ├── depd@1.0.0
       ├── qs@2.3.3
       ├── on-finished@2.2.0 (ee-first@1.1.0)
       ├── iconv-lite@0.4.6
       └── type-is@1.5.7 (mime-types@2.0.10)
       express@4.11.2 node_modules/express
       ├── escape-html@1.0.1
       ├── merge-descriptors@0.0.2
       ├── utils-merge@1.0.0
       ├── methods@1.1.1
       ├── fresh@0.2.4
       ├── cookie@0.1.2
       ├── range-parser@1.0.2
       ├── cookie-signature@1.0.5
       ├── media-typer@0.3.0
       ├── finalhandler@0.3.3
       ├── vary@1.0.0
       ├── parseurl@1.3.0
       ├── serve-static@1.8.1
       ├── content-disposition@0.5.0
       ├── path-to-regexp@0.1.3
       ├── depd@1.0.0
       ├── qs@2.3.3
       ├── on-finished@2.2.0 (ee-first@1.1.0)
       ├── debug@2.1.3 (ms@0.7.0)
       ├── etag@1.5.1 (crc@3.2.1)
       ├── proxy-addr@1.0.7 (forwarded@0.1.0, ipaddr.js@0.1.9)
       ├── send@0.11.1 (destroy@1.0.3, ms@0.7.0, mime@1.2.11)
       ├── accepts@1.2.5 (negotiator@0.5.1, mime-types@2.0.10)
       └── type-is@1.5.7 (mime-types@2.0.10)
       jade@1.9.2 node_modules/jade
       ├── character-parser@1.2.1
       ├── void-elements@2.0.1
       ├── commander@2.6.0
       ├── mkdirp@0.5.0 (minimist@0.0.8)
       ├── with@4.0.2 (acorn-globals@1.0.3, acorn@1.0.1)
       ├── constantinople@3.0.1 (acorn-globals@1.0.3)
       └── transformers@2.1.0 (promise@2.0.0, css@1.0.8, uglify-js@2.2.5)
       watson-developer-cloud@0.9.8 node_modules/watson-developer-cloud
       ├── object.pick@1.1.1
       ├── cookie@0.1.2
       ├── extend@2.0.0
       ├── isstream@0.1.2
       ├── async@0.9.0
       ├── string-template@0.2.0 (js-string-escape@1.0.0)
       ├── object.omit@0.2.1 (isobject@0.2.0, for-own@0.1.3)
       └── request@2.53.0 (caseless@0.9.0, json-stringify-safe@5.0.0, aws-sign2@0.5.0, forever-agent@0.5.2, form-data@0.2.0, stringstream@0.0.4, oauth-sign@0.6.0, tunnel-agent@0.4.0, qs@2.3.3, node-uuid@1.4.3, mime-types@2.0.10, combined-stream@0.0.7, http-signature@0.10.1, tough-cookie@0.12.1, bl@0.9.4, hawk@2.3.1)
-----> Caching node_modules directory for future builds
-----> Cleaning up node-gyp and npm artifacts
-----> No Procfile found; Adding npm start to new Procfile
-----> Building runtime environment
-----> Checking and configuring service extensions
-----> Installing App Management
-----> Node.js Buildpack is done creating the droplet

-----> Uploading droplet (12M)

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-personality-insights-nodejs was started using this command `node app.js`

Showing health and status for app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: pas-personality-insights-nodejs.mybluemix.net
last uploaded: Mon Mar 30 10:18:37 +0000 2015

     state     since                    cpu    memory   disk     details
#0   running   2015-03-30 09:20:06 PM   0.0%   0 of 0   0 of 0

5. Access Application


This demo is based on the link below.

https://github.com/watson-developer-cloud/personality-insights-nodejs

More information as follows

http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/personality-insights/

Categories: Fusion Middleware