
Feed aggregator

UnifiedPush Server: Docker, WildFly and another Beta release!

Matthias Wessendorf - Fri, 2014-08-15 04:07

Today we are announcing the second beta release of our 1.0.0 version. This release contains several improvements

  • WildFly 8.x support
  • PostgreSQL fix
  • Scheduler component for deleting analytics older than 30 days
  • Improvements on the AdminUI
  • Documentation

The complete list of included items is available on our JIRA instance.

With the release of the server we also released new versions of the senders for Java and Node.js!

Docker

The team is extremely excited about the work that Docktor Bruno Oliveira did on our new Docker images:

Check them out!

Documentation

As mentioned above, the documentation for the UnifiedPush Server has been reorganized, including an all new guide on how to use the UnifiedPush Server.

Demos

To get easily started using the UnifiedPush Server we have a bunch of demos, supporting various client platforms:

  • Android
  • Apache Cordova (with jQuery and Angular/Ionic)
  • iOS

The simple HelloWorld examples are located here. Some more advanced examples, including a Picketlink secured JAX-RS application, as well as a Fabric8 based Proxy, are available here.

Docker

Bruno Oliveira did Docker images for the Quickstart as well:

Feedback

We hope you enjoy the bits and we do appreciate your feedback! Swing by our mailing list! We are looking forward to hearing from you!

NOTE: the OpenShift Online offering will be updated within the next day or two.

Enjoy!


Oracle Priority Service Infogram for 14-AUG-2014

Oracle Infogram - Thu, 2014-08-14 15:41

OpenWorld
Each week leading up to OpenWorld we will be publishing various announcements, schedules, tips, etc. related to the event.
This week:
The Storage Forum at Oracle OpenWorld 2014.
Oracle Support
We’ve reported before on the My Oracle Support Accreditation Series. The My Oracle Support blog lets us know about New Products Added to the series.
RDBMS
From the Oracle Database In-Memory blog: Getting started with Oracle Database In-Memory Part I - Installing & Enabling.
Performance
From the A Wider View blog: Watch Oracle DB Session Activity With The Real-Time Session Sampler.
From the ORACLE DIAGNOSTICIAN: ASH Basics.
ODI
OWB to ODI 12c Migration in action, from the Data Integration blog.
Solaris
From The Observatory blog: VXLAN in Solaris 11.2.
SPARC
From EnterpriseTech: Oracle Cranks Up The Cores To 32 With Sparc M7 Chip.
Security
IT-Security (Part 6): WebLogic Server and Authorization, from The Cattle Crew.
MAF
From Shay Shmeltzer's Weblog: Required Field Validation in Oracle MAF.
SOA
SOA Transformation through SOA Upgrade, from the SOA & BPM Partner Community Blog.
ADF and BPM
From the Dreamix Group: The Ultimate Guide to Separating BPM from ADF.
From the Waslley Souza Blog: Communication between Task Flows using Task Flow Parameters.
From Andrejus Baranovskis Blog: ADF Thematic Map in ADF 12c (12.1.3).
IOUG
Always good to take an occasional look at upcoming events from IOUG.
…and Finally
Some of the trends of today, based on buzz words:
A year of tech industry hype in a single graph, from The Verge.
And some of the attempted buzzes of the past that went buzzzzzz….THUD!

22 Of The Most Epic Product Fails in History, from Business Insider.


OGG-00212, what a frustrating error.

DBASolved - Thu, 2014-08-14 14:50

Normally, I don’t mind errors when I’m working with Oracle GoldenGate (OGG); I actually like getting errors: they keep me on my toes and give me something to solve.  Clients, on the other hand, do not like errors…LOL.  Solving errors in OGG is normally pretty straightforward with the help of the documentation, although today I almost have to disagree with the docs.

Today, as I’ve been working on implementing a solution with OGG 11.1.x on the source side and OGG 11.2.x on the target side, this error came up as I was trying to start the OGG 11.1.x Extracts:

OGG-00212  Invalid option for MAP: PMP_GROUP=@GETENV(“GGENVIRONMENT”.

OGG-00212  Invalid option for MAP:  TOKENS(.

I looked around in the OGG documentation and other resources (online and offline).  Some errors are self-explanatory; that is not the case with OGG-00212.  Looking up the error in the OGG 11.1.x docs was pointless; it didn’t exist there.  When I finally found the error in the docs for OGG 11.2.x, they say:

OGG-00212: Invalid option for [0]:{1}
Cause: The parameter could not be parsed because the specified option is invalid.
Action: Fix the syntax

Now that the documentation has stated the obvious, how is the error actually corrected?  There is no easy way to correct this error because it is syntax related.  In the case I was having, the error was being thrown because additional spaces were needed in the TABLE mapping.  Silly, I know, but true.

Keep in mind, to fix an OGG-00212 error, especially with OGG 11.1.x or older, remember to add spaces where you may not think one is needed.

Example (causes the error):

TABLE <schema>.SETTINGS,TOKENS( #opshb_info() );

Example (fixed the error):

TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );

Notice the space between the comma (,) and TOKENS, and also between TOKENS and the opening parenthesis. Those simple changes fixed the OGG-00212 error I was getting.

Hope this helps!

Enjoy!

http://about.me/dbasolved

Filed under: Golden Gate
Categories: DBA Blogs

PeopleTools 8.54 – New Functionality, New Browser Releases

PeopleSoft Technology Blog - Thu, 2014-08-14 14:24

PeopleTools 8.54 has numerous enhancements to offer, but it does have some browser requirements that go along with it as well.  As customers look at the much-improved user interface along with other enhancements being delivered in PeopleTools 8.54, they will look to take advantage of the release and plan their upgrade.  As they plan, diligent customers will review certifications and notice that PeopleTools 8.54 certifies only newer browser releases.  For Internet Explorer, IE 9 is the absolute minimum for the Classic UI, and IE 11 (or higher) is required to take advantage of the new Fluid UI.  As they continue to prepare for the upgrade, some may discover that they have older applications that have hard requirements for a less-than-current IE release.  Can you say IE 8?  I have come across a couple of scenarios already where customers have that one critical application that hasn’t been updated in 5+ years and requires the use of IE 8; it can’t work with IE 9 or above.  Another situation I’ve seen is where a customer isn’t scheduled to roll out IE 9 (or above) to their user base prior to their scheduled go-live date.

As they evaluate options, some customers facing this situation are able to implement a dual browser environment using Chrome or Firefox for PeopleSoft, and an older IE browser required for antiquated applications.  Other customers have begun to ask what PeopleTools functionality might simply not work if they decide to move forward with an uncertified browser environment.  Since we test our software with certified browser combinations, we sometimes aren’t sure which features might be partially supported by older browsers, and which simply won’t work.

The problem is that older versions of Internet Explorer simply did not contain the functionality that current browsers have.  While those versions render HTML and CSS content, they often do so in very specific, non-standard fashion.  A state-of-the-art application like PeopleSoft relies on rich functionality implemented in the latest versions of the HTML and CSS standards.  PeopleSoft utilizes AJAX and the latest accessibility recommendations as outlined in the Web Accessibility Initiative – Accessible Rich Internet Applications Suite (WAI-ARIA).  IE 8 and previous browsers were not designed to support this functionality adequately, and cannot deliver the performance that modern browsers do.

We expect that the following areas would be problematic if using IE 8 or (gulp) something older with the Classic UI.  Of course, Fluid UI functionality will not work.

  • Layout issues
  • Nonfunctional breadcrumbs
  • Accessibility problems
  • Performance issues
  • Problematic charts and graphs
  • Mobile Application Platform (MAP)

Note that there are almost certainly other issues that would arise from the use of outdated and uncertified browsers.

As you make plans to roll out the best release of PeopleTools yet, we STRONGLY recommend that you use only certified environment components.  We test and certify environments for a reason - so that we can find as many issues as possible, before you do.  Should you find a bug we missed, our Support organization stands ready to assist you in obtaining a resolution in your certified environment.  We want your roll out to be as smooth as possible - take advantage of our testing and give your users the best experience available.

How RDX’s BI services make a difference: Additional Services Series pt. 3 [VIDEO]

Chris Foot - Thu, 2014-08-14 12:59

Transcript

At RDX, we provide a full suite of BI services that includes data integration (SSIS), analysis and mining of data (SSAS), and scheduled and manual reporting of data in a variety of formats for visual representation (SSRS).

Our SSIS services include extracting, transforming and loading data from any source into a common format that you can easily understand and use to make better business decisions.

We support high volumes of data and automated workflows; we also provide auto-transformations of many kinds, as well as custom coding in C# and VB.NET.

Our SSAS services allow you to choose between a multi-dimensional (cube) and a tabular OLAP – online analytical processing – model to break down the data we've gathered and transition it into your browser of choice for easy, actionable reporting. Our SSRS services come in an array of drill-down and drill-through graphs, charts, and diagrams, so you can make the most of your data, including accessing previously stored reports.

For more details, download our BI whitepaper. We'll see you next time!

 

The post How RDX’s BI services make a difference: Additional Services Series pt. 3 [VIDEO] appeared first on Remote DBA Experts.

My Oracle Support Accreditation Series: New Products Added

Joshua Solomin - Thu, 2014-08-14 11:58


Have you reviewed the latest offerings in the My Oracle Support Accreditation Series? We added several new product tracks, such as PeopleSoft, Business Analytics, and Siebel, designed to increase your expertise with My Oracle Support.

There are now 10 product paths that focus on building skills around best practices, recommendations, and tool enablement—taking your expertise to the next level.



Continue to expand your existing knowledge with best practices, product-based use cases, and recommendations from subject-matter experts. Your accreditation delivers the information you need, focusing on core functions and building skills, to help you better support your Oracle products by leveraging My Oracle Support and the related capabilities, tools, and knowledge that matter for your product path.

Learn more about My Oracle Support Accreditation and explore the new product-specific paths.



Injecting JavaScript into Simplified UI

Oracle AppsLab - Thu, 2014-08-14 11:04

Extensibility is one of the themes we here in Oracle Applications User Experience (@usableapps) advocate, along with simplicity and mobility.

Simplified UI provides a ton of extensible features, from themes, colors and icons to interface and content changes made by Page Composer.

But sometimes you need to inject some JavaScript into Simplified UI, and you just can’t figure out how, like last week for example. Tony and Osvaldo are building one of Noel’s (@noelportugal) crazy ideas, and they needed to do just that. The project? Yeah, it’s a secret for now, but stay tuned.

Anyway, they had been trying for a couple of days, unsuccessfully, to find a way to inject some JS, until I finally decided to ask AUX colleague and extensibility guru, Tim DuBois.

Tim couldn’t recall the source of the method; it might have come from Angelo Santagata (@AngeloSantagata) or possibly from a Cloud partner, but as you’ll see, it’s ingenious.

Whoever discovered this method was clever and tenacious and should get kudos. It’s a nice, easy way to get JS into a Simplified UI page without changing the shell.

Here we go.

From the Simplified UI springboard, Sales Cloud in this example, navigate to a page like Leads and expand the menu next to your username.


At this point, you should create a sandbox to keep your changes isolated, just in case. For more about how and why you want to use sandboxes, check out the documentation.

I didn’t create one in this instance because I’m that confident it works. However, we did use a sandbox when we were testing this.

So, from the expanded menu choose Customize User Interface and pick Site as the target layer.


Click Select from the edit options and choose a component on the page, like a label, in this case “Leads.”


For this exercise, the component you choose doesn’t really matter because we’re just making a placeholder change. All you need is one with an Edit Component option.


Choose Edit Component and modify the value. In this case, we’ll change the text by choosing Select Text Resource from the Value menu and then picking a random key value and entering new label text to display.


Make sure to click Create before leaving this dialog. Upon returning to Page Composer, you’ll see the Leads label has changed. Exit Page Composer.


Once again, expand the menu by your username and choose Manage Customizations.

From the All Layers column, download the XML file.


Edit the XML file and include your JavaScript.


For the record, we found the correct syntax in this forum post. The code should be similar to:

<mds:insert after="outputText1" parent="g1">
  <af:resource xmlns:af="http://xmlns.oracle.com/adf/faces/rich" 
  type="javascript">alert("HELLO WORLD!");</af:resource>
</mds:insert> 

Finally, upload your updated XML using the same Manage Customizations dialog, close and reload the page.


And there you go.

Find the comments if you like.


SQL Tuning Health Check (SQLHC)

DBA Scripts and Articles - Thu, 2014-08-14 09:12

What is SQL Tuning Health Check? The SQL Tuning Health Check is provided by Oracle (Doc ID 1366133.1) in order to check the environment where the problematic SQL query runs. It checks the statistics, the metadata, initialization parameters and other elements that may influence the performance of the SQL being analyzed. The script generates an [...]

The post SQL Tuning Health Check (SQLHC) appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Managing Files with SaltStack

Pythian Group - Thu, 2014-08-14 09:06

Before we begin, take a look at my previous two blog posts, SaltStack for Remote Parallel Execution of Commands and Using SaltStack for Configuration Management.

We manage files using configuration management for much the same reasons that we manage packages and services – we want/need consistency across all of our boxes, and to spend our time on tasks that add more business value than logging into all of our servers to change a line in a config file.

Using Salt, I will show you how to manage the message of the day (MOTD) on the server.

Templates

There are many examples of configuration files which differ only by a few lines between staging and production. A great example of this is a config file which has database host/user/pass information. The drive for consistency tells us that we would like the environment that our release is tested on to match (as closely as possible) the production environment where the code will run.

Using templates allows us to affect the few lines in a configuration file which we would like to change, while leaving the rest alone. This also simplifies the management of our servers and the configuration repository, allowing us to maintain one config file for many servers.

Salt grains

The salt minions know a lot of information about the boxes they run on. Everything from the hostname, to the IP address, to the kernel, and more is available to be queried by the salt master. These pieces of information are called “grains” in the salt world and allow us to insert dynamic variables into our templates.

A great use case for grains would be the expansion of our Apache formula from my last post. On Red Hat-based machines the Apache package is called “httpd”, but on Debian-based machines the package is called “apache2”. Using the “osfamily” grain, we can dynamically redefine the package name for each minion while maintaining a single formula for all servers.
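As a minimal sketch of that idea (assuming the grain is referenced as os_family inside the state file's Jinja context, and using illustrative state names), the formula could pick the package name like this:

# Sketch: choose the Apache package name per minion from the os_family grain.
apache:
  pkg.installed:
    - name: {{ 'httpd' if grains['os_family'] == 'RedHat' else 'apache2' }}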

Likewise, any configuration files which need the current box's IP address can benefit from grains. As each minion installs that configuration file, it will see that the variable needs to be populated with a “grain” and will do so as requested. Rather than maintaining an Apache vhost file for each of your 10 web servers where the only difference is the IP address declaration, you can maintain one dynamic template which ensures that everything else in that config file matches on each of the 10 nodes other than the one thing that needs to be dynamic (the IP address).

Putting it all together – the MOTD

In your /srv/salt dir we are going to create a subdir called motd. Inside of that directory there will be two files: an init.sls, which is the default top-level formula for the directory, and a motd.template file, which is our config file. The init.sls looks like this:

/etc/motd: 
  file.managed: 
    - user: root 
    - group: root 
    - mode: 0644 
    - source: salt://motd/motd.template 
    - template: jinja

For the file /etc/motd we are telling Salt that we want to manage the file, that its owner and group should be root, and that we want the file to have 0644 permissions. We are also letting Salt know that it will find the config file (source) under the motd subdir (salt:// maps to /srv/salt) and that our template will be in the Jinja format.

Our template will look like:

------------------------------------------------------------------------------
Welcome to {{ grains['fqdn'] }}

Server Stats at a Glance:
------------------------

OS: {{ grains['osfullname'] }}
Kernel: {{ grains['kernelrelease'] }}
Memory: {{ grains['mem_total'] }} MB

This server is managed using a configuration management system (saltstack.org).
Changes made to this box directly will likely be over-written by SALT. Instead
modify server configuration via the configuration management git repository.
------------------------------------------------------------------------------

As each minion installs this MOTD file, it will see the variables in use; because they are grains, the minion will know that it has the information required to populate them and will do so on each server. This will give you a final MOTD that looks like this:

[root@ip-10-0-0-172 ~]# cat /etc/motd

------------------------------------------------------------------------------
Welcome to ip-10-0-0-172.ec2.internal

Server Stats at a Glance:
------------------------

OS: Amazon Linux AMI
Kernel: 3.10.42-52.145.amzn1.x86_64
Memory: 996 MB

This server is managed using a configuration management system (saltstack.org).
Changes made to this box directly will likely be over-written by SALT.  Instead
modify server configuration via the configuration management git repository.
------------------------------------------------------------------------------
[root@ip-10-0-0-172 ~]#

As you can see each variable was populated with the information specific to the node.
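Pushing the formula out is a single command from the master; a minimal sketch, assuming the state lives under /srv/salt/motd as described above:

# Apply the motd formula to every minion (target more narrowly as needed).
salt '*' state.sls motd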

If we want to add, remove, or change anything in the MOTD, rather than having to box-walk the entire infrastructure (which, depending on your size, could tie up a resource for days), we can edit the single template file on the master and allow the tool to propagate the change out to the boxes for us, reducing that task from a very boring day (or more) to a few minutes!

Categories: DBA Blogs

How to Configure an Azure Point-to-Site VPN – Part 2

Pythian Group - Thu, 2014-08-14 08:47

This blog post is the second in a series of three which will demonstrate how to configure a Point-to-Site VPN step-by-step. In my first blog post, I demonstrated how to configure a virtual network and a dynamic routing gateway. Today’s post will be about creating certificates.

CREATING CERTIFICATES

At this step, we will create and upload a certificate. This certificate will be used to authenticate the VPN clients, and the work is performed in a few steps:

  • Generate the certificate
  • Upload the root certificate to the Azure Management Portal
  • Generate a client certificate
  • Export and install the client certificate

Let’s start …

  1. We will need to use the MakeCert tool. MakeCert is part of “Microsoft Visual Studio Express” available here.
  2. After successfully downloading the tool, start the setup and follow the installation steps. Note that you can generate this certificate in any computer, not only in the computer where you are configuring the VPN.
    After the installation, you can find MakeCert at:

    • C:\Program Files (x86)\Windows Kits\8.1\bin\x64
    • C:\Program Files (x86)\Windows Kits\8.1\bin\x86
  3. Launch the command prompt as Administrator. Point the path to one of the folders referred to in the previous step and execute the following command (note: keep the command prompt open):
    makecert -sky exchange -r -n "CN=RootCertificateMurilo" -pe -a sha1 -len 2048 -ss My "RootCertificateMurilo.cer"
    (where "RootCertificateMurilo" is the certificate name).
    This command will create and install a root certificate in the Personal certificate store and create the defined RootCertificateMurilo.cer file in the same directory where you are executing the command. Note: Store this certificate in a safe location.
  4. Now, go to the Windows Azure Management Portal https://manage.windowsazure.com/ in order to upload the certificate.
  5. In the networks section, select the previously created network and go to the certificate page.
  6. Click Upload a root certificate, select your certificate, and click the check mark.
    • Depending on the time zone of the server where you created the certificate, you might receive an error message, “The certificate is not valid yet, effective date is [date and time].” To work around this, delete the created certificate and create another one, adding the following parameter (change the date): -b "07/30/2014". It will be valid from 00:00:00 hours on the day you set.
  7. Now we need to create a Client Certificate. We will use the Root Certificate to do this.
    In the same command line window, opened before, execute the following command: makecert.exe -n "CN=ClientCertificateMurilo" -pe -sky exchange -m 96 -ss My -in "RootCertificateMurilo" -is my -a sha1. This certificate will be stored in your personal certificate store. (Both MakeCert commands are collected in the sketch after this list.)
  8. Now we need to export this certificate, as it should be installed on each computer that needs to connect to the virtual network. To achieve this, enter the command “mmc”, still in the opened command line. The following window will be shown:
    • Go to File->Add/Remove Snap-in.
    • Select “Certificates” and click on “Add >”.
    • Select My user account and click Finish.
    • Click OK in the remaining window.
    • Now you will be able to see your certificates under the “Personal\Certificates” folder:
  9. To export the certificate, right-click the Client certificate and click on “All Tasks->Export…”, as shown:
  10. A wizard will be presented. Choose Yes, export the private key and click Next.
  11. Leave this as default, and click Next.
  12. Choose a strong password (try to remember this) and click Next.
  13. Now you need to set the path to store your .pfx file.
  14. Click Next, then Finish.
  15. To finalize the “Certificates part”, we will need to install the certificate on all the servers where we want to set up the VPN. To accomplish this, you just need to:
    • Copy the exported .pfx file (step 13) to all the servers.
    • Double-click the pfx on all the servers.
    • Enter the password.
    • Proceed with the installation, maintaining the default location.
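For quick reference, here are the two MakeCert commands from steps 3 and 7 above, collected in one place (the certificate names are the example names used in this walkthrough):

REM Root certificate (step 3): created in the Personal store and written to RootCertificateMurilo.cer
makecert -sky exchange -r -n "CN=RootCertificateMurilo" -pe -a sha1 -len 2048 -ss My "RootCertificateMurilo.cer"

REM Client certificate (step 7): signed by the root certificate above and stored in the Personal store
makecert.exe -n "CN=ClientCertificateMurilo" -pe -sky exchange -m 96 -ss My -in "RootCertificateMurilo" -is my -a sha1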

Stay tuned for my next blog post on how to configure the VPN client.

Categories: DBA Blogs

Michael Abbey: Still Presenting After All These Years

Pythian Group - Thu, 2014-08-14 07:50

A cool, wintery day in late 1989. This kid’s working for the Office of the Auditor General of Canada. I’d been working with Oracle and was in my fourth year. I had cut my teeth on 6.0.27.9.4 after first seeing V3 some four years prior. I stumbled across a well-placed ad for a show happening in Anaheim USA in September 1990. I’ve got the bug. I apply to go to the show and am told by my employer, “Just a sec, David and I were thinking of going to that show – let us get back to you.” Some three weeks later I am told it’s a go.

I am off to sunny California for six wonderful days of International Oracle User Week (IOUW); this was a joint effort put on by Oracle and the International Oracle User Group (IOUG). I had spent the better part of the summer of 1969 in southern Cali so this was shaping up to be a resurrection. I toddle off to Cali and have a wonderful time. It’s magic – such a learning opportunity. I even came away knowing how to place a database in archivelog mode. I was so pleased with myself and got to meet one of my heroes. I had only been working with the software for 4 years, but already knew of Ken Jacobs (a.k.a. Dr. DBA).

I had the bug to present almost from day one. I saw an ad in one of the bazillion pieces of paper I brought home from that IOUW about a show in DC – Sheraton Woodley Park to be exact. I don’t even think that it exists anymore. I figured I’d attend ECO then present an abstract for IOUW 1991 in Miami. Some of the history is described in a blog post I made in 2013 located here. Enough said about that. It was quite a whirlwind of activity on the presentation circuit in those days. Starting in 1992 I became very active in the IOUG holding a handful of board positions up to the 2006 or maybe 2007 time frame. I attended a gazillion conferences in those days and the pinnacle was a show in Philly in 1995. I had been on the board of the IOUW for a few years and the paid attendance count at that show was staggering. Chubby Checker played at the big bash and arrangements were made for me to sit in on the bass guitar for the Twist. That got cancelled at the last minute but it was close. My paper was in one of the biggest rooms in the convention centre. There were over 1,500 people in attendance and it was intoxicating. I was pleased when I got my evals to find out the attendees were as pleased as I was. It was all (or close to all) about the CORE database technology in those days. In 1995, Oracle7 was the hot item having been on the street for over 3 years.

As guests of Oracle, a handful of us had the pleasure of attending the launch of Oracle7 at the Hudson Theatre in the Hotel Macklowe on 44th St. in beloved NYC. We were thrilled to be very close in those days to Ray Lane, then President of Oracle Corp., and we introduced Ray to a lot of his direct reports at that “party.” A mere four years later we were back for the release launch of Oracle8 at Radio City Music Hall. Again, a pleasant time was had by all. There turned out to be surprisingly little coverage/mention of Oracle8 at that event. It was more concentrated on the Oracle Network Computer (NC), designed to bring computing power to every desktop at a low cost. Once during that Oracle8 launch, the operator of the boom mic in the pit swept the stage to get from one side to the other and almost hit LJE in the side of the head. I think I was the only one who heard what Larry said – “Watch out Bill.” Does anyone get the reference but me?

My torrid Oracle technology career was just that. Between 1991 and the date of this post I have probably given over 100 papers at shows from Ottawa to Hyderabad, Brighton to San Diego, and Vienna to Addis Ababa. There is still a voracious hunger out there for the heart of my expertise – anything that starts with an “O” and ends in an “E” and has the word database tagged on the end. After becoming very close to some of the kernel developers at Oracle, we discussed how they were still in the middle of their workday when the Loma Prieta quake hit in October 1989. Me and a few close friends hung out with the guys whose names litter the bottom of the “this change was done when” section of the ?/rdbms/admin directory on Oracle database software installs. We were in David Anderson’s office schmoozing and asked what he happened to be up to that day. He was ftp’ing source code from a VAX to a Sun box in preparation for the base-platform change that happened in the early 1990s. It was a magic carpet ride.

In some ways it still is. To finish off this year I am appearing at:

  • OOW (Oracle Open World) in San Francisco – September 29-October 2
  • ECO (East Coast Oracle) event in Raleigh/Durham – November 3-5
  • MOUS (Michigan Oracle User Summit) in Livonia – November 13
  • UKOUG in Liverpool – December 8-10

My personal top 10 moments (actually top 11 – the exchange rate) in my still developing tech career you say … drum roll:

Rank  Event                                                              Date
11    First ever tech show                                               1990
10    Longest lasting tech contact – Yossi Amor                          25 years
9     Number of IOUG yearly events attended                              23
8     Books published in Oracle Press series (including translations)    42
7     Most attendees at a presentation – 1500 (Philadelphia)             1995
6     Fewest attendees at a presentation – 1                             2013
5     Most exciting event attended – CODA in Burlingame CA               1993
4     First PL/SQL code block to compile – Oracle7                       1993
3     Favourite version of SQL*Forms – 2.3                               1993
2     First got hands wet with this famous technology – 5.1.22           1986
1     Biggest thrill – the rush of speaking to live audiences            1991-??
Categories: DBA Blogs

“Freeing business analysts from IT”

DBMS2 - Thu, 2014-08-14 06:21

Many of the companies I talk with boast of freeing business analysts from reliance on IT. This, to put it mildly, is not a unique value proposition. As I wrote in 2012, when I went on a history of analytics posting kick,

  • Most interesting analytic software has been adopted first and foremost at the departmental level.
  • People seem to be forgetting that fact.

In particular, I would argue that the following analytic technologies started and prospered largely through departmental adoption:

  • Fourth-generation languages (the analytically-focused ones, which in fact started out being consumed on a remote/time-sharing basis)
  • Electronic spreadsheets
  • 1990s-era business intelligence
  • Dashboards
  • Fancy-visualization business intelligence
  • Planning/budgeting
  • Predictive analytics
  • Text analytics
  • Rules engines

What brings me back to the topic is conversations I had this week with Paxata and Metanautix. The Paxata story starts:

  • Paxata is offering easy — and hopefully in the future comprehensive — “data preparation” tools …
  • … that are meant to be used by business analysts rather than ETL (Extract/Transform/Load) specialists or other IT professionals …
  • … where what Paxata means by “data preparation” is not specifically what a statistician would mean by the term, but rather generally refers to getting data ready for business intelligence or other analytics.

Metanautix seems to aspire to a more complete full-analytic-stack-without-IT kind of story, but clearly sees the data preparation part as a big part of its value.

If there’s anything new about such stories, it has to be on the transformation side; BI tools have been helping with data extraction since — well, since the dawn of BI. The data movement tool I used personally in the 1990s was Q+E, an early BI tool that also had some update capabilities.* And this use of BI has never stopped; for example, in 2011, Stephen Groschupf gave me the impression that a significant fraction of Datameer’s usage was for lightweight ETL.

*Q+E came from Pioneer Software, the original predecessor of Progress DataDirect, which first came to fame in association with Microsoft Excel and the invention of ODBC.

More generally, I’d say that there are several good ways for IT to give out data access, the two most obvious of which are:

  • “Semantic layers” in BI tools.
  • Data copies in departmental data marts.

If neither of those works for you, then most likely either:

  • Your problem isn’t technology.
  • Your problem isn’t data access.

And so we’ve circled back to what I wrote last month:

Data transformation is a better business to enter than data movement. Differentiated value in data movement comes in areas such as performance, reliability and maturity, where established players have major advantages. But differentiated value in data transformation can come from “intelligence”, which is easier to excel in as a start-up.

What remains to be seen is whether and to what extent any of these startups (the ones I mentioned above, or Trifacta, or Tamr, or whoever) can overcome what I wrote in the same post:

When I talk with data integration startups, I ask questions such as “What fraction of Informatica’s revenue are you shooting for?” and, as a follow-up, “Why would that be grounds for excitement?”

It will be interesting to see what happens.

Categories: Other

Developing and Deploying Self-Service Solutions

WebCenter Team - Thu, 2014-08-14 06:00

Guest blog by Geoffrey Bock

How Oracle WebCenter Customers Build Digital Businesses: Developing and Deploying Self-Service Solutions
Geoffrey Bock, Principal, Bock & Company

Beyond the First Generation

As I described in my last blog post, "Designing for the Experience-Driven Enterprise", many of the WebCenter customers I spoke to are focusing their design efforts on the experience-driven enterprise. They are contending with digital disruption by not only replacing their legacy systems but also by restructuring and extending their enterprise applications. In fact, there is a renewed emphasis on self-service solutions.
Of course self-service is a long-standing goal for doing business over the web. But first generation solutions simply augment existing enterprise activities. For instance, many companies introduced self-service HR portals over a decade ago, enabling employees to update their profile and benefits information on their own,  rather than completing printed forms or calling HR staffers. While the tasks did not change, the people doing the work did.
Now it’s time to develop truly digital self-service solutions that do more than simply digitize these analog activities.

A Catalyst for Digital Business Transformation

As they become digital businesses, companies are engaged in new efforts that leverage the capabilities of a next-generation enterprise platform. Companies expect to transform how they do business, and deliver self-service solutions that are impossible to achieve without a truly digital application infrastructure. When in search of a starting point, begin with an enterprise portal and make it more relevant for solving business tasks.
Many of the business and IT leaders I interviewed are focusing on three interrelated goals.
  • Continuing to empower end users and operational business units by reducing the necessity of IT support for maintaining enterprise applications.
  • Collecting and organizing disparate strands of information into digital hubs that support business tasks.
  • Restructuring business processes to take advantage of end-to-end digital activities.
With a renewed emphasis on self-service, these leaders can consolidate disparate web sites and applications into a series of task-oriented solutions. For instance, one firm restructures its marketing activities through a customer-experience portal where marketers easily access all resources and assets for managing campaigns and measuring results. Another firm aggregates information from machines in a laboratory that are equipped with an array of sensors, and proactively manages maintenance based on the results.

Investing in the Underlying Resources

From my perspective, the mobile journey leads to these next-generation solutions. As they rebuild the underlying platforms powering their enterprise applications, IT leaders are defining the essential services within a services-oriented architecture (SOA). It’s important to invest the time and resources to get them right. It’s also essential to define the underlying information architecture, including the metadata definitions and tag-sets essential for dynamic content delivery. Line-of-business leaders should support these IT and content management efforts.

Mobile apps are the catalyst for the digital business transformation. Both business and IT leaders need to rethink how they want to do business, enhance, extend, and replace their first-generation self-service initiatives, and become truly digital businesses.


OTN Latin America Tour 2014 – Mexico

Oracle AppsLab - Wed, 2014-08-13 12:19


The OTN network is designed to help Oracle users with community-generated resources. Every year the OTN team organizes worldwide tours that allow local users to learn from subject matter experts in all things Oracle. For the past few years the UX team has been participating in the OTN Latin America Tour, as well as tours in other regions. This year I was happy to accept their invitation to deliver the opening keynote for the Mexico City tour stop.

The keynote title was “Wearables in the Enterprise: From Internet of Things to Google Glass and Smart Watches.” Given the AppsLab charter and reputation for cutting-edge technologies and innovation, it was really easy to put together a presentation deck on our team’s findings on these topics. The presentation was a combination of the keynote given by our VP, Jeremy Ashley, during MakerCon 2014 at Oracle HQ this past May and our proof-of-concepts using wearable technologies.


I also had a joint session with my fellow UX team member Rafael Belloni titled “Designing Tablet UIs Using ADF.” Here we had the chance to share how users can leverage two great resources freely available from our team:

  1. Simplified User Experience Design Patterns for the Oracle Applications Cloud Service (register to download e-book here)
  2. A starter kit with templates used to build Simplified UI interfaces (download kit here)
    *Look for “Rich UI with Data Visualization Components and JWT UserToken validation extending Oracle Sales Cloud – 1.0.1”

These two resources are the result of extensive research done by our whole UX organization and we are happy to share with the Oracle community. Overall it was a great opportunity to reach out to the Latin American community, especially my fellow Mexican friends.

Here are some pictures of the event and of Mexico City. Enjoy!

 

Photo credits to Pablo Ciccarello, Plinio Arbizu, and me.

Geared Up for Oracle Sales Cloud Customers at Oracle OpenWorld 2014

Linda Fishman Hoyle - Wed, 2014-08-13 08:53

A Guest Post by Michael Richter, Director, Product Management, Oracle Sales Cloud
(pictured left)

Oracle OpenWorld 2014 is the place to be for sales professionals to stay informed, learn, and network. This year promises to be the best ever!

What’s New and Different?
We are centralizing all Oracle Sales Cloud conference sessions and the Oracle Sales Cloud demonstration zone on the second floor of Moscone West. It is called Sales @ CX Central and will be the headquarters for all things CX Sales. Registered participants can attend sessions and immediately experience live demonstrations on the same topic conveniently located on the same floor. This will help participants easily connect with Oracle product experts, implementation partners, and customers with similar interests and challenges.

Kick-Off General Session in Moscone West
On Tuesday, Scott Creighton, VP of Product Management, will announce the latest innovations for Release 9 at 10:00 a.m., Room 2001. Hitachi Consulting, sponsoring partner, will join Creighton on stage.

Conference Sessions
Oracle Sales Cloud will host more than 50 conference sessions this year. A series of Release 9 Essentials sessions, led by the Oracle Sales Cloud product management team, will include overviews of the latest product enhancements and roadmap, case studies, demonstrations, and shared insights by customers and partners on these topics:

  • Core Sales Force Automation
  • Sales Analytics
  • Sales Performance Management
  • Mobile
  • Financial Services B2B Customers
  • How to Customize, Extend and Integrate with Oracle Sales Cloud

Of Special Interest
We will have conference sessions, panels and demonstrations on these topics:

  • Integrating Sales Cloud with On-Premises Solutions or Other CX Pillars by Panasonic UK, Multi-Color Corporation, and Altec, who have all extended their capabilities through integrations or by using Oracle’s platform
  • Partner Relationship Management by Infosys and Oracle
  • Smartphones and Tablets is a session for customers looking to equip their sales force for increased productivity with smartphones or to run their business using tablets

Meet the Experts
Conference goers can meet informally with Oracle experts in Room 2001A at Moscone West on these topics:

  • Core SFA―Wednesday at 12:45 p.m.
  • PaaS and Configuration―Wednesday at 3:15 p.m.
  • Smartphones―Thursday at 10:15 a.m.

Customer Panels
We have scheduled two popular customer panels in Moscone West:

  • Wednesday at 10:30 a.m., Room 2003―Evaluating a CRM System
  • Thursday at 2:00 p.m., Room 2001―Sales Productivity by Batesville, Acorn Paper Products, and Oracle

Partner Panels
The following three panels of implementation experts include Capgemini, Config Consultants, WhiteLight Product Group, Hitachi, BizTech, Accenture, Steria Mummert Consulting, Apex IT, Conemis AG, ec4u, and Boxfusion:

  • Thursday, 10:15 a.m., Room 2003―Extending Siebel and other on-premises systems
  • Thursday 11:30 a.m., Room 2003―Migration
  • Thursday 1:00 p.m., Room 2003―Mobile and Analytics

Customer Events
Finally, an Oracle OpenWorld preview for Oracle Sales Cloud would not be complete without mention of customer events:

  • Sunday―the annual Oracle Sales Cloud Customer Advisory Board meeting
  • Monday―a Customer Connect member reception
  • Tuesday―the CX Central Fest @ OpenWorld, featuring the electronic pop group Capital Cities

A Different Theme Every Day
Each day the CX Central zone will have a different theme:

  • Monday―Industry solutions by customers
  • Tuesday―Roles, thought leadership, and innovation, which includes the six Release 9 Essentials sessions and a presentation by industry analyst Rebecca Wettemann of Nucleus Research
  • Wednesday and Thursday―A multitude of different CX and on-premises integration scenarios, with kiosks staffed by product managers and sales consulting experts

At a Glance

Oracle OpenWorld is always over the top. This year will be no different, especially for our Oracle Sales Cloud customers. And remember, it’s all happening on the second floor of Moscone West, September 28 - October 2, 2014.

Michael Richter
Director, Product Management
Oracle Sales Cloud

WebCenter SIG - All Things WebCenter Conference Survey

WebCenter Team - Wed, 2014-08-13 06:00
WebCenter SIG

Are you a WebCenter customer? Are you interested in learning more about the WebCenter product suite? Are you interested in attending a conference focused on the WebCenter product suite to get information, training and networking with your peers?

The WebCenter Special Interest Group (SIG) is considering creating a conference solely for the WebCenter product suite. However, we need to know what your (the WebCenter Community's) interest in attending this conference would be. So we are sending out this message and asking all customers to take this short survey of 5 minutes or less. Your participation will help us to decide whether or not developing this conference will benefit the WebCenter community.

You are not obligated to disclose any contact information unless you choose. We simply want your opinion about the conference. Thanks for taking 5 minutes to fill out this survey.
 CLICK HERE TO RESPOND TO SURVEY TODAY!

NOTE: This WebCenter SIG is not run or managed by Oracle. This posting is a courtesy to the community.

Can you handle big data? Oracle may have an answer

Chris Foot - Wed, 2014-08-13 01:33

Now more than ever, database administration services are providing their clients with the expertise and software required to support big data endeavors. 

They haven't necessarily had much of a choice. Businesses need environments such as Hadoop to store the large amount of unstructured data they strive to collect and analyze to achieve insights regarding customer sentiment, procurement efficiencies and a wealth of other factors. 

Oracle's assistance 
According to PCWorld, Oracle recently released a software tool capable of querying Hadoop and NoSQL (Not Only SQL) environments. The solution is an add-on for the company's Big Data Appliance, a data center rack comprised of its Sun x86 servers programmed to run Cloudera's Hadoop distribution.

In order for businesses to benefit from the simplicity of Big Data SQL, the source noted they must have a 12c Oracle database installed on the company's Exadata database machine. This allows Exadata and the x86 Big Data Appliance configuration to share an interconnect for data exchange. 

Assessing a "wider problem"
Oracle Vice President of Product Development Neil Mendelson asserted the solution wasn't created for the purpose of replacing existing SQL-on-Hadoop engines such as Hive and Impala. Instead, Mendelson maintained that Big Data SQL enables remote DBA experts to query a variety of information stores while moving a minimal amount of data.

This means organizations don't have to spend the time or network resources required to move large troves of data from one environment to another, because Smart Scan technology is applied to conduct filtering on a local level.

InformationWeek contributor Doug Henschen described Smart Scan as a function that combs through data on the storage tier and identifies what information is applicable to the submitted query. Oracle Product Manager Dan McClary outlined an example of how it could be used (a rough SQL sketch of the idea follows the list):

  • A data scientist wants to compare and contrast Twitter data in Hadoop with customer payment information in Oracle Database
  • Smart Scan percolates Tweets that don't have translatable comments and eliminates posts without latitude and longitude data
  • Oracle Database then receives one percent of the total Twitter information in Hadoop
  • A visualization tool identifies location-based profitability based on customer sentiment
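In purely illustrative SQL terms (the table and column names below are hypothetical, not from Oracle's example), the end result is that Hadoop-resident data can be queried and joined like any other table, with Smart Scan doing the heavy filtering on the Hadoop side:

-- Hypothetical sketch: join filtered Twitter data (exposed from Hadoop via
-- Big Data SQL) with payment data stored in Oracle Database.
SELECT c.customer_id, t.sentiment_score, t.latitude, t.longitude
FROM   customer_payments c
JOIN   tweets_hdfs t ON t.customer_id = c.customer_id
WHERE  t.latitude IS NOT NULL
  AND  t.longitude IS NOT NULL;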

Reducing risk 
In addition, Oracle allows DBA services to leverage authorizations and protocols to ensure security is maintained when Hadoop or NoSQL is accessed. For instance, when a professional is assigned the role of "analyst" he or she has permission to query the big data architectures, while those who lack permission cannot. 

The post Can you handle big data? Oracle may have an answer appeared first on Remote DBA Experts.

Dept/Emp POJO's with sample data for Pivotal GemFire

Pas Apicella - Tue, 2014-08-12 21:57
I constantly blog about using DEPARTMENT/EMPLOYEE POJOs with sample data. Here is how to create the files with data to load into GemFire to give you that sample set.

Note: You would need to create POJOs for the Department/Employee objects that have getters/setters for the attributes mentioned below.

Dept Data

put --key=10 --value=('deptno':10,'name':'ACCOUNTING') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=20 --value=('deptno':20,'name':'RESEARCH') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=30 --value=('deptno':30,'name':'SALES') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=40 --value=('deptno':40,'name':'OPERATIONS') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;

Emp Data

put --key=7369 --value=('empno':7369,'name':'SMITH','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7370 --value=('empno':7370,'name':'APPLES','job':'MANAGER','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7371 --value=('empno':7371,'name':'APICELLA','job':'SALESMAN','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7372 --value=('empno':7372,'name':'LUCIA','job':'PRESIDENT','deptno':30) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7373 --value=('empno':7373,'name':'SIENA','job':'CLERK','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7374 --value=('empno':7374,'name':'LUCAS','job':'SALESMAN','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7375 --value=('empno':7375,'name':'ROB','job':'CLERK','deptno':30) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7376 --value=('empno':7376,'name':'ADRIAN','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7377 --value=('empno':7377,'name':'ADAM','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7378 --value=('empno':7378,'name':'SALLY','job':'MANAGER','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7379 --value=('empno':7379,'name':'FRANK','job':'CLERK','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7380 --value=('empno':7380,'name':'BLACK','job':'CLERK','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7381 --value=('empno':7381,'name':'BROWN','job':'SALESMAN','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;

Load into GemFire (assumes the JAR containing the POJOs exists in the classpath of the GemFire cache servers)

The script below uses GFSH to load the files into the correct regions, referencing the correct POJO classes inside the files created above.

export CUR_DIR=`pwd`

gfsh <<!
connect --locator=localhost[10334];
run --file=$CUR_DIR/dept-data
run --file=$CUR_DIR/emp-data
!

Below is what the Department.java POJO would look like, for example.
  
package pivotal.au.se.deptemp.beans;

public class Department
{
    private int deptno;
    private String name;

    public Department()
    {
    }

    public Department(int deptno, String name) {
        super();
        this.deptno = deptno;
        this.name = name;
    }

    public int getDeptno() {
        return deptno;
    }

    public void setDeptno(int deptno) {
        this.deptno = deptno;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Department [deptno=" + deptno + ", name=" + name + "]";
    }

}
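For completeness, a matching Employee POJO might look like the following minimal sketch; the field names mirror the attributes used in the emp-data file above (empno, name, job, deptno), and everything else is analogous to Department.

package pivotal.au.se.deptemp.beans;

// Sketch of an Employee POJO matching the emp-data attributes above.
public class Employee
{
    private int empno;
    private String name;
    private String job;
    private int deptno;

    public Employee()
    {
    }

    public Employee(int empno, String name, String job, int deptno) {
        super();
        this.empno = empno;
        this.name = name;
        this.job = job;
        this.deptno = deptno;
    }

    public int getEmpno() { return empno; }
    public void setEmpno(int empno) { this.empno = empno; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getJob() { return job; }
    public void setJob(String job) { this.job = job; }

    public int getDeptno() { return deptno; }
    public void setDeptno(int deptno) { this.deptno = deptno; }

    @Override
    public String toString() {
        return "Employee [empno=" + empno + ", name=" + name + ", job=" + job + ", deptno=" + deptno + "]";
    }
}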
Categories: Fusion Middleware

Deploying an SQL Server database to Azure

Yann Neuhaus - Tue, 2014-08-12 18:46

Deploying an SQL Server database to a Windows Azure virtual machine is a feature introduced with SQL Server 2014. It can be useful for an organization that wants to reduce its infrastructure management, simplify deployment, or provision virtual machines quickly.

 

Concept

This new feature is a wizard which allows you either to copy or to migrate an On-Premise SQL Server database to a Windows Azure virtual machine.

The main process of the feature is as follows:

An existing SQL Server instance is present on an On-Premise machine of an organization, hosting one or several user databases.

Once the new feature is used, a copy of the on-premise database will be available on the SQL Server instance in the Cloud.

 

Prerequisites

Azure Account Creation

Obviously, the first requirement is an Azure account! To create an Azure account, go to the official Azure website.

 

Azure Virtual Machine Creation

Once the Azure account has been set, a Windows Azure virtual machine has to be created.

There are two ways to create a Virtual Machine: by a "Quick Create" or by a "Custom Create". It is recommended to perform a "Custom Create" (from gallery) because it offers more flexibility and control over the creation.

 


 

This example is done with “FROM GALLERY”. So the creation of the virtual machine will be performed with a wizard in four steps.

 

The first step of the wizard is the selection of the virtual machine type.

Contrary to common preconceptions, Windows Azure offers a large selection of virtual machines, which do not only come from the Microsoft world.

See more details about Virtual Machines on the Microsoft Azure website.


 


The targeted feature is only available with SQL Server 2014, so a virtual machine including SQL Server 2014 has to be created. This example is made with the Standard edition in order to work with the more restrictive edition of SQL Server 2014.

The first step of the wizard is configured as follows: 



For this example, default settings are used. Indeed the configuration of the virtual machine is not the main topic here, and can change depending on one’s need.

The release date from June is selected, so the SP1 will be included for SQL Server 2014.

The "Standard Tier" is selected so load-balancer and auto-scaling will be included. See more details about Basic and Standard Tier on the Microsoft Azure website.

The virtual machine will run with 2 cores and 3.5 GB memory, which will be enough for this demo. See more details about prices and sizes on the Microsoft Azure website.

The virtual machine is named “dbiservicesvm” and the first (admin) user is “dbiadmin”.

 

The second step of the wizard is configured as follows:

 


 

The creation of a virtual machine in Azure requires a cloud service, which is a container for virtual machines in Azure. See more details about Cloud Service on the Microsoft Azure website.

Furthermore, a storage account ("dbiservices") and an affinity group ("IT") are also required to store the disk files of the virtual machine. To create a storage account and an affinity group, see the Azure Storage Account Creation part from a previous blog.

 

The third step of the wizard is configured as follows:

 


 

This screen offers the possibility to install extensions for the virtual machine. Virtual machine extensions simplify the virtual machine management. By default, VM Agent is installed on the virtual machine.

The Azure VM Agent will be responsible for installing, configuring and managing VM extensions on the virtual machine.

For this example, VM extensions are not required at all, so nothing is selected.

 

Security

At the SQL Server level, an On-Premise SQL user with the Backup Operator privilege is required on the database targeted for deployment to Azure. Of course, the user must be mapped to a SQL Server login to be able to connect to the SQL Server instance.

If the user does not have the Backup Operator privilege, the process will be blocked at the database backup creation step:

 


 

Moreover, an Azure account with the Service Administrator privilege linked to the subscription containing the Windows Azure virtual machine is also required. Without this account, it is impossible to retrieve the list of Windows Azure virtual machines available on the Azure account, including the SQL Server Windows Azure virtual machine created previously.

Finally, the endpoint of the Cloud Adapter must be configured to access the virtual machine during the execution of the wizard. If not, the following error message will occur:

 

PrtScr-capture_201.png

 

The Cloud Adapter is a service which allows the On-Premise SQL Server instance to communicate with the SQL Server Windows Azure virtual machine. More details about the Cloud Adapter for SQL Server are available on TechNet.

To configure the Cloud Adapter port, the Azure administrator needs to access the Endpoints page of the SQL Server Windows Azure virtual machine on the Azure account. Then, a new endpoint needs to be created as follows:

 

PrtScr-capture_201_20140730-122732_1.png

 

Now, the SQL Server Windows Azure virtual machine accepts connections on the Cloud Adapter port.

 

The credentials of a Windows administrator on the Windows Azure virtual machine are also required to connect to the virtual machine in Azure. This administrator also needs a SQL Server login. If these two requirements are not met, the following error message will occur:

 

PrtScr-capture_201_20140730-122929_1.png

 

The login created in the SQL Server instance in Azure must also have the dbcreator server role, otherwise the following error message will occur:

 

PrtScr-capture_201_20140730-123033_1.png
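
For illustration, a minimal T-SQL sketch of this configuration on the SQL Server instance in the Azure virtual machine, assuming the Windows administrator from this example, "dbiservicesvm\dbiadmin" (depending on the image, the login may already exist):

    -- On the SQL Server instance in the Azure virtual machine:
    -- create a login for the Windows administrator used by the wizard
    USE [master];
    GO
    CREATE LOGIN [dbiservicesvm\dbiadmin] FROM WINDOWS;
    -- The login must be able to create the deployed database
    ALTER SERVER ROLE [dbcreator] ADD MEMBER [dbiservicesvm\dbiadmin];
    GO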

 

 

Deploy a database to the Windows Azure VM wizard

Launch the wizard

The wizard can be found with a right-click on the targeted database, then "Tasks" and finally "Deploy Database to a Windows Azure VM…"

 

PrtScr-capture_201_20140730-132221_1.png

 

In fact, it does not matter from which user database the wizard is launched, because the wizard will ask to connect to an instance and then to select a database.

 

Use the wizard

The first step is an introduction to the wizard and is of little interest.

 

The second step needs three fields: the SQL Server instance, the SQL Server database and the folder to store the backup files.

 

PrtScr-capture_201_20140730-132446_1.png

 

The "SQL Server" field is the SQL Server instance hosting the SQL Server database which is planned to be deployed to Azure.

The "Select Database" field must obviously reference the database to deploy.

 

The third step needs two fields: the authentication to the Azure account and the selection of the subscription.

 

PrtScr-capture_201_20140730-132747_1.png

 

After clicking on the Sign in… button, a pop-up will ask for the credentials of an administrator of the Azure account.

As soon as the credentials are entered to connect to the Azure account, a certificate will be generated.

If several subscriptions are linked to the Azure account, select the correct subscription ID. In this example, there is only one subscription linked to the Azure account.

For more information, see the difference between an Azure account and a subscription on TechNet.

 

The fourth step of the wizard is divided into several parts. First, there is the virtual machine selection, as follows:

 

PrtScr-capture_201_20140730-133036_1.png

 

The cloud service ("dbiservicescloud") needs to be selected. Then, the virtual machine ("dbiservicesvm") can be selected.

 

Credentials to connect to the virtual machine must be provided, as follows:

 

PrtScr-capture_201_20140730-133421_1.png

 

The SQL Server instance name and the database name need to be filled in, as follows:

 

PrtScr-capture_201_20140730-133756_1.png

 

Finally, all the information for the wizard is filled in and the deployment can be launched. The deployment finished successfully!

 

After the deployment

First, the On-Premise database used for the deployment is still present on the SQL Server instance.

 

PrtScr-capture_201_20140730-134024_1.png

 

Then, the folder used to store the temporary file is still present. In fact, this temporary file is a "bak" file, because the mechanism behind the wizard is a simple backup and restore of the database from the On-Premise SQL Server instance to the Azure SQL Server instance.

 

PrtScr-capture_201_20140730-134116_1.png
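
For reference, a simplified T-SQL sketch of this underlying mechanism; the database name, logical file names and paths are purely illustrative, and the wizard's exact commands may differ:

    -- 1. On the On-Premise instance: back up the database to the folder chosen in the wizard
    BACKUP DATABASE [DemoDb]
    TO DISK = N'C:\Temp\DemoDb.bak';

    -- 2. On the SQL Server instance in the Azure virtual machine: restore the transferred backup
    RESTORE DATABASE [DemoDb]
    FROM DISK = N'C:\Temp\DemoDb.bak'
    WITH MOVE N'DemoDb' TO N'F:\Data\DemoDb.mdf',
         MOVE N'DemoDb_log' TO N'F:\Log\DemoDb_log.ldf';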

 

So do not forget to delete the bak file after the deployment, because for big databases this file unnecessarily fills your storage space.

 

Finally, the deployed database can be found on the Azure SQL Server instance!

 

PrtScr-capture_201_20140730-134220_1.png

Limitations

The first limitation of this new feature is the database size: it cannot exceed 1 TB.

Moreover, it does not support hosted services that are associated with an Affinity Group.

The new feature does not allow all versions of SQL Server databases to be deployed: only databases from SQL Server 2008 or higher are supported.
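
Two of these constraints can be checked quickly with T-SQL before launching the wizard; the database name "DemoDb" is illustrative, and the compatibility level only gives an indication of the database version:

    -- Total size of the database files in GB (must stay below 1 TB)
    SELECT DB_NAME(database_id) AS database_name,
           SUM(size) * 8.0 / 1024 / 1024 AS size_gb
    FROM sys.master_files
    WHERE database_id = DB_ID(N'DemoDb')
    GROUP BY database_id;

    -- Compatibility level (100 corresponds to SQL Server 2008, the minimum supported)
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = N'DemoDb';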

 

Similar features

If, for any reason, this new feature does not meet the organization’s needs, two similar features exist.

 

SQL database in Azure

This feature, introduced with SQL Server 2012, is quite similar to the feature presented in this blog, because the SQL Server database is also hosted on Azure, as with the Windows Azure virtual machine.

However, the infrastructure management is much more limited: no virtual machine management, nor SQL Server management. A good comparison between these two functionalities is available: Choosing between SQL Server in Windows Azure VM & Windows Azure SQL Database.

More details about SQL Database on Azure.

 

SQL Server data files in Azure

This is a new feature introduced with SQL Server 2014, which allows storing the data files of a SQL Server database in Azure Blob storage while the SQL Server instance runs On-Premise.

It simplifies the migration process, reduces the On-Premise storage space and its management, and simplifies High Availability and recovery solutions…
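
As an illustration, a minimal T-SQL sketch of this feature; the storage account, container, database name and Shared Access Signature are placeholders to adapt:

    -- Credential whose name matches the URL of the blob container
    CREATE CREDENTIAL [https://dbiservices.blob.core.windows.net/datafiles]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<shared access signature of the container>';
    GO

    -- Database created from the On-Premise instance with its files stored in Azure Blob storage
    CREATE DATABASE [DemoDb]
    ON (NAME = DemoDb_data,
        FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/DemoDb_data.mdf')
    LOG ON (NAME = DemoDb_log,
            FILENAME = 'https://dbiservices.blob.core.windows.net/datafiles/DemoDb_log.ldf');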

More details about SQL Server data files in Azure on TechNet.

 

Conclusion

With this feature, Microsoft simplifies the SQL Server database deployment process from On-Premise to Azure.

Azure is a rather attractive and interesting tool that is highly promoted by Microsoft.

Pilots: Too many ed tech innovations stuck in purgatory

Michael Feldstein - Tue, 2014-08-12 13:44

Steve Kolowich wrote an article yesterday in the Chronicle that described the use of LectureTools, a student engagement and assessment application created by faculty member Perry Sampson at the University Michigan. These two paragraphs jumped out at me.

The professor has had some success getting his colleagues to try using LectureTools in large introductory courses. In the spring, the software was being used in about 40 classrooms at Michigan, he says.

Adoption elsewhere has been scattered. In 2012, Mr. Samson sold LectureTools to Echo360[1], an education-technology company, which has started marketing it to professors at other universities. The program is being used in at least one classroom at 1,100 institutions, according to Mr. Samson, who has kept his title of chief executive of LectureTools. But only 80 are using the software in 10 or more courses.

93% of LectureTools clients use the tool in fewer than 10 courses total, meaning that the vast majority of customers are running pilot projects almost two years after the company was acquired by a larger ed tech vendor.

We are not running out of ideas in the ed tech market – there are plenty of new products being introduced each year. What we are not seeing, however, are ed tech innovations that go beyond a few pilots in each school. Inside Higher Ed captured this sentiment when quoting a Gallup representative after the GSV+ASU EdInnovations conference this year:

“Every one of these companies has — at least most of them — some story of a school or a classroom or a student or whatever that they’ve made some kind of impact on, either a qualitative story or some real data on learning improvement,” Busteed said. “You would think that with hundreds of millions of dollars, maybe billions now, that’s been plowed into ed-tech investments … and all the years and all the efforts of all these companies to really move the needle, we ought to see some national-level movement in those indicators.”

In our consulting work Michael and I often help survey institutions to discover what technologies are being used within courses[2], and typically the only technologies that are used by a majority of faculty members or in a majority of courses are the following:

  • AV presentation in the classroom;
  • PowerPoint usage in the classroom (obviously connected with the projectors);
  • Learning Management Systems (LMS);
  • Digital content at lower level than a full textbook (through open Internet, library, publishers, other faculty, or OER); and
  • File sharing applications.

Despite the billions of dollars invested over the past several years, the vast majority of ed tech is used in only a small percentage of courses at most campuses.[3] Most ed tech applications or devices have failed to cross the barriers into mainstream adoption within an institution. This could be due to the technology not really addressing problems that faculty or students face, a lack of awareness and support for the technology, or even faculty or student resistance to the innovation. Whatever the barrier, the situation we see far too often is a breakdown in technology helping the majority of faculty or courses.

Diffusion of Innovations – Back to the basics

Everett Rogers wrote the book on the spread of innovations within an organization or cultural group in his book Diffusions of Innovations. Rogers’ work led to many concepts that we seem to take for granted, such as the S-curve of adoption:


Source: The Diffusion of Innovations, 5th ed, p. 11

leading to the categorization of adopters (innovators, early adopters, early majority, late majority, laggards), and the combined technology adoption curve.


Source: The Diffusion of Innovations, 5th ed., p. 281

But Rogers did not set out to describe the diffusion of innovations as an automatic process following a pre-defined path. The real origin of his work was trying to understand why some innovations end up spreading throughout a social group while others do not, somewhat independent of whether the innovation could be thought of as a “good idea”. From the first paragraph of the 5th edition:

Getting a new idea adopted, even when it has obvious advantages, is difficult. Many innovations require a lengthy period of many years from the time when they become available to the time when they are widely adopted. Therefore, a common problem for many individuals and organizations is how to speed up the rate of diffusion of an innovation.

Rogers defined diffusion as “a special type of communication in which the messages are about a new idea” (p. 6), and he focused much of the book on the Innovation-Decision Process. This gets to the key point that availability of a new idea is not enough; rather, diffusion is more dependent on the communication and decision-process about whether and how to adopt the new idea. This process is shown below (p. 170):


Source: The Diffusion of Innovations, 5th ed., p. 170

What we are seeing in ed tech in most cases, I would argue, is that for institutions the new ideas (applications, products, services) are stuck in the Persuasion stage. There is knowledge and application amongst some early adopters in small-scale pilots, but the majority of faculty members either have no knowledge of the pilot or are not persuaded that the idea is to their advantage, and there is little support or structure to get the organization at large (i.e. the majority of faculty for a traditional institution, or perhaps the central academic technology organization) to make a considered decision. It’s important to note that in many cases, the innovation should not be spread to the majority, either due to being a poor solution or even due to organizational dynamics based on how the innovation is introduced.

The Purgatory of Pilots

This stuck process ends up as an ed tech purgatory – with promises and potential of the heaven of full institutional adoption with meaningful results to follow, but also with the peril of either never getting out of purgatory or outright rejection over time.

Ed tech vendors can be too susceptible to being persuaded by simple adoption numbers such as 1,100 institutions or total number of end users (millions served), but meaningful adoption within an institution – actually affecting the majority of faculty or courses – is necessary in most cases before there can be any meaningful results beyond anecdotes or marketing stories. The reason for the extended purgatory is most often related to people issues and communications, and the ed tech market (and here I’m including vendors as well as campus support staff and faculty) has been very ineffective in dealing with real people at real institutions beyond the initial pilot audience.

Update: Added a parenthetical in the last sentence to clarify that I’m not just talking about vendors as key players in diffusion.

  1. Disclosure: Echo360 was a recent client of MindWires
  2. For privacy reasons I cannot share the actual survey results publicly.
  3. I’m not arguing against faculty prerogative in technology adoption and for a centralized, mandatory approach, but noting the disconnect.

The post Pilots: Too many ed tech innovations stuck in purgatory appeared first on e-Literate.