
Feed aggregator

Oracle Cloud : First Impressions

Tim Hall - Fri, 2015-08-28 07:55

Followers of the blog will know I’ve been waiting to get access to the Oracle Cloud for a while. Well, I’ve finally got access to a bit of it. Specifically, the “Oracle Database Cloud Service” (DBaaS) part. :)

The Schema Service has been around for a few years and I had already tried that out, but IMHO it’s not really part of Oracle Cloud proper*, so I was reserving my judgement until I got the real stuff. :)

I’ve written a couple of articles already. They are just basic pieces documenting the setup, connections etc.

So here are some first impressions…

Oracle Cloud : Look and Feel

Overall the cloud offering looks clean and modern. Tastes vary of course, but I like the look of it.

The navigation is a bit inconsistent between the different cloud services. It feels like the console for each section (Compute, Java, DBaaS etc.) has been written by a different team, each doing what they think works, rather than working to a single design standard. Here’s a couple of examples:

  • In the “Oracle Database Cloud Service” section there is a “Consoles” button on the top-right of the screen that triggers a popup menu allowing you to switch to the Dashboard, Java Cloud and Compute Cloud console. In the “Oracle Compute Cloud” section, the “Consoles” button is not present. Instead there is a hamburger on the top-left of the screen that causes a navigation panel to slide out on the left of the screen, pushing the rest of the page contents to the right. On the top-level services page, the same hamburger produces a popup menu, kind-of like the “Consoles” button, but with the colouring of the navigation panel. I don’t find any method better or worse than the others. It would just be nice if they picked one and stuck with it, otherwise you are looking round the screen trying to decide how to make your next move. :)
  • Some consoles use tabs. Some use navigation tiles. Some use both.

Don’t get me wrong, it’s not hard to navigate. It’s just inconsistent, which kind-of ruins the overall effect. If they can bring it all into line I think it will be really cool.

I think Oracle Cloud looks neater than Amazon Web Services, but the navigation is not as consistent as AWS or Azure. Having used AWS, Azure and Oracle Cloud, I feel Azure has the neatest and most consistent interface. Like I said before, tastes vary. :)

Probably my biggest issue with the Oracle Cloud interface is the speed, or lack thereof. It’s really slow and unresponsive at times. On a few occasions I thought it had died, then after about 30 seconds the screen just popped back into life. Some of the actions give no feedback until they are complete, so you don’t know if you’ve pressed the button or not.

Oracle Cloud : Ease of Use

I found DBaaS pretty simple to use. I’ve already spent some time using AWS and Azure, so there is probably some carry-over there. I pretty much completed my first pass through creation, connections and patching before I even considered looking for documentation. :)

The documentation is OK, but contains very few screen shots, which leads me to believe the look and feel is all in a state of flux.

I think the general Oracle Compute Cloud Service network/firewall setup is really quite clear, but you can’t edit existing rules. Once a rule is created you can only enable, disable or delete it. I found myself having to delete and create rules a number of times when it would have felt more natural to edit an existing rule. I’ll mention a DBaaS issue related to this later.

DBaaS Specifically

Just some general observations about the DBaaS offering.

  • The “Oracle Database Cloud Service” DBaaS offering looks OK, but I noticed they don’t have multiplexed redo logs. I never run without multiplexed redo logs, regardless of the redundancy on the storage layer. Even if they were all shoved in the same directory, it would still be better than running without multiplexed files. This is a bit of mandatory configuration the user is left to do after the fact (see the sketch after this list).
  • The DBaaS virtual machine has Glassfish and ORDS installed on it, which is necessary because of the way they have organised the administration of the service, but it’s not something I would normally recommend. Databases and App Servers never go on the same box. Like I said, I understand why, but I don’t like it.
  • The management of the DBaaS offering feels fragmented. For some administration tasks you use the main cloud interface. For others you jump across to the DBaaS Monitor, which has a completely different look and feel. For others you jump across to [DBConsole – 11g | DB Express -12c]. For a DBaaS offering, I think this is a mistake. It should all be incorporated into the central console and feel seamless. I understand that may be a pain and a repetition of existing functionality, but it feels wrong without it.
  • I found the network/firewall setup done by the DBaaS service to be quite irritating. It creates a bunch of rules for each DBaaS service, which are all disabled by default (a good thing), but all the rules are “public”, which you would be pretty crazy to enable. Because you can’t edit them, they end up being pretty much useless. It really is one of those, “Do it properly or don’t bother!”, issues to me. If the DBaaS setup screens asked you to define a Security IP List, or pick an existing one, and decide which services you wanted to make available, it could build all these predefined rules properly in the first place. Alternatively, provide a DBaaS network setup wizard or just don’t bother. It feels so half-baked. :(
  • Dealing with the last two points collectively, the fragmentation of the management interface means some of the management functionality (DBaaS Monitor and [DBConsole – 11g | DB Express -12c]) is not available until you open the firewall for it. This kind-of highlights my point about the fragmentation. I’m logged into the DBaaS console where I can create and delete the whole service, but I can’t use some of the management features. It just feels wrong to me. It is totally down to the implementation choices. I would not have chosen this path.
  • Unlike the AWS RDS for Oracle, you get complete access to the OS and database. You even get sudo access to run root commands. At first I thought this was going to be a good thing and a nice differentiator compared to RDS, but having used the service I’m starting to think it is a bad move. The whole point of a DBaaS offering is it hides some of the nuts and bolts from you. I should not be worrying about the OS. I should not be worrying about the basic Oracle setup. Giving this level of access raises more questions/problems than it solves. I feel I should either do everything myself, or pick a DBaaS offering, accept the restrictions of it, and have it all done for me. The current offering feels like it has not decided what it wants to be yet.
  • When I patched the database through the service admin console it worked fine, but it took a “really” long time! I waited quite a while, went out to the gym and it was still going when I came back. Eventually I started an SSH session to try and find out what was happening. It turns out it took over 2 hours to “download” the PSU to the VM. Once the download was complete, the application of the patch was done quickly. Interesting.
  • The “Oracle Database Cloud Service – Virtual Image” option seems pretty pointless to me. On the website and console it says there is a software installation present, but this is not the case. Instead, there is a tarball containing the software (/scratch/db12102_bits.tar.gz). It also doesn’t come with the storage to do the actual installation on, or to hold the datafiles. To do the installation, you would need to “Up Scale” the service to add the storage, then do the installation manually. This process is actually more complicated than provisioning a compute node and doing everything yourself. I think Oracle need to ditch this option and just stick with DBaaS or Compute, like Amazon have done (RDS or EC2).
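As mentioned in the first point above, multiplexing the online redo logs is left to the user. A minimal sketch of what that post-creation step might look like, assuming hypothetical file paths and three existing log groups (none of this is taken from the DBaaS service itself):

-- Add a second member to each existing redo log group (paths are assumptions).
ALTER DATABASE ADD LOGFILE MEMBER '/u04/app/oracle/redo/redo01b.log' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/u04/app/oracle/redo/redo02b.log' TO GROUP 2;
ALTER DATABASE ADD LOGFILE MEMBER '/u04/app/oracle/redo/redo03b.log' TO GROUP 3;

-- Confirm each group now has two members.
SELECT group#, member FROM v$logfile ORDER BY group#;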
Conclusion

I like the Oracle Cloud more than I thought I would. I think it looks quite nice and if someone told me I had to use it as a general Infrastructure as a Service (IaaS) portal I would be fine with that.

I like the DBaaS offering less than I hoped I would. I feel quite bad about saying it, but it feels like a work in progress and not something I would want to use at this point. If it were my decision, I would be pushing the DBaaS offering more in the direction of AWS RDS for Oracle. As I said before, the current DBaaS offering feels like it has not decided what it wants to be yet. It needs to be much more hands-off, with a more consistent, centralized interface.

I don’t have full access to the straight Compute Cloud yet, so I can’t try provisioning a VM and doing everything myself. If I get access I will try it, but I would expect it to be the same as what I’ve done for EC2 and Azure. A VM is a VM… :)

When I read this back it sounds kind-of negative, but I think all the things I’ve mentioned could be “fixed” relatively quickly. Also, this is only one person’s opinion on one specific service. The haters need to try this for themselves before they hate. :)

Cheers

Tim…

* Just to clarify, I am not saying the Schema Service isn’t “Cloud” and I’m not saying it doesn’t work. I’m just saying I don’t see it as part of Oracle’s grand cloud database vision. It always seemed like a cynical push to market to allow them to say, “we have a cloud DB”. If it had been branded “APEX Service” I might have a different opinion. It is, after all, a paid-for version of apex.oracle.com. This is a very different proposition to promoting it as a “Cloud Database”.

Oracle Cloud : First Impressions was first posted on August 28, 2015 at 2:55 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Creating User Schema Table and Projections in Vertica

Pakistan's First Oracle Blog - Fri, 2015-08-28 01:25
Vertica is an exciting database with some really nifty features. Projections are a groundbreaking feature unique to Vertica that dramatically improves query performance and delivers space savings through compression.



The following test commands are from an impromptu session in which a user is created, a schema is created, and that user is authorized on the schema. Then a table is created with a default superprojection, a separate projection is created, and finally we look at its usage.

Create a new Vertica database user, create a schema and authorize that user on the schema. Create a 4-column table and insert data.

select user_name from v_catalog.users;

vtest=> create user mytest identified by 'user123';
CREATE USER
vtest=>

vtest=> \du
      List of users
 User name | Is Superuser
-----------+--------------
 dbadmin   | t
 mytest    | f
(2 rows)

vtest=> \dn
         List of schemas
     Name     |  Owner  | Comment
--------------+---------+---------
 v_internal   | dbadmin |
 v_catalog    | dbadmin |
 v_monitor    | dbadmin |
 public       | dbadmin |
 TxtIndex     | dbadmin |
 store        | dbadmin |
 online_sales | dbadmin |
(7 rows)


vtest=> \q
[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U mytest -w user123 -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit


vtest=> create table testtab (col1 integer,col2 integer, col3 varchar2(78), col4 varchar2(90));
ROLLBACK 4367:  Permission denied for schema public

[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U dbadmin -w vtest -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit

vtest=> \du
      List of users
 User name | Is Superuser
-----------+--------------
 dbadmin   | t
 mytest    | f
(2 rows)

vtest=> create schema mytest authorization mytest;
CREATE SCHEMA
vtest=> select current_user();
 current_user
--------------
 dbadmin
(1 row)

vtest=>

vtest=> \q
[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U mytest -w user123 -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit

vtest=> create table testtab (col1 integer,col2 integer, col3 varchar2(78), col4 varchar2(90));
CREATE TABLE
vtest=> select current_user();
 current_user
--------------
 mytest
(1 row)

vtest=>

vtest=> \dt
               List of tables
 Schema |  Name   | Kind  | Owner  | Comment
--------+---------+-------+--------+---------
 mytest | testtab | table | mytest |
(1 row)

vtest=> insert into testtab values (1,2,'test1','test2');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (2,2,'test2','test3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (3,2,'test2','test3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> commit;
COMMIT
vtest=>


Create a projection on 2 columns.

Superprojection exists already:

vtest=> select anchor_table_name,projection_name,is_super_projection from projections;
 anchor_table_name | projection_name | is_super_projection
-------------------+-----------------+---------------------
 testtab           | testtab_super   | t
(1 row)

vtest=>


vtest=> \d testtab
                                    List of Fields by Tables
 Schema |  Table  | Column |    Type     | Size | Default | Not Null | Primary Key | Foreign Key
--------+---------+--------+-------------+------+---------+----------+-------------+-------------
 mytest | testtab | col1   | int         |    8 |         | f        | f           |
 mytest | testtab | col2   | int         |    8 |         | f        | f           |
 mytest | testtab | col3   | varchar(78) |   78 |         | f        | f           |
 mytest | testtab | col4   | varchar(90) |   90 |         | f        | f           |
(4 rows)

vtest=>
vtest=> create projection ptest (col1,col2) as select col1,col2 from testtab;
WARNING 4468:  Projection is not available for query processing. Execute the select start_refresh() function to copy data into this projection.
          The projection must have a sufficient number of buddy projections and all nodes must be up before starting a refresh
CREATE PROJECTION
vtest=>


vtest=> select anchor_table_name,projection_name,is_super_projection from projections;
 anchor_table_name | projection_name | is_super_projection
-------------------+-----------------+---------------------
 testtab           | testtab_super   | t
 testtab           | ptest           | f
(2 rows)


vtest=> select * from ptest;
ERROR 3586:  Insufficient projections to answer query
DETAIL:  No projections eligible to answer query
HINT:  Projection ptest not used in the plan because the projection is not up to date.
vtest=>

vtest=> select start_refresh();
             start_refresh
----------------------------------------
 Starting refresh background process.

(1 row)

vtest=> select * from ptest;
 col1 | col2
------+------
    1 |    2
    2 |    2
    3 |    2
    4 |    2
    4 |    2
    4 |    2
    4 |    2
(7 rows)

vtest=>


 projection_basename | USED/UNUSED |           last_used
---------------------+-------------+-------------------------------
 testtab             | UNUSED      | 1970-01-01 00:00:00-05
 ptest               | USED        | 2015-08-28 07:14:49.877814-04
(2 rows)
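
The query that produced the USED/UNUSED listing above is not shown in the transcript. A hedged sketch of one way to approximate it, joining the projection catalog to v_monitor.projection_usage (the join and column choices are my assumption, not taken from the original session):

SELECT p.projection_basename,
       CASE WHEN MAX(pu.query_start_timestamp) IS NULL
            THEN 'UNUSED' ELSE 'USED' END AS "USED/UNUSED",
       COALESCE(MAX(pu.query_start_timestamp),
                '1970-01-01 00:00:00'::TIMESTAMPTZ) AS last_used
FROM v_catalog.projections p
LEFT JOIN v_monitor.projection_usage pu
       ON pu.projection_id = p.projection_id
GROUP BY p.projection_basename;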

vtest=> select * from testtab;
 col1 | col2 | col3  |  col4
------+------+-------+--------
    1 |    2 | test1 | test2
    3 |    2 | test2 | test3
    2 |    2 | test2 | test3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
(7 rows)

projection_basename | USED/UNUSED |           last_used
---------------------+-------------+-------------------------------
 ptest               | USED        | 2015-08-28 07:14:49.877814-04
 testtab             | USED        | 2015-08-28 07:16:10.155434-04
(2 rows)
Categories: DBA Blogs

List of acquisitions by Microsoft a data journey

Nilesh Jethwa - Thu, 2015-08-27 21:21

If we look into the SEC data for Microsoft and other tech companies, Microsoft spends the most on Research and Development [by dollar amount].


Read more at: http://www.infocaptor.com/dashboard/list-of-acquisitions-by-microsoft-a-data-journey

Integrating Telstra Public SMS API into Bluemix

Pas Apicella - Thu, 2015-08-27 20:16
In the post below I will show how I integrated the Telstra public SMS API into my Bluemix catalog to be consumed as a service. This was all done from public Bluemix using the Cloud Integration Service.

Step 1 - Create a T.DEV account

In order to get started you need to create an account on http://dev.telstra.com in order to be granted access to the SMS API. Once access is granted you need to create an application which enables you to add/manage Telstra API keys as shown below.

1.1 Create an account at http://dev.telstra.com

1.2. Once done you should have something as follows, which can take up to 24 hours to get approved, as shown by the "approved" icon


Step 2 - Test the SMS Telstra API

At this point we want to test the Telstra SMS API using a script; this ensures it's working before we proceed to integrating it into Bluemix.

2.1. Create a script called setup.sh as follows

#Obtain these keys from the Telstra Developer Portal
APP_KEY="yyyy-key"
APP_SECRET="yyyy-secret"

curl "https://api.telstra.com/v1/oauth/token?client_id=$APP_KEY&client_secret=$APP_SECRET&grant_type=client_credentials&scope=SMS"

2.2. Edit the script above to use your APP_KEY and APP_SECRET values from the Telstra Developer Portal

2.3. Run as shown below


pas@Pass-MBP:~/ibm/customers/telstra/telstra-apis/test$ ./setup.sh
{ "access_token": "xadMkPqSAE0VG6pSGEi6rHA5vqYi", "expires_in": "3599" }

2.4. Make a note of the token key returned; you will need this to send an SMS message
2.5. Create a script called "sendsms.sh" as shown below.


# * Recipient number should be in the format of "04xxxxxxxx" where x is a digit
# * Authorization header value should be in the format of "Bearer xxx" where xxx is access token returned
# from a previous GET https://api.telstra.com/v1/oauth/token request.
RECIPIENT_NUMBER=0411151350
TOKEN=token-key

curl -H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d "{\"to\":\"$RECIPIENT_NUMBER\", \"body\":\"Hello, pas sent this message from telstra SMS api!\"}" \
"https://api.telstra.com/v1/sms/messages"

2.6. Replace the token key with what was returned at step 2.4 above
2.7. Replace the RECIPIENT_NUMBER with your own mobile number to test the SMS API.
2.8. Run as shown below.


pas@Pass-MBP:~/ibm/customers/telstra/telstra-apis/test$ ./sendsms.sh
{"messageId":"1370CAB677B59C226705337B95945CD6"}
Step 3 - Creating a REST based service to Call Telstra SMS API

At this point we can now integrate the Telstra SMS API into Bluemix. To do that I created a simple Spring Boot application which exposes a RESTful method to call the Telstra SMS API using Spring's RestTemplate class. I did this because there are two calls you need to make to use the Telstra SMS API: a REST call to get an ACCESS_TOKEN, followed by a call to actually send an SMS message. Creating a Spring Boot application to achieve this allows me to wrap that into one single call, making it easy to consume and add to the Bluemix catalog as a service.

More information on the Cloud Integration service can be found here. Cloud Integration allows us to expose RESTful methods from Bluemix applications onto the catalog via one simple screen. We could also use the Bluemix API Management service.

https://www.ng.bluemix.net/docs/services/CloudIntegration/index.html

Below shows the application being pushed to Bluemix, which will then be used to add the Telstra SMS API service to the Bluemix catalog.



pas@192-168-1-4:~/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSAPIDemo$ cf push
Using manifest file /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSAPIDemo/manifest.yml

Creating app pas-telstrasmsapi in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Using route pas-telstrasmsapi.mybluemix.net
Binding pas-telstrasmsapi.mybluemix.net to pas-telstrasmsapi...
OK

Uploading pas-telstrasmsapi...
Uploading app files from: /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSAPIDemo/target/TelstraSMSAPI-1.0-SNAPSHOT.jar
Uploading 752.3K, 98 files
Done uploading
OK

Starting app pas-telstrasmsapi in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
-----> Downloaded app package (15M)
-----> Liberty Buildpack Version: v1.19.1-20150622-1509
-----> Retrieving IBM 1.8.0_20150617 JRE (ibm-java-jre-8.0-1.0-pxa6480sr1ifx-20150617_03-cloud.tgz) ... (0.0s)
         Expanding JRE to .java ... (1.4s)
-----> Retrieving App Management 1.5.0_20150608-1243 (app-mgmt_v1.5-20150608-1243.zip) ... (0.0s)
         Expanding App Management to .app-management (0.9s)
-----> Downloading Auto Reconfiguration 1.7.0_RELEASE from https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.7.0_RELEASE.jar (0.1s)
-----> Liberty buildpack is done creating the droplet

-----> Uploading droplet (90M)

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-telstrasmsapi was started using this command `$PWD/.java/jre/bin/java -Xtune:virtualized -Xmx384M -Xdump:none -Xdump:heap:defaults:file=./../dumps/heapdump.%Y%m%d.%H%M%S.%pid.%seq.phd -Xdump:java:defaults:file=./../dumps/javacore.%Y%m%d.%H%M%S.%pid.%seq.txt -Xdump:snap:defaults:file=./../dumps/Snap.%Y%m%d.%H%M%S.%pid.%seq.trc -Xdump:heap+java+snap:events=user -Xdump:tool:events=systhrow,filter=java/lang/OutOfMemoryError,request=serial+exclusive,exec=./.buildpack-diagnostics/killjava.sh $JVM_ARGS org.springframework.boot.loader.JarLauncher --server.port=$PORT`

Showing health and status for app pas-telstrasmsapi in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-telstrasmsapi.mybluemix.net
last uploaded: Fri Jul 17 11:26:58 UTC 2015

     state     since                    cpu    memory           disk           details
#0   running   2015-07-17 09:28:28 PM   1.0%   150.6M of 512M   148.9M of 1G
Step 4 - Add the RESTful method to IBM Bluemix catalog to invoke the Telstra SMS API

4.1 To expose our RESTful method we simply define the end point using the Cloud Integration service as shown below.


The image showing the Cloud Integration service with the Telstra API exposed and available to be consumed as a Service on Bluemix.





This was created using the Bluemix Dashboard but can also be done using the Cloud Foundry command line "cf create-service ..."

Step 5 - Create an Application client which will invoke the Telstra SMS service

At this point we are going to push a client application onto Bluemix which consumes the Telstra SMS API service and then uses it within the application. We do this to verify the service works, by creating a simple HTML-based application which invokes the service and which has a manifest.yml file indicating it wants to consume the service now exposed in the Bluemix catalog as per above.

5.1. The manifest.yml consumes the service created from the API in the catalog


applications:
- name: pas-telstrasmsapi-client
  memory: 512M
  instances: 1
  host: pas-telstrasmsapi-client
  domain: mybluemix.net
  path: ./target/TelstraSMSApiClient-0.0.1-SNAPSHOT.jar
  env:
   JBP_CONFIG_IBMJDK: "version: 1.8.+"
  services:
    - TelstraSMS-service
5.2. Push the application as shown below.


pas@Pass-MacBook-Pro:~/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSApiClient$ cf push
Using manifest file /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSApiClient/manifest.yml

Updating app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Using route pas-telstrasmsapi-client.mybluemix.net
Uploading pas-telstrasmsapi-client...
Uploading app files from: /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSApiClient/target/TelstraSMSApiClient-0.0.1-SNAPSHOT.jar
Uploading 806.8K, 121 files
Done uploading
OK
Binding service TelstraSMS-service to app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Stopping app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Starting app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
-----> Downloaded app package (16M)
-----> Downloaded app buildpack cache (1.2M)
-----> Liberty Buildpack Version: v1.19.1-20150622-1509
-----> Retrieving IBM 1.8.0_20150617 JRE (ibm-java-jre-8.0-1.0-pxa6480sr1ifx-20150617_03-cloud.tgz) ... (0.0s)
         Expanding JRE to .java ... (1.5s)
-----> Retrieving App Management 1.5.0_20150608-1243 (app-mgmt_v1.5-20150608-1243.zip) ... (0.0s)
         Expanding App Management to .app-management (0.9s)
-----> Downloading Auto Reconfiguration 1.7.0_RELEASE from https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.7.0_RELEASE.jar (0.0s)
-----> Liberty buildpack is done creating the droplet

-----> Uploading droplet (90M)

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-telstrasmsapi-client was started using this command `$PWD/.java/jre/bin/java -Xtune:virtualized -Xmx384M -Xdump:none -Xdump:heap:defaults:file=./../dumps/heapdump.%Y%m%d.%H%M%S.%pid.%seq.phd -Xdump:java:defaults:file=./../dumps/javacore.%Y%m%d.%H%M%S.%pid.%seq.txt -Xdump:snap:defaults:file=./../dumps/Snap.%Y%m%d.%H%M%S.%pid.%seq.trc -Xdump:heap+java+snap:events=user -Xdump:tool:events=systhrow,filter=java/lang/OutOfMemoryError,request=serial+exclusive,exec=./.buildpack-diagnostics/killjava.sh $JVM_ARGS org.springframework.boot.loader.JarLauncher --server.port=$PORT`

Showing health and status for app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-telstrasmsapi-client.mybluemix.net
last uploaded: Sun Jul 19 15:22:26 UTC 2015

     state     since                    cpu    memory           disk           details
#0   running   2015-07-19 11:23:57 PM   0.8%   144.9M of 512M   149.8M of 1G

Step 6 - Send SMS using the Telstra SMS API from a Bluemix Application using the Service

6.1. Navigate to the URL below and send an SMS using the form below.

http://pas-telstrasmsapi-client.mybluemix.net/


6.2. Verify it has sent a message to the phone number entered in the text field as shown below.



More Information 

Getting started with Bluemix is easy, navigate to http://bluemix.net to sign up and get going.
Categories: Fusion Middleware

Oracle Priority Support Infogram for 27-AUG-2015

Oracle Infogram - Thu, 2015-08-27 14:57

RDBMS
Understanding new In-Memory notes in an execution plan, from Oracle Database In-Memory
Migration IBM AIX ==> SPARC Solaris with Data Guard, from Upgrade your Database - NOW!
Tips on SQL Plan Management and Oracle Database In-Memory - Part 2, from the Oracle Optimizer blog.
Coding Oracle
Yet another CSV -> Table but with pipeline function, from Kris’ Blog.
WebLogic
Extending the Weblogic Console by adding Books, Pages and Portlets, from WebLogic Partner Community EMEA.
Java
9 tools to help you with Java Performance Tuning, from WebLogic Partner Community EMEA.
Asynchronous Support in JAX-RS 2/Java EE 7, from The Aquarium.
And from the same source:
Java API for JSON Binding (JSON-B) 1.0 Early Draft Now Available!
BI Publisher
Page Borders and Title Underlines, from the Oracle BI Publisher blog.
BPM
Getting Started with BPM: Free Oracle University Video Tutorial, from the SOA & BPM Partner Community Blog.
Mobile Computing
New Features : Oracle Mobile Security Suite Integration in Oracle MAF 2.1.3, from The Oracle Mobile Platform Blog.
Ops Center
How Many Systems Can Ops Center Manage?, from the Ops Center blog.
Demantra
How to avoid ORA-06512 and ORA-20000 when Concurrent Statistics Gathering is enabled. New in 12.1 Database, Concurrent Statistics Gathering, Simultaneous for Multiple Tables or Partitions, from the Oracle Demantra blog.
EBS
From the Oracle E-Business Suite Support blog:
New White Paper on Using Web ADI Spread Sheet in the Buyer Work Center Orders (BWC)
From the Oracle E-Business Suite Technology blog:
JRE 1.8.0_60 Certified with Oracle E-Business Suite

Using Application Management Suite With E-Business Suite 12.2

Adaptive Query Optimization in Oracle 12c : Ongoing Updates

Tim Hall - Thu, 2015-08-27 12:09

I’ve said a number of times, the process of writing articles is part of an ongoing learning experience for me. A few days ago my personal tech editor (Jonathan Lewis) asked about a statement I made in the SQL Plan Directive article. On further investigation it turned out the sentence was a complete work of fiction on my part, based on my misunderstanding of something I read in the manual, as well as the assumption that everything that happens must be as a result of a new feature. :)

Anyway, the offending statement has been altered, but the conversation it generated resulted in a new article about Automatic Column Group Detection.
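
For anyone who has not come across column groups before, here is a minimal sketch (mine, not taken from the article) of how column group statistics can be seeded, reported and created with DBMS_STATS; the SCOTT.EMP table is just a placeholder:

-- Record column usage for the next 300 seconds of workload.
EXEC DBMS_STATS.SEED_COL_USAGE(NULL, NULL, 300);

-- Report the captured column usage, create the suggested column group
-- extensions, then regather statistics so they are populated.
SELECT DBMS_STATS.REPORT_COL_USAGE('SCOTT', 'EMP') FROM dual;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SCOTT', 'EMP') FROM dual;
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'EMP');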

The process also highlighted how difficult it is, at least for me, to know what is going on in the optimizer now. It wasn’t always straightforward before, but now, with the assorted new optimizations, some beating others to the punch, it is even more difficult. There are a number of timing issues involved too. If a statement runs twice in quick succession, you might get a different sequence of events compared to having a longer gap between the first and second run of the statement. It’s maddening at times. I’m hoping Jonathan will put pen to paper about this, because I think he will explain the issues around the inter-dependencies better than I can.

Anyway, I will be doing another pass through this stuff over the coming days/weeks/months/years to make sure it is consistent with “my current understanding”. :)

Fun, fun, fun…

Cheers

Tim…

Adaptive Query Optimization in Oracle 12c : Ongoing Updates was first posted on August 27, 2015 at 7:09 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Summer Projects and a Celebration

Oracle AppsLab - Thu, 2015-08-27 09:02

If you follow us on Twitter (@theappslab) or on Facebook, you’ve seen some of the Summer projects coming together.

If not, here’s a recap of some of the tinkering going on in OAUX Emerging Technologies land.

Mark (@mvilrokx) caught the IoT bug from Noel (@noelportugal), and he’s been busy destroying and rebuilding a Nerf gun, which is a thing. Search for “nerf gun mods” if you don’t believe me.


Mark installed our favorite chip, the ESP8266, to connect the gun to the internets, and he’s been tinkering from there.

Meanwhile, Raymond (@yuhuaxie) has been busy building a smart thermostat.


And finally, completely unrelated to IoT tinkering, earlier this month the Oracle Mexico Development Center (MDC) in Guadalajara celebrated its fifth anniversary. As you know, we have two dudes in that office, Os (@vaini11a) and Luis (@lsgaleana), as well as an extended OAUX family. Congratulations.


Red Samurai ADF Performance Audit Tool v 4.0 - Web Client Request Monitoring and Complete Query Analysis

Andrejus Baranovski - Thu, 2015-08-27 03:12
I'm excited to announce that we have released a new version of our RSA audit tool. This is a major update after the previous version released in February 2015 - Red Samurai ADF Performance Audit Tool v 3.4 - ADF Task Flow Statistics with Oracle DMS Servlet Integration.

It has already been 3 years since the initial version - Red Samurai Performance Audit Tool - Runtime Diagnosis for ADF Applications. We are using it for many of our customers to monitor ADF performance in both test and production environments. Many new features have been added during these years, with more to come.

RSA Audit v4.0 New Features

1. RSA Audit v4.0 dashboard is supporting ADF 12c and Alta UI look


2. Web Client Request Time monitoring. Supported with ADF 11g and 12c. A generic method tracks request time for all ADF UI components. Logged data can be analysed through the ADF UI dashboard or directly in the DB. Request time represents the complete time from the user action in the browser until the request is completed. This includes the real user experience - browser processing time, network time, server-side time and ADF BC/DB processing times. Runs in VERBOSE logging mode


3. Detailed information about the ADF fragment, button or other ADF UI component involved in the request is logged together with the request processing time and is accessible from the audit dashboard. This helps to identify slow actions spanning from the Web Client to the DB


4. Information about each request is grouped; this allows you to compare differences between multiple requests and identify bottlenecks in application performance


5. Duplicate Queries. Allows you to track all executed VO’s - very helpful for identifying redundant VO executions. Groups VO executions per ECID, which helps to identify VO’s re-executed multiple times during the same request. Runs in MEDIUM logging mode


6. VO’s executed from the same ECID are automatically highlighted - this simplifies redundant queries analysis


7. Number of duplicate executions of VO’s per ECID is calculated and presented in the table and sunburst chart


8. We calculate the top VO’s per AM. This helps to set priorities for SQL tuning and understand heavily used VO’s


9. Sunburst chart displays visual representation of duplicate and top VO’s per AM

Page Borders and Title Underlines

Tim Dexter - Wed, 2015-08-26 15:32

I have taken to recording screen grabs to help some folks out on 'how do I' scenarios. Sometimes a 3 minute video saves a couple of thousand words and several screen shots.

So, per chance you need to know:

1. How to add a page border to your output and/or

2. How to add an under line that runs across the page

Watch this!   https://www.youtube.com/watch?v=3UcXHeSF0BM

If you need the template, sample data and output, get them here.

I'm taking requests if you have them.

Categories: BI & Warehousing

The Fraught Interaction Design of Personalized Learning Products

Michael Feldstein - Wed, 2015-08-26 12:49

By Michael Feldstein

David Wiley has a really interesting post up about Lumen Learning’s new personalized learning platform. Here’s an excerpt:

A typical high-level approach to personalization might include:

  • building up an internal model of what a student knows and can do,
  • algorithmically interrogating that model, and
  • providing the learner with a unique set of learning experiences based on the system’s analysis of the student model

Our thinking about personalization started here. But as we spoke to faculty and students, and pondered what we heard from them and what we have read in the literature, we began to see several problems with this approach. One in particular stood out:

There is no active role for the learner in this “personalized” experience. These systems reduce all the richness and complexity of deciding what a learner should be doing to – sometimes literally – a “Next” button. As these systems painstakingly work to learn how each student learns, the individual students lose out on the opportunity to learn this for themselves. Continued use of a system like this seems likely to create dependency in learners, as they stop stretching their metacognitive muscles and defer all decisions about what, when, and how long to study to The Machine.

Instructure’s Jared Stein really likes Lumen’s approach, writing,

So much work in predictive analytics and adaptive learning seeks to relieve people from the time-consuming work of individual diagnosis and remediation — that’s a two-edged sword: Using technology to increase efficiency can too easily sacrifice humanness — if you’re not deliberate in the design and usage of the technology. This topic came up quickly amongst the #DigPedNetwork group when Jim Groom and I chatted about closed/open learning environments earlier this month, suggesting that we haven’t fully explored this dilemma as educators or educational technologist.

I would add that I have seen very little evidence that either instructors or students place a high value on the adaptivity of these products. Phil and I have talked to a wide range of folks using these products, both in our work on the e-Literate TV case studies and in our general work as analysts. There is a lot of interest in the kind of meta-cognitive dashboarding that David is describing. There is little interest in, and in some cases active hostility toward, adaptivity. For example, Essex County College is using McGraw Hill’s ALEKS, which has one of the more sophisticated adaptive learning approaches on the market. But when we talked to faculty and staff there, the aspects of the program that they highlighted as most useful were a lot more mundane, e.g.,

It’s important for students to spend the time, right? I mean learning takes time, and it’s hard work. Asking students to keep time diaries is a very difficult ask, but when they’re working in an online platform, the platform keeps track of their time. So, on the first class day of the week, that’s goal-setting day. How many hours are you going to spend working on your math? How many topics are you planning to master? How many classes are you not going to be absent from?

I mean these are pretty simple goals, and then we give them a couple goals that they can just write whatever they feel like. And I’ve had students write, “I want to come to class with more energy,” and other such goals. And then, because we’ve got technology as our content delivery system, at the end of the week I can tell them, in a very efficient fashion that doesn’t take up a lot of my time, “You met your time goal, you met your topic goal,” or, “You approached it,” or, “You didn’t.”

So one of the most valuable functions of this system in this context is to reflect back to the students what they have done in terms that make sense to them and are relevant to the students’ self-selected learning goals. The measures are fairly crude—time on task, number of topics covered, and so on—and there is no adaptivity necessary at all.

But I also think that David’s post hints at some of the complexity of the design challenges with these products.

You can think of the family of personalized learning products as having potentially two components: diagnostic and prescriptive. Everybody who likes personalized learning products in any form likes the diagnostic component. The foundational value proposition for personalization, (which should not in any way be confused with “personal”), is having the system provide feedback to students and teachers about what the student does well and where the student is struggling. Furthermore, the perceived value of the product is directly related to the confidence that students and teachers have that the product is rendering an accurate diagnosis. That’s why I think products that provide black box diagnoses are doomed to market failure in the long term. As the market matures, students and teachers are going to want to know not only what the diagnosis is but also what the basis of the diagnosis is, so that they can judge for themselves whether they think the machine is correct.

Once the system has diagnosed the student’s knowledge or skill gaps—and it is worth calling out that many of these personalized learning systems work on a deficit model, where the goal is to get students to fill in gaps—the next step is to prescribe actions that will help students to address those gaps. Here again we get into the issue of transparency. As David points out, some vendors hide the rationale for their prescriptions, even going so far as to remove user choice and just hide the adaptivity behind the “next” button. Note that the problem isn’t so much with providing a prescription as it is with the way in which it is provided. The other end of the spectrum, as David argues, is to make recommendations. The full set of statements from a well-behaved personalized learning product to a student or teacher might be something like the following:

  1. This is where I think you have skill or knowledge gaps.
  2. This is the evidence and reasoning for my diagnosis.
  3. This is my suggestion for what you might want to do next.
  4. This is my reasoning for why I think it might help you.

It sounds verbose, but it can be done in fairly compact ways. Netflix’s “based on your liking Movie X and Movie Y, we think you would give Movie Z 3.5 stars” is one example of a compact explanation that provides at least some of this information. There are lots of ways that a thoughtful user interface designer can think about progressively revealing some of this information and providing “nudges” that encourage students on certain paths while still giving them the knowledge and freedom they need to make choices for themselves. The degree to which the system should be heavy-handed in its prescription probably depends in part on the pedagogical model. I can see something closer to “here, do this next” feeling appropriate in a self-paced CBE course than in a typical instructor-facilitated course. But even there, I think the Lumen folks are 100% right that the first responsibility of the adaptive learning system should be to help the learner understand what the system is suggesting and why so that the learner can gain better meta-cognitive understanding.

None of which is to say that the fancy adaptive learning algorithms themselves are useless. To the contrary. In an ideal world, the system will be looking at a wide range of evidence to provide more sophisticated evidence-based suggestions to the students. But the key word here is “suggestions.” Both because a critical part of any education is teaching students to be more self-aware of their learning processes and because faulty prescriptions in an educational setting can have serious consequences, personalized learning products need to evolve out of the black box phase as quickly as possible.

 

 

The post The Fraught Interaction Design of Personalized Learning Products appeared first on e-Literate.

Inside View Of Blackboard’s Moodle Strategy In Latin America

Michael Feldstein - Wed, 2015-08-26 11:45

By Phil Hill

One year ago Blackboard’s strategy for Moodle was floundering. After the 2012 acquisition of Moodlerooms and Netspot, Blackboard had kept its promises of supporting the open source community – and in fact, Blackboard pays much more than 50% of the total revenue going to Moodle HQ[1] – but that does not mean they had a strategy. Key Moodlerooms employees were leaving, and the management was frustrated. Last fall the remaining Moodlerooms management put together an emerging strategy to invest in (through corporate M&A) and grow the Moodle business, mostly outside of the US.

In just the past twelve months, Blackboard has acquired three Moodle-based companies – Remote-Learner UK (Moodle Partner in the UK), X-Ray Analytics (learning analytics for Moodle), and Nivel Siete (Moodle Partner in Colombia). When you add organic growth to these acquisitions, Blackboard has added ~450 new clients using Moodle in this same time period, reaching a current total of ~1400.

This is a change worth exploring. To paraphrase Michael’s statements to me and in his recent BbWorld coverage:

If you want to understand Blackboard and their future, you have to understand what they’re doing internationally. If you want to understand what they’re doing internationally, you have to understand what they’re doing with Moodle.

Based on this perspective, I accepted an invitation from Blackboard to come visit Nivel Siete last week to get a first-hand view of what this acquisition means. I also attended the MoodleMoot Colombia #mootco15 conference and talked directly to Moodle customers in Latin America. Let’s first unpack that last phrase.

  • Note that due to the nature of this trip, I “talked directly” with Blackboard employees, Nivel Siete employees, Blackboard resellers, and Nivel Siete customers. They did give me free access to talk privately with whoever I wanted to, but treat this post as somewhat of an inside view rather than one that also includes perspectives from competitors.
  • “Moodle” is very significant in Latin America. It is the default LMS that dominates learning environments. The competition, or alternative solution, there is Blackboard Learn or . . . another route to get Moodle. In this market D2L and Canvas have virtually no presence – each company has just a couple of clients in Latin America and is not currently a factor in LMS decision-making. Schoology has one very large customer in Uruguay serving hundreds of thousands of students. Blackboard Learn serves the top of the market – e.g. the top 10% in terms of revenue of Colombian institutions, where it already serves the majority of that sub-market according to the people I talked to. For the remaining 90%, it is pretty much Moodle, Moodle, alternate applications that are not LMSs, or nothing.[2]
  • I chose “customers” instead of “schools” or “institutions” for a reason. What is not understood in much of the education community is that Moodle has a large footprint outside of higher ed and K-12 markets. Approximately 2/3 of Nivel Siete’s clients are in corporate learning, and several others are government. And this situation is quite common for Moodle. In the US, more than 1/3 of Moodlerooms’ and approximately 1/2 of Remote-Learner’s customers are corporate learning. Phill Miller, the VP of International for Moodlerooms, said that for most of the Moodle hosting and service providers he has met, they also are serving corporate clients at similar numbers as education.
  • I chose “Latin America” instead of “Colombia” for a reason. While all but ~12 of Nivel Siete’s existing clients are in Colombia, Blackboard bought the company to act as a center of excellence or support service company for most of Latin America – Colombia, Mexico, Brazil, and Peru in particular. Cognos Online, their current local reseller for Latin America for core Blackboard products (Learn, Collaborate, etc) will become the reseller also for their Moodle customers. Nivel Siete will support a broader set of clients. In other words, this is not a simple acquisition of customers – it is an expansion of international presence.

And while we’re at it, the conference reception included a great opera mini flash mob (make sure to watch past 0:37):

Nivel Siete

Nivel Siete (meaning Level 7, a reference from two of the founders’ college days when a professor talked about the need to understand deeper levels of the technology stack than just the top-level applications that customers see) is a company of just over 20 employees in Bogota. They have 237+ clients, and that number is growing. During the three days I was there they signed several new contracts. They offer Moodle hosting and service in a cloud environment based on Amazon Web Services (AWS) – not true SaaS, as they allow multiple software versions in production and have not automated all provisioning or upgrade processes. What they primarily offer, according to the founders, is a culture of how to service and support using cloud services, along with specific marketing and sales techniques.

In Latin America, most customers care more about the local sales and support company than they do about the core software. As one person put it, they believe in skin-to-skin sales, where clients have relationships they trust as long as solutions are provided. Most LMS customers in Latin America do not care as much about the components of that solution as they do about relationships, service, and price. And yet, due to open source software and lightweight infrastructure needs, Moodle is dominant as noted above. The Moodle brand, code base, and code licensing does not matter as much as the Moodle culture and ecosystem. From a commercial standpoint, Nivel Siete’s competitors include a myriad of non Moodle Partner hosting providers – telcos bundling in hosting, mom-and-pop providers, self-hosting – or non-consumption. For a subset of the market, Nivel Siete has competed with Blackboard Learn.

Beyond Cognos Online, Blackboard has another ~9 resellers in Latin America, and Nivel Siete (or whatever they decide to name the new unit) will support all of these resellers. This is actually the biggest motivation other than cash for the company to sell – they were seeking methods to extend their influence, and this opportunity made the most sense.

Blackboard Learn and Ultra

What about that Learn sub-market? Most clients and sales people (resellers as well as Blackboard channel manager) are aware of Learn Ultra, but the market seems to understand already that Ultra is not for them . . . yet. They appear to be taking a ‘talk to me when it’s done and done in Spanish’ approach and not basing current decisions on Ultra. In this sense, the timing for Ultra does not matter all that much, as the market is not waiting on it. Once Ultra is ready for Latin America, Blackboard sales (channel manager and resellers) expect the switchover to be quicker than in the US, as LMS major upgrades (involving major UI and UX changes) or adoptions tend to take weeks or months instead of a year or more as we often see in the states. At least in the near term, Learn Ultra is not a big factor in this market.

What Blackboard is best known for in this market is the large SENA contract running on Learn. SENA (National Service for Learning) is a government organization that runs the majority of all vocational colleges – providing certificates and 2-year vocational degrees mostly for lower-income students, a real rising middle class move that is important in developing countries. Blackboard describes SENA as having 6+ million total enrollment, with ~80% in classrooms and ~20% in distance learning.

Integration

The challenge Blackboard faces is integrating its Learn and Moodle operations through the same groups – Nivel Siete internal group, Cognos Online and other resellers serving both lines – without muddling the message and go-to-market approach. Currently Learn is marketed and sold through traditional enterprise sales methods – multiple meetings, sales calls, large bids – while Nivel Siete’s offering of Moodle is marketed and sold with more of a subscription-based mentality. As described by ForceManagement:

A customer who has moved to a subscription-based model of consumption has completely different expectations about how companies are going interact with them.

How you market to them, how you sell to them, how you bill them, how you nurture the relationship – it’s all affected by the Subscription Economy. The customer’s idea of value has changed. And, if the customer’s idea of value has changed, your value proposition should be aligned accordingly. [snip]

The subscription-based sales process relies less on the closing of a sale and more on the nurturing of a long-term relationship to create lifetime customer value.

One of Nivel Siete’s most effective techniques is their e-Learner Magazine, which highlights customers telling their own stories and lessons in a quasi-independent fashion. The company has relied on inbound calls and quick signups and service startups. There is quite a cultural difference between enterprise software and subscription-based approaches. While Blackboard itself is facing such changes due to Ultra and newly-offered SaaS models, the group in Latin America is facing the challenge of two different cultures served by the same organizations today.

To help address this challenge, Cognos Online is planning to have two separate teams selling / servicing mainline Blackboard products and Moodle products. But even then, CEO Fernery Morales described that their biggest risk is muddling the message and integrating appropriately.

Moodle Strategy and Risk

At the same time, this strategy and growth comes at a time where the Moodle community at large appears to be at an inflection point. This inflection point I see comes from a variety of triggers:

  • Blackboard acquisitions causing Moodle HQ, other Moodle Partners, and some subset of users’ concerns about commercialization;
  • Creation of the Moodle Association as well as Moodle Cloud services as alternate paths to Moodle Partners for revenue and setup; and
  • Remote-Learner leaving the Moodle Partner program and planning to join the Moodle Association, with the associated lost revenue and public questioning of the program’s value.

I don’t have time to fully describe these changes here, but Moodle itself is both an opportunity and a risk mostly based on its own success globally. More of that in a future post.

What Does This Mean Beyond Latin America?

It’s too early to fully know, but here are a few notes.

  • Despite the positioning in the US media, there is no “international” market. There are multiple local or regional markets outside of the US that have tremendous growth opportunities for US and other companies outside of those immediate markets. Addressing these markets puts a high premium on localization – having feet on the ground, with people who know the culture and can be trusted in the region, and including product customizations meant for those markets. Much of the ed tech investment boom is built on expectations of international growth, but how many ed tech companies actually know how to address local or regional non-US markets? This focus on localizing international markets is one of Blackboard’s greatest strengths.
  • Based on the above, at least in Latin America Blackboard is building itself up as being the status quo before other learning platforms really get a chance to strategically enter the market. For example, Instructure has clearly not chosen to go after non English-speaking international markets yet, but by the time they do push Canvas into Latin America, and if Blackboard is successful integrating Nivel Siete, for example, it is likely Instructure will face an entrenched competitor and potential clients who by default assume Moodle or Learn as solutions.
  • Blackboard as a company has one big growth opportunity right now – the collection of non-US “international” markets that represent just under 1/4 of the company’s revenue. Domestic higher ed is not growing, K-12 is actually decreasing, but international is growing. These growing markets need Moodle and traditional Learn 9.1 much more than Ultra. I suspect that this growing importance is creating more and more tension internal to Blackboard, as the company needs to balance Ultra with traditional Learn and Moodle development.
  • While I strongly believe in the mission of US community colleges and low-cost 4-year institutions, in Latin America the importance of education in building up an emerging middle class is much greater than in US. We hear this “importance of education” and “building of middle class” used in generic terms regarding ed tech potential, but seeing this connection more closely by being in country is inspiring. This is a real global need that can and should drive future investment in people and technology to address.
  1. This information is based on a tweet last spring showing that Moodlerooms + Netspot combined were more than 50% of revenue, and that the next largest Moodle Partner, Remote-Learner, has left the program. Since last year I have confirmed this information through multiple sources.
  2. Again, much of this information is from people related to Blackboard, but it also matches my investigation of press releases and public statements about specific customers of D2L and Instructure.

The post Inside View Of Blackboard’s Moodle Strategy In Latin America appeared first on e-Literate.

DELETE is faster than TRUNCATE

Laurent Schneider - Wed, 2015-08-26 07:18

Truncate is useful in some serial batch processing, but it breaks read-write consistency, generates strange errors and results for running selects, and it needs DROP ANY TABLE when run over a table that you do not own.

But also, DELETE is faster in the following test case.

In 12c, you could have over one million partitions in a table, but for the sake of the universe, I’ll try with 10000.


SQL> create table scott.t(x) 
  partition by range(x) 
  interval(1) 
  (partition values less than (0)) 
  as 
  select rownum 
  from dual 
  connect by level<10001;
SQL> select count(*) from scott.t;

  COUNT(*)
----------
     10000

The 10K-row table is created; each row is in its own partition.


SQL> delete scott.t;

10000 rows deleted.

Elapsed: 00:00:04.02
SQL> rollback;

Rollback complete.

Not tuned or parallelized or whatever. It took 4 seconds for 10’000 rows. If you have one billion rows, it is doable in a few hours. But you better do it in chunks then.
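For illustration only (my sketch, not part of the original test case), a chunked delete could look something like the following, assuming intermediate commits are acceptable for your process:

begin
  loop
    delete scott.t where rownum <= 100000;  -- chunk size is arbitrary
    exit when sql%rowcount = 0;
    commit;                                 -- keeps undo usage per chunk small
  end loop;
  commit;
end;
/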

Anyway, let’s truncate


SQL> truncate table scott.t;

Table truncated.

Elapsed: 00:05:19.24

Five minutes !!! to truncate that tiny table.

If you have one million partitions and underlying indexes and lobs, it will probably fail with out-of-memory errors after hours, with a large impact on the dictionary, SYSAUX and undo.

The dictionary changes here are very slow.
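As a quick check (my addition, not from the original post), you can get a feel for how much dictionary metadata an interval-partitioned table like this drags along, which is part of what the truncate has to work through:

select count(*)
from   dba_tab_partitions
where  table_owner = 'SCOTT'
and    table_name  = 'T';

select count(*)
from   dba_objects
where  owner       = 'SCOTT'
and    object_name = 'T'
and    object_type = 'TABLE PARTITION';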

On docker, Ubuntu and Oracle RDBMS

Marcelo Ochoa - Tue, 2015-08-25 19:17
I have had Oracle RDBMS working on Ubuntu for a long time (12.04 and 14.04), with RDBMS versions 10g, 11g and 12c, using some tweaks to get it working.
Apart from the effort to get it working, it sometimes requires Makefile modifications; these configurations are not supported by Oracle, so you cannot report any bugs.
An alternative is to get VirtualBox working and download a pre-built VM, but that requires a lot of hardware resources :(
Happily, Docker comes into action. I followed this great post by Frits Hoogland for installing Oracle Database in Docker.
First we need Docker running on Ubuntu; there are many guides about that, and it can also be installed simply from the apt-get repository, which is what I did.
Following Frits's guide, I changed the default repository destination to use a USB external disk formatted with btrfs:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  7,7G  6,5G  55% /
none            4,0K     0  4,0K   0% /sys/fs/cgroup
udev            3,9G   12K  3,9G   1% /dev
tmpfs           789M  1,4M  788M   1% /run
none            5,0M     0  5,0M   0% /run/lock
none            3,9G  344M  3,6G   9% /run/shm
none            100M   56K  100M   1% /run/user
/dev/sda6       442G  384G   36G  92% /home
/dev/sdb        597G   18G  577G   3% /var/lib/docker

Also I created the directory for the Docker template, but with some changes; here are the files:

root@local:/# cd /var/lib/docker/dockerfiles/build-oracle-12102/
root@local:/var/lib/docker/dockerfiles/build-oracle-12102# ls -l
total 2625148
-rw-r--r-- 1 root root      10976 ago 23 10:07 db_install.dbt
-rw-r--r-- 1 root root      10931 ago 25 16:30 db_install-full.dbt
-rw-r--r-- 1 root root      10972 ago 25 16:30 db_install-simple.dbt
-rw-r--r-- 1 root root       1168 ago 25 11:09 Dockerfile
-rw-r--r-- 1 root root 1673544724 ago 22 20:36 linuxamd64_12102_database_1of2.zip
-rw-r--r-- 1 root root 1014530602 ago 22 20:36 linuxamd64_12102_database_2of2.zip
-rwxr-xr-x 1 root root       1729 ago 25 10:11 manage-oracle.sh
-rw-r--r-- 1 root root      24542 ago 24 20:42 responsefile_oracle12102.rsp

The content of Dockerfile is:

FROM    oraclelinux:6
MAINTAINER marcelo.ochoa@gmail.com
RUN groupadd -g 54321 oinstall
RUN groupadd -g 54322 dba
RUN useradd -m -g oinstall -G oinstall,dba -u 54321 oracle
RUN yum -y install oracle-rdbms-server-12cR1-preinstall perl wget unzip
RUN mkdir /u01
RUN chown oracle:oinstall /u01
USER    oracle
WORKDIR /home/oracle
COPY linuxamd64_12102_database_1of2.zip /home/oracle/
COPY linuxamd64_12102_database_2of2.zip /home/oracle/
COPY responsefile_oracle12102.rsp /home/oracle/
RUN unzip linuxamd64_12102_database_1of2.zip
RUN unzip linuxamd64_12102_database_2of2.zip
RUN rm linuxamd64_12102_database_1of2.zip linuxamd64_12102_database_2of2.zip
RUN /home/oracle/database/runInstaller -silent -force -waitforcompletion -responsefile /home/oracle/responsefile_oracle12102.rsp -ignoresysprereqs -ignoreprereq
USER    root
RUN /u01/app/oraInventory/orainstRoot.sh
RUN /u01/app/oracle/product/12.1.0.2/dbhome_1/root.sh -silent
RUN rm -rf /home/oracle/responsefile_oracle12102.rsp /home/oracle/database
USER    oracle
WORKDIR /home/oracle
RUN     mkdir -p /u01/app/oracle/data
COPY    manage-oracle.sh /home/oracle/
EXPOSE  1521
CMD sh -c /home/oracle/manage-oracle.sh

The remarked lines differ from the original post by Frits in that they do not download everything from the web using wget; instead, I downloaded the 12c binary distribution from the OTN Web Site and copied the two zip files into the directory where the Dockerfile resides. I also downloaded Frits's files responsefile_oracle12102.rsp, manage-oracle.sh and db_install.dbt.

The file responsible for creating/starting/stopping the DB was also modified to use db_install.dbt from the host machine. Here is the modified version of manage-oracle.sh:

#!/bin/bash
PERSISTENT_DATA=/u01/app/oracle/data
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
export ORACLE_SID=$(hostname)
stop_database() {
$ORACLE_HOME/bin/sqlplus / as sysdba << EOF
shutdown abort
exit
EOF
exit
}
start_database() {
$ORACLE_HOME/bin/sqlplus / as sysdba << EOF
startup
exit
EOF
}
create_pfile() {
$ORACLE_HOME/bin/sqlplus -S / as sysdba << EOF
set echo off pages 0 lines 200 feed off head off sqlblanklines off trimspool on trimout on
spool $PERSISTENT_DATA/init_$(hostname).ora
select 'spfile="'||value||'"' from v\$parameter where name = 'spfile';
spool off
exit
EOF
}
trap stop_database SIGTERM
printf "LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$(hostname))(PORT=1521))(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521))))\n" > $ORACLE_HOME/network/admin/listener.ora
$ORACLE_HOME/bin/lsnrctl start
if [ ! -f ${PERSISTENT_DATA}/DATABASE_IS_SETUP ]; then
sed -i "s/{{ db_create_file_dest }}/\/u01\/app\/oracle\/data\/$(hostname)/" $PERSISTENT_DATA/db_install.dbt
sed -i "s/{{ oracle_base }}/\/u01\/app\/oracle/" $PERSISTENT_DATA/db_install.dbt
sed -i "s/{{ database_name }}/$(hostname)/" $PERSISTENT_DATA/db_install.dbt
$ORACLE_HOME/bin/dbca -silent -createdatabase -templatename $PERSISTENT_DATA/db_install.dbt -gdbname $(hostname) -sid $(hostname) -syspassword oracle -systempassword oracle -dbsnmppassword oracle
create_pfile
if [ $? -eq 0 ]; then
touch ${PERSISTENT_DATA}/DATABASE_IS_SETUP
fi
else
mkdir -p /u01/app/oracle/admin/$(hostname)/adump
$ORACLE_HOME/bin/sqlplus / as sysdba << EOF
startup pfile=$PERSISTENT_DATA/init_$(hostname).ora
exit
EOF
fi
tail -f /u01/app/oracle/diag/rdbms/$(hostname)/*/trace/alert_$(hostname).log &
wait

I am doing this so I can quickly create different databases with certain Oracle options enabled or not. For example, the db_install.dbt file for OLS Searching will have these options enabled:

option name="JSERVER" value="true"
option name="SPATIAL" value="true"
option name="IMEDIA" value="true"
option name="XDB_PROTOCOLS" value="true"
option name="ORACLE_TEXT" value="true"
.....

Creating the Docker template is similar to the original post:

# cd /var/lib/docker/dockerfiles/build-oracle-12102
# docker build -t "oracle-12102" .
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
oracle-12102        latest              24687eeab73c        8 hours ago         12.26 GB
oraclelinux         6                   cfc75fa9f295        3 weeks ago         156.2 MB

Finally, to create a full-featured, Java-enabled 12c database, I created a directory with the following content:

# mkdir -p /var/lib/docker/db/ols
# cp /var/lib/docker/dockerfiles/build-oracle-12102/db_install-full.dbt  /var/lib/docker/db/ols/db_install.dbt
# chown -R 54321:54321 /var/lib/docker/db/ols

and executed:

# docker run --ipc=host --volume=/var/lib/docker/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true oracle-12102
25efb5d26aad31e7b06a8e2707af7c25943e2e42ec5c432dc9fa55f0da0bdaef

The container is started and the database creation works perfectly. Here is the output:

# docker logs -f ols
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 25-AUG-2015 16:35:18
Copyright (c) 1991, 2014, Oracle.  All rights reserved.
Starting /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ols/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ols)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ols)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                25-AUG-2015 16:35:19
Uptime                    0 days 0 hr. 0 min. 1 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ols/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ols)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully
Creating and starting Oracle instance
1% complete
.....
25% complete
Adding Oracle JVM
32% complete
....
Adding Oracle Text
50% complete
....
Adding Oracle Multimedia
55% complete
....
Adding Oracle Spatial
69% complete
....
Adding Oracle Application Express
82% complete
87% complete
Completing Database Creation

Next steps: installing Scotas OLS for testing, and a happy Docker/Oracle combination. :)
Update about shared memory usage (31/08)

After installing Scotas OLS, which is a heavy Java application, I found these messages in the .trc file associated with the JIT compiler slave process:
ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "1073741824"
joez: Failed loading machine code: Unable to allocate code space
peshmmap_Create_Memory_Map:
Map_Length = 4096
Map_Protection = 7
Flags = 1
File_Offset = 0
mmap failed with error 1
error message:Operation not permitted

After checking that there was no problem with available memory, I sent an email to my friend Kuassi to get some clarification from the JVM Dev team, and they sent me a quick test that found the solution. The problem is the mount option of the tmpfs file system; by default Docker does:
# docker run --privileged=true -i -t oraclelinux:6 /bin/bash
[root@d1ac66be54fb /]# cat /proc/mounts|grep shm
shm /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=65536k 0 0

Note the noexec flag!! To solve that, it is necessary to start the container with the --privileged=true flag and execute as root:
[root@d1ac66be54fb /]# mount -o remount,exec /dev/shm
warning: can't open /etc/fstab: No such file or directory
[root@d1ac66be54fb /]# cat /proc/mounts|grep shm
shm /dev/shm tmpfs rw,nosuid,nodev,relatime,size=65536k 0 0

And that's all: the "..unable to allocate 4096 bytes.." message disappears and the Oracle internal Java Virtual Machine works perfectly.

Finally, the task now is to modify the Dockerfile and manage-oracle.sh scripts to run as root and to execute the remount operation before starting the Oracle RDBMS. The new Dockerfile basically changes the last lines to start manage-oracle.sh as root:

USER root
RUN /u01/app/oraInventory/orainstRoot.sh
RUN /u01/app/oracle/product/12.1.0.2/dbhome_1/root.sh -silent
RUN rm -rf /home/oracle/responsefile_oracle12102.rsp /home/oracle/database
WORKDIR /home/oracle
RUN mkdir -p /u01/app/oracle/data
RUN chown oracle:oinstall /u01/app/oracle/data
COPY manage-oracle.sh /home/oracle/
EXPOSE 1521
CMD [ "sh" , "-c" ,  "/home/oracle/manage-oracle.sh" ]and manage-oracle.sh executes all Oracle related scripts using su - oracle -c calls, for example:stop_database() {
su - oracle -c "$ORACLE_HOME/bin/sqlplus / as sysdba" << EOF
shutdown abort
exit
EOF
exit
}

Obviously there is a remount operation before starting the RDBMS:

mount -o remount,exec /dev/shm
su - oracle -c "$ORACLE_HOME/bin/lsnrctl start"

Remember to start your docker image using --privileged=true, for example:

docker run --privileged=true --ipc=host --volume=/var/lib/docker/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102

HTH, Marcelo.


Oracle Sales Cloud is a Dynamic Force at Oracle OpenWorld 2015

Linda Fishman Hoyle - Tue, 2015-08-25 16:29

A Guest Post by Michael Richter (pictured left), Director of Product Management, Oracle

OpenWorld is returning to San Francisco–October 25-29, 2015!

Customer Experience (CX) will take over Moscone West 2nd floor this year, and Oracle Sales Cloud is excited to present this year’s lineup of customer experience sessions and demonstrations for sales professionals.

Learn about the newest enhancements or get the latest product demonstrations from the product management and sales consulting teams.

Visit Sales―CX Central@OpenWorld for details. All Oracle Sales Cloud sessions will take place in rooms 2003 or 2004 in Moscone West on the 2nd floor. The Oracle Sales Cloud demo zone is also situated on the 2nd floor. Dates, times, and room numbers will be published at the link above in early September.

What’s New and Different?

  • Release essentials and roadmap sessions for Oracle Sales Cloud and Configure, Price, and Quote
  • Pre-configured industry solution session and product demonstrations
  • Partner-led interactive workshops and hands-on labs
  • Sessions showing new ways of how to migrate or integrate with existing applications

Guest Customer and Partner Appearances

Deloitte Digital, Panasonic Manufacturing (UK), Wilsonart, Accenture, Schneider Electric, Accuride, Config Consultants, DB Schenker, Swiss Post, KEC International, Batesville, Serene, ustudio, Hitachi Consultants, KPN, Oceaneering, TMEIC, TH March, GPUK (Global Pay), Prisio Technologies, Perficient, Infosys, Apex IT, e-Verge, General Electric, Tuff Shed, and more!

Kick-Off with the Sales General Session!

Deloitte Digital teams up with GVP Bill Hunt, Oracle Sales Cloud Development, to kick off the CX-Sales track (GEN4525) at 1:00 – 2:15 p.m. in Room 2003. Hear about Oracle’s product strategy and what Deloitte Digital is doing to meet today’s customer experience challenges.

The Release Essentials and Roadmap Conference Sessions

Oracle Sales Cloud will host more than 30 conference sessions this year. The first in a series of seven Release Essentials and Roadmap sessions begins at 2:45 p.m. on Monday, October 26. These sessions are led by Oracle Sales Cloud product management team members and include customer and partner case studies.

  • Oracle Sales Cloud Release 11 Essentials and Roadmap for Analytics [CON4508]
  • Oracle Sales Cloud Release 11 Essentials and Roadmap for Sales Performance Management [CON7951]
  • Oracle Sales Cloud Release 11 Essentials and Roadmap for PRM: High Tech and Manufacturing [CON4529]
  • Oracle Sales Cloud Release11 Essentials and Roadmap for Configuring and Customizing [CON7952]
  • Oracle Sales Cloud Release 11 Essentials and Roadmap for Outlook and IBM Notes [CON7950]
  • Oracle Sales Cloud Release 11 Essentials and Roadmap for Oracle Customer Data Hub and DaaS [CON7958]
  • Oracle Configure, Price, and Quote Cloud Essentials and Roadmap [CON4548]

Sessions of Special Interest

  • Oracle Sales Cloud: A Temperature Check for Customers by Nucleus Research [CON8191]
  • Oracle Sales Cloud: the Path to the Cloud for Siebel Customers [CON4537]
  • Oracle Configure, Price, and Quote Cloud: Driving Sales Productivity at Schneider Electric [CON8187]
  • Oracle Sales Cloud Hands-on Lab: Oracle Sales Cloud Mobile [HOL8735]
  • Oracle Sales Cloud Hands-on Lab: Easy to Use—Anytime, Anywhere [HOL8665]

Customer Panels

We have scheduled two popular customer panels:

  • Oracle Sales Cloud Analytics [CON9140] with Batesville and Fike
  • Oracle Sales Cloud Customer Panel: the Challenges of Digital Transformation [CON7954] with KEC, Tuff Shed, and TH March

Partner Panels

Hear from the experts at the partner panels.

  • Oracle Sales Cloud: Strategies for Successful CRM Implementations [CON8196] with panelists Config Consultants, Infosys, Apex IT, and e-Verge
  • Oracle Sales Cloud: Delivering Key Capabilities of Modern Selling [CON9660] with Accenture, Perficient, Hitachi Consulting, and Serene

Sales Demo Zone

Take part in Oracle Sales Cloud product demonstrations led by members of the Oracle Sales Cloud product management and sales consulting teams. The Buzz House is located next to the CX-Sales Demo so you can relax with a cup of coffee or snack.

  • New enhancements for core sales force automation and mobile solutions
  • How Oracle ensures data integrity with Customer Data Management
  • The latest developments for analytics
  • Streamlining with Configure, Price, and Quote
  • Oracle Sales Cloud integration with MS Outlook and IBM Notes
  • Incentive Compensation, Territory Management, and Quota Management
  • How to configure and customize and the tools available to you
  • Learn about integrations and migration processes and tools
  • Learn what’s in store for midsize companies

At a Glance

Visit Sales―CX Central@OpenWorld for full details on speakers, conference sessions, exhibits, and entertainment, and experience all that Oracle OpenWorld has to offer. Dates, times, and room numbers will be published at the link above in early September.

Latest Oracle Service Cloud Product Release Powers Impactful Community Self-Service Experiences

Linda Fishman Hoyle - Tue, 2015-08-25 16:20

A Guest Post by David Vap (pictured left), Group Vice President, Product Development, Oracle

Today more than one in three customers prefers to contact brands through social channels rather than by phone or email (Nielsen), and the distinction between social and traditional channels is eroding. To deliver the best possible customer experience across traditional and digital channels, customer care organizations need to provide a positive and unified experience where and when customers want, whether they are on Twitter, Facebook, peer-to-peer communities, or other social networks.

Following Oracle’s recent Twitter-enriched social customer service announcement, the latest release of Oracle Service Cloud and Oracle Social Cloud continues to power positive and differentiated customer experiences. The new functionality includes:

New Community Self-Service solution to help streamline the customer journey

  • New approach to web self-service brings community functionality directly into core Service Cloud multi-channel web experience
  • Service Cloud now enables organizations to deliver a seamless experience between web self-service and community interactions, leveraging power of customer knowledge to improve service operations
  • A customer no longer needs to separately navigate self-service and community sites to find an answer, instead discovering and interacting with both formal knowledge (knowledge base) and informal knowledge (community answers) in a single experience

Enhanced social service and incident routing

  • New workflow capabilities between Social Cloud and Service Cloud enable businesses to leverage power of social insights and engagements
  • Business users can now attach contextual attributes and notes from posts or incidents identified by Social Cloud directly to Service Cloud to improve service quality and efficiency by providing more customer information and context

Extended social listening and analytics capabilities to private data sources

  • Enhanced connectivity between Social Cloud and Service Cloud has also extended social listening and analytics to enterprise private-data sources, such as the new community self-service capability, survey data, and chat and call logs.
  • Organizations can now listen and analyze unstructured data and gain insights with terms, themes, sentiment, and customer metrics, and view private and public data side by side in the Oracle SRM.

According to Gartner, investment in peer-to-peer communities drives support costs down and boosts profits. In fact, in a December 2014 Gartner research note entitled “Nine CRM Projects to Do Right Now for Customer Service,” Michael Maoz, Vice President, Distinguished Analyst, Gartner, writes, “Gartner clients who are successful in this space are still seeing on average of 20% reduction in the creation of support tickets following the introduction of peer-to-peer communities.” Maoz goes on to say, “Clients are seeing other business benefits as well. By enabling community-based support, clients have been able to recognize new sales opportunities and increase existing customer satisfaction, resulting in increased revenue in several of these cases.”

For more information about this leading social customer service product, read the news release and check out the VentureBeat profile!

Truncate – 2

Jonathan Lewis - Tue, 2015-08-25 11:25

Following on from my earlier comments about how a truncate works in Oracle, the second oldest question about truncate (and other DDL) appeared on the OTN database forum: “Why isn’t a commit required for DDL?”

Sometimes the answer to “Why” is simply “that’s just the way it is” – and that’s what it is in this case, I think.  There may have been some historic reason why Oracle Corp. implemented DDL the way they did (commit any existing transaction the session is running, then auto-commit when complete), but once the code has been around for a few years – and accumulated lots of variations – it can be very difficult to change a historic decision, no matter how silly it may now seem.

This posting isn’t about answering the question “why”, though; it’s about a little script I wrote in 2003 in response to a complaint from someone who wanted to truncate a table in the middle of a transaction without committing the transaction. Don’t ask why – you really shouldn’t be executing DDL as part of a transactional process (though a task like dropping and recreating indexes as part of a batch process is a reasonable strategy).

So if DDL always commits the current transaction, how do you truncate a table without committing? Easy – use an autonomous transaction. First a couple of tables with a little data, then a little procedure to do my truncate:


create table t1 (n1 number);
insert into t1 values(1);

create table t2 (n1 number);
insert into t2 values(1);

create or replace procedure truncate_t1
as
        pragma autonomous_transaction;
begin
        execute immediate 'truncate table t1';
end;
/

Then the code to demonstrate the effect:


prompt  ======================================
prompt  In this example we end up with no rows
prompt  in t1 and only the original row in t2,
prompt  the truncate didn't commit the insert.
prompt  ======================================

insert into t2 values(2);

execute truncate_t1;
rollback;

select * from t1;
select * from t2;


According to my notes, the last time I ran this code was on 9.2.0.3 but I’ve just tested it on 12.1.0.2 and it behaves in exactly the same way.

I’ve only tested the approach with “truncate” and “create table” apparently, and I haven’t made any attempt to see if it’s possible to cause major disruption with cunningly timed concurrent activity; but if you want to experiment you have a mechanism which Oracle could have used to avoid committing the current transaction – and you may be able to find out why it doesn’t, and why DDL is best “auto-committed”.
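As a sketch of the other case mentioned (my code, not from the original post, and assuming CREATE TABLE has been granted directly to the test schema), the same trick applies to a “create table”:

create or replace procedure create_t3
as
        pragma autonomous_transaction;
begin
        execute immediate 'create table t3 (n1 number)';
end;
/

insert into t2 values(3);
execute create_t3
rollback;

-- t3 now exists, but the insert of 3 into t2 has been rolled back:
-- the DDL committed only the autonomous transaction, not the outer insert.
select * from t2;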


Autonomous transaction to the rescue

Patrick Barel - Tue, 2015-08-25 10:10

Today, at my current project, I came across an issue where autonomous transactions came in handy.

The situation: I need to create a query to perform an export. A couple of the fields to be selected come from a global temporary table. Nothing fancy so far, except that this global temporary table is filled by a (rather complex) procedure. Another problem is that the table is emptied for every row, i.e. it will contain only one row at a time. ‘Just build a wrapper table function for this procedure and have that function call the procedure’ was my first idea.

I created a script that shows the situation

CREATE GLOBAL TEMPORARY TABLE empdate
(
  empno NUMBER(4)
, hiredate DATE
)
ON COMMIT DELETE ROWS
/
CREATE OR REPLACE PROCEDURE getthehiredate(empno_in IN NUMBER) IS
BEGIN
  DELETE FROM empdate;
  INSERT INTO empdate
    (empno
    ,hiredate)
    (SELECT empno
           ,hiredate
       FROM emp
      WHERE empno = empno_in);
END getthehiredate;
/

Then I set out to build a pipelined table function that accepts a cursor as one of its parameters. This function loops over all the values in the cursor, calls the procedure, reads the data from the global temporary table and pipes out the resulting record; nothing really fancy so far.

CREATE TYPE empdate_t AS OBJECT
(
  empno    NUMBER(4),
  hiredate DATE
)
/
CREATE TYPE empdate_tab IS TABLE OF empdate_t
/
CREATE OR REPLACE FUNCTION getallhiredates(empnos_in IN SYS_REFCURSOR) RETURN empdate_tab
  PIPELINED IS
  l_empno       NUMBER(4);
  l_returnvalue empdate_t;
BEGIN
  FETCH empnos_in
    INTO l_empno;
  WHILE empnos_in%FOUND LOOP
    getthehiredate(empno_in => l_empno);
    SELECT empdate_t(ed.empno, ed.hiredate)
      INTO l_returnvalue
      FROM empdate ed
     WHERE 1 = 1
       AND ed.empno = l_empno;
    PIPE ROW(l_returnvalue);
    FETCH empnos_in
      INTO l_empno;
  END LOOP;
  RETURN;
END getallhiredates;
/

But when I ran a query against this function:

SELECT *
FROM TABLE(getallhiredates(CURSOR (SELECT empno
FROM emp)))
/

I ran into an error:

ORA-14551: cannot perform a DML operation inside a query 

So, all the work I had done so far had been for nothing? Time wasted? I don’t think so. If there is anything I have learned over the years, it is that Oracle tries to stop you from doing certain things, but at the same time supplies you with the tools to create a work-around.

There is such a thing as an autonomous transaction that might help me in this case, so I changed the code of the function a bit:


CREATE OR REPLACE FUNCTION getallhiredates(empnos_in IN SYS_REFCURSOR) RETURN empdate_tab
  PIPELINED IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_empno       NUMBER(4);
  l_returnvalue empdate_t;
BEGIN
  FETCH empnos_in
    INTO l_empno;
  WHILE empnos_in%FOUND LOOP
    getthehiredate(empno_in => l_empno);
    SELECT empdate_t(ed.empno, ed.hiredate)
      INTO l_returnvalue
      FROM empdate ed
     WHERE 1 = 1
       AND ed.empno = l_empno;
    PIPE ROW(l_returnvalue);
    FETCH empnos_in
      INTO l_empno;
  END LOOP;
  COMMIT;
  RETURN;
END getallhiredates;
/

But when I ran the query:

SELECT *
FROM TABLE(getallhiredates(CURSOR (SELECT empno
FROM emp)))
/

I ran into a different error:

ORA-06519: active autonomous transaction detected and rolled back

So this doesn’t work, or does it? Pipelined table functions ‘exit’ the function multiple times: whenever a row is piped out. So I tried to put the COMMIT just before the PIPE ROW command:


CREATE OR REPLACE FUNCTION getallhiredates(empnos_in IN SYS_REFCURSOR) RETURN empdate_tab
  PIPELINED IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_empno       NUMBER(4);
  l_returnvalue empdate_t;
BEGIN
  FETCH empnos_in
    INTO l_empno;
  WHILE empnos_in%FOUND LOOP
    getthehiredate(empno_in => l_empno);
    SELECT empdate_t(ed.empno, ed.hiredate)
      INTO l_returnvalue
      FROM empdate ed
     WHERE 1 = 1
       AND ed.empno = l_empno;
    COMMIT;
    PIPE ROW(l_returnvalue);
    FETCH empnos_in
      INTO l_empno;
  END LOOP;
  RETURN;
END getallhiredates;
/

And when I ran my statement again:

SELECT *
FROM TABLE(getallhiredates(CURSOR (SELECT empno
FROM emp)))
/

It worked as I had hoped.

As you can see, I have tried to mimic the situation using the EMP and DEPT tables. I think this is a nice little trick, but it should be used with caution. It is not without reason that Oracle prevents you from running DML inside a query, but in this case I can bypass this restriction.



Autonomous transaction to the rescue

Bar Solutions - Tue, 2015-08-25 10:10

Fedora 22/23 and Oracle 11gR2/12cR1

Tim Hall - Tue, 2015-08-25 08:53

As always, installations of Oracle server products on Fedora are not a great idea, as explained here.

I was reading some stuff about the Fedora 23 Alpha and realised Fedora 22 had passed me by. Not sure how I missed that. :)

Anyway, I did a run through of the usual play stuff.

While I was at it, I thought I would get the heads-up on Fedora 23 Alpha.

The F23 stuff will have to be revised once the final version is out, but I’m less likely to forget now. :)

I guess the only change in F22 upward that really affects me is the deprecation of YUM in F22 in favour of the DNF fork. For the most part, you just switch the command.

#This:
yum install my-package -y
yum groupinstall my-package-group -y
yum update -y

#Becomes:
dnf install my-package -y
dnf groupinstall  my-package-group -y
dnf group install  my-package-group -y
dnf update -y

This did cause one really annoying problem in F23 though. The “MATE Desktop” had a single documentation package that was causing a problem. Usually I would use the following.

yum groupinstall "MATE Desktop" -y --skip-broken

Unfortunately, DNF doesn’t support “--skip-broken”, so I was left to either manually install the pieces, or give up. I chose the latter and used LXDE instead. :) F23 is an Alpha, so you expect issues, but DNF has been in since F22 and still no “--skip-broken”, which I find myself using a lot. Pity.

Cheers

Tim…

Fedora 22/23 and Oracle 11gR2/12cR1 was first posted on August 25, 2015 at 3:53 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Yet another CSV -> Table but with pipeline function

Kris Rice - Tue, 2015-08-25 08:11
Here's just one more variation on how to get a CSV into a table format.  It could have been done before but my google-fu couldn't find it anywhere. First to get some sample data using the /*csv*/ hint in sqldev. Then the results of putting it back to a table. The inline plsql is just to convert the text into a CLOB. Now the details. The csv parsing is completely borrowed(stolen) from