
Feed aggregator

Oracle Cloud : First Impressions

Tim Hall - Fri, 2015-08-28 07:55

Followers of the blog will know I’ve been waiting to get access to the Oracle Cloud for a while. Well, I’ve finally got access to a bit of it. Specifically, the “Oracle Database Cloud Service” (DBaaS) part. :)

The Schema Service has been around for a few years and I had already tried that out, but IMHO it’s not really part of Oracle Cloud proper*, so I was reserving my judgement until I got the real stuff. :)

I’ve written a couple of articles already. Just basic stuff documenting setup, connections etc.

So here are some first impressions…

Oracle Cloud : Look and Feel

Overall the cloud offering looks clean and modern. Tastes vary of course, but I like the look of it.

The navigation is a bit inconsistent between the different cloud services. It feels like the console for each section (Compute, Java, DBaaS etc.) has been written by a different team, each doing what they think works, rather than working to a single design standard. Here’s a couple of examples:

  • In the “Oracle Database Cloud Service” section there is a “Consoles” button on the top-right of the screen that triggers a popup menu allowing you to switch to the Dashboard, Java Cloud and Compute Cloud console. In the “Oracle Compute Cloud” section, the “Consoles” button is not present. Instead there is a hamburger on the top-left of the screen that causes a navigation panel to slide out on the left of the screen, pushing the rest of the page contents to the right. On the top-level services page, the same hamburger produces a popup menu, kind-of like the “Consoles” button, but with the colouring of the navigation panel. I don’t find any method better or worse than the others. It would just be nice if they picked one and stuck with it, otherwise you are looking round the screen trying to decide how to make your next move. :)
  • Some consoles use tabs. Some use navigation tiles. Some use both.

Don’t get me wrong, it’s not hard to navigate. It’s just inconsistent, which kind-of ruins the overall effect. If they can bring it all into line I think it will be really cool.

I think Oracle Cloud looks neater than Amazon Web Services, but the navigation is not as consistent as AWS or Azure. Having used AWS, Azure and Oracle Cloud, I feel Azure has the neatest and most consistent interface. Like I said before, tastes vary. :)

Probably my biggest issue with the Oracle Cloud interface is the speed, or lack thereof. It’s really slow and unresponsive at times. On a few occasions I thought it had died, then after about 30 seconds the screen just popped back into life. Some of the actions give no feedback until they are complete, so you don’t know if you’ve pressed the button or not.

Oracle Cloud : Ease of Use

I found DBaaS pretty simple to use. I’ve already spent some time using AWS and Azure, so there is probably some carry-over there. I pretty much completed my first pass through creation, connections and patching before I even considered looking for documentation. :)

The documentation is OK, but contains very few screen shots, which leads me to believe the look and feel is all in a state of flux.

I think the general Oracle Compute Cloud Service network/firewall setup is really quite clear, but you can’t edit existing rules. Once a rule is created you can only enable, disable or delete it. I found myself having to delete and create rules a number of times when it felt more obvious to let me edit an existing rule. I’ll mention a DBaaS issue related to this later.

DBaaS Specifically

Just some general observations about the DBaaS offering.

  • The “Oracle Database Cloud Service” DBaaS offering looks OK, but I noticed they don’t have multiplexed redo logs. I never run without multiplexed redo logs, regardless of the redundancy on the storage layer. Even if they were all shoved in the same directory, it would still be better than running without multiplexed files. This is a bit of mandatory configuration the user is left to do after the fact.
  • The DBaaS virtual machine has Glassfish and ORDS installed on it, which is necessary because of the way they have organised the administration of the service, but it’s not something I would normally recommend. Databases and App Servers never go on the same box. Like I said, I understand why, but I don’t like it.
  • The management of the DBaaS offering feels fragmented. For some administration tasks you use the main cloud interface. For others you jump across to the DBaaS Monitor, which has a completely different look and feel. For others you jump across to [DBConsole – 11g | DB Express – 12c]. For a DBaaS offering, I think this is a mistake. It should all be incorporated into the central console and feel seamless. I understand that may be a pain and a repetition of existing functionality, but it feels wrong without it.
  • I found the network/firewall setup done by the DBaaS service to be quite irritating. It creates a bunch of rules for each DBaaS service, which are all disabled by default (a good thing), but all the rules are “public”, which you would be pretty crazy to enable. Because you can’t edit them, they end up being pretty much useless. It really is one of those, “Do it properly or don’t bother!”, issues to me. If the DBaaS setup screens asked you to define a Security IP List, or pick an existing one, and decide which services you wanted to make available, it could build all these predefined rules properly in the first place. Alternatively, provide a DBaaS network setup wizard or just don’t bother. It feels so half-baked. :(
  • Dealing with the last two points collectively, the fragmentation of the management interface means some of the management functionality (DBaaS Monitor and [DBConsole – 11g | DB Express -12c]) is not available until you open the firewall for it. This kind-of highlights my point about the fragmentation. I’m logged into the DBaaS console where I can create and delete the whole service, but I can’t use some of the management features. It just feels wrong to me. It is totally down to the implementation choices. I would not have chosen this path.
  • Unlike the AWS RDS for Oracle, you get complete access to the OS and database. You even get sudo access to run root commands. At first I thought this was going to be a good thing and a nice differentiator compared to RDS, but having used the service I’m starting to think it is a bad move. The whole point of a DBaaS offering is it hides some of the nuts and bolts from you. I should not be worrying about the OS. I should not be worrying about the basic Oracle setup. Giving this level of access raises more questions/problems than it solves. I feel I should either do everything myself, or pick a DBaaS offering, accept the restrictions of it, and have it all done for me. The current offering feels like it has not decided what it wants to be yet.
  • When I patched the database through the service admin console it worked fine, but it took a “really” long time! I waited quite a while, went out to the gym and it was still going when I came back. Eventually I started an SSH session to try and find out what was happening. It turns out it took over 2 hours to “download” the PSU to the VM. Once the download was complete, the application of the patch was done quickly. Interesting.
  • The “Oracle Database Cloud Service – Virtual Image” option seems pretty pointless to me. On the website and console it says there is a software installation present, but this is not the case. Instead, there is a tarball containing the software (/scratch/db12102_bits.tar.gz). It also doesn’t come with the storage to do the actual installation on, or to hold the datafiles. To do the installation, you would need to “Up Scale” the service to add the storage, then do the installation manually. This process is actually more complicated than provisioning a compute node and doing everything yourself. I think Oracle need to ditch this option and just stick with DBaaS or Compute, like Amazon have done (RDS or EC2).
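On the multiplexing point above, closing the gap by hand is straightforward. As a minimal sketch (the group numbers and file paths are my assumptions, not anything the service sets up), the statements below could be generated and then run in SQL*Plus as a DBA:

```shell
# Generate ALTER DATABASE statements to add a second member to each
# redo log group. Group numbers (1-3) and the /u02 path are assumptions;
# adjust them to match the actual instance before running in SQL*Plus.
for grp in 1 2 3; do
  printf "ALTER DATABASE ADD LOGFILE MEMBER '/u02/app/oracle/redo/redo0%db.log' TO GROUP %d;\n" "$grp" "$grp"
done
```

Even with all members dumped in one directory, as noted above, that is still better than running with a single member per group.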
Conclusion

I like the Oracle Cloud more than I thought I would. I think it looks quite nice and if someone told me I had to use it as a general Infrastructure as a Service (IaaS) portal I would be fine with that.

I like the DBaaS offering less than I hoped I would. I feel quite bad about saying it, but it feels like a work in progress and not something I would want to use at this point. If it were my decision, I would be pushing the DBaaS offering more in the direction of AWS RDS for Oracle. As I said before, the current DBaaS offering feels like it has not decided what it wants to be yet. It needs to be much more hands-off, with a more consistent, centralized interface.

I don’t have full access to the straight Compute Cloud yet, so I can’t try provisioning a VM and doing everything myself. If I get access I will try it, but I would expect it to be the same as what I’ve done for EC2 and Azure. A VM is a VM… :)

When I read this back it sounds kind-of negative, but I think all the things I’ve mentioned could be “fixed” relatively quickly. Also, this is only one person’s opinion on one specific service. The haters need to try this for themselves before they hate. :)

Cheers

Tim…

* Just to clarify, I am not saying the Schema Service isn’t “Cloud” and I’m not saying it doesn’t work. I’m just saying I don’t see this as part of Oracle’s grand cloud database vision. It always seemed like a cynical push to market to allow them to say, “we have a cloud DB”. If it had been branded “APEX Service” I might have a different opinion. It is, after all, a paid-for version of apex.oracle.com. This is a very different proposition to promoting it as a “Cloud Database”.

Oracle Cloud : First Impressions was first posted on August 28, 2015 at 2:55 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Creating User Schema Table and Projections in Vertica

Pakistan's First Oracle Blog - Fri, 2015-08-28 01:25
Vertica is an exciting database with some real nifty features. Projections are a ground-breaking, unique feature of Vertica which brings dramatic performance benefits for querying and space benefits through compression.



The following test commands are from an impromptu session in which a user is created, then a schema is created, and that user is authorized on that schema. Then a table is created with a default superprojection, a projection is created, and we see its usage.

Create a new Vertica database user, create a schema and authorize that user on the schema. Create a 4-column table and insert data.

select user_name from v_catalog.users;

vtest=> create user mytest identified by 'user123';
CREATE USER
vtest=>

vtest=> \du
      List of users
 User name | Is Superuser
-----------+--------------
 dbadmin   | t
 mytest    | f
(2 rows)

vtest=> \dn
         List of schemas
     Name     |  Owner  | Comment
--------------+---------+---------
 v_internal   | dbadmin |
 v_catalog    | dbadmin |
 v_monitor    | dbadmin |
 public       | dbadmin |
 TxtIndex     | dbadmin |
 store        | dbadmin |
 online_sales | dbadmin |
(7 rows)


vtest=> \q
[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U mytest -w user123 -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit


vtest=> create table testtab (col1 integer,col2 integer, col3 varchar2(78), col4 varchar2(90));
ROLLBACK 4367:  Permission denied for schema public

[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U dbadmin -w vtest -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit

vtest=> \du
      List of users
 User name | Is Superuser
-----------+--------------
 dbadmin   | t
 mytest    | f
(2 rows)

vtest=> create schema mytest authorization mytest;
CREATE SCHEMA
vtest=> select current_user();
 current_user
--------------
 dbadmin
(1 row)

vtest=>

vtest=> \q
[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U mytest -w user123 -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit

vtest=> create table testtab (col1 integer,col2 integer, col3 varchar2(78), col4 varchar2(90));
CREATE TABLE
vtest=> select current_user();
 current_user
--------------
 mytest
(1 row)

vtest=>

vtest=> \dt
               List of tables
 Schema |  Name   | Kind  | Owner  | Comment
--------+---------+-------+--------+---------
 mytest | testtab | table | mytest |
(1 row)

vtest=> insert into testtab values (1,2,'test1','test2');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (2,2,'test2','test3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (3,2,'test2','test3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> commit;
COMMIT
vtest=>


Create a projection on 2 columns.

Superprojection exists already:

vtest=> select anchor_table_name,projection_name,is_super_projection from projections;
 anchor_table_name | projection_name | is_super_projection
-------------------+-----------------+---------------------
 testtab           | testtab_super   | t
(1 row)

vtest=>


vtest=> \d testtab
                                    List of Fields by Tables
 Schema |  Table  | Column |    Type     | Size | Default | Not Null | Primary Key | Foreign Key
--------+---------+--------+-------------+------+---------+----------+-------------+-------------
 mytest | testtab | col1   | int         |    8 |         | f        | f           |
 mytest | testtab | col2   | int         |    8 |         | f        | f           |
 mytest | testtab | col3   | varchar(78) |   78 |         | f        | f           |
 mytest | testtab | col4   | varchar(90) |   90 |         | f        | f           |
(4 rows)

vtest=>
vtest=> create projection ptest (col1,col2) as select col1,col2 from testtab;
WARNING 4468:  Projection is not available for query processing. Execute the select start_refresh() function to copy data into this projection.
          The projection must have a sufficient number of buddy projections and all nodes must be up before starting a refresh
CREATE PROJECTION
vtest=>


vtest=> select anchor_table_name,projection_name,is_super_projection from projections;
 anchor_table_name | projection_name | is_super_projection
-------------------+-----------------+---------------------
 testtab           | testtab_super   | t
 testtab           | ptest           | f
(2 rows)


vtest=> select * from ptest;
ERROR 3586:  Insufficient projections to answer query
DETAIL:  No projections eligible to answer query
HINT:  Projection ptest not used in the plan because the projection is not up to date.
vtest=>

vtest=> select start_refresh();
             start_refresh
----------------------------------------
 Starting refresh background process.

(1 row)

vtest=> select * from ptest;
 col1 | col2
------+------
    1 |    2
    2 |    2
    3 |    2
    4 |    2
    4 |    2
    4 |    2
    4 |    2
(7 rows)

vtest=>


 projection_basename | USED/UNUSED |           last_used
---------------------+-------------+-------------------------------
 testtab             | UNUSED      | 1970-01-01 00:00:00-05
 ptest               | USED        | 2015-08-28 07:14:49.877814-04
(2 rows)

vtest=> select * from testtab;
 col1 | col2 | col3  |  col4
------+------+-------+--------
    1 |    2 | test1 | test2
    3 |    2 | test2 | test3
    2 |    2 | test2 | test3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
(7 rows)

projection_basename | USED/UNUSED |           last_used
---------------------+-------------+-------------------------------
 ptest               | USED        | 2015-08-28 07:14:49.877814-04
 testtab             | USED        | 2015-08-28 07:16:10.155434-04
(2 rows)
Categories: DBA Blogs

List of acquisitions by Microsoft: a data journey

Nilesh Jethwa - Thu, 2015-08-27 21:21

If we look into the SEC data for Microsoft and other tech companies, Microsoft spends the most on Research and Development [by dollar]


Read more at: http://www.infocaptor.com/dashboard/list-of-acquisitions-by-microsoft-a-data-journey

Integrating Telstra Public SMS API into Bluemix

Pas Apicella - Thu, 2015-08-27 20:16
In the post below I will show how I integrated the Telstra public SMS API into my Bluemix catalog to be consumed as a service. This was all done from public Bluemix using the Cloud Integration Service.

Step 1 - Create a T.DEV account

In order to get started you need to create an account on http://dev.telstra.com in order to be granted access to the SMS API. Once access is granted you need to create an application which enables you to add/manage Telstra API keys as shown below.

1.1. Create an account at http://dev.telstra.com

1.2. Once done you should have something as follows, which can take up to 24 hours to be approved, as shown by the "approved" icon


Step 2 - Test the SMS Telstra API

At this point we want to test the Telstra SMS API using a script; this ensures it's working before we proceed to integrating it into Bluemix.

2.1. Create a script called setup.sh as follows

#Obtain these keys from the Telstra Developer Portal
APP_KEY="yyyy-key"
APP_SECRET="yyyy-secret"

curl "https://api.telstra.com/v1/oauth/token?client_id=$APP_KEY&client_secret=$APP_SECRET&grant_type=client_credentials&scope=SMS"

2.2. Edit the script above to use your APP_KEY and APP_SECRET values from the Telstra Developer Portal

2.3. Run as shown below


pas@Pass-MBP:~/ibm/customers/telstra/telstra-apis/test$ ./setup.sh
{ "access_token": "xadMkPqSAE0VG6pSGEi6rHA5vqYi", "expires_in": "3599" }

2.4. Make a note of the token key returned; you will need this to send an SMS message
2.5. Create a script called "sendsms.sh" as shown below.


# * Recipient number should be in the format of "04xxxxxxxx" where x is a digit
# * Authorization header value should be in the format of "Bearer xxx" where xxx is access token returned
# from a previous GET https://api.telstra.com/v1/oauth/token request.
RECIPIENT_NUMBER=0411151350
TOKEN=token-key

curl -H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d "{\"to\":\"$RECIPIENT_NUMBER\", \"body\":\"Hello, pas sent this message from telstra SMS api!\"}" \
"https://api.telstra.com/v1/sms/messages"

2.6. Replace the token key with what was returned at step 2.4 above
2.7. Replace the RECIPIENT_NUMBER with your own mobile number to test the SMS API.
2.8. Run as shown below.


pas@Pass-MBP:~/ibm/customers/telstra/telstra-apis/test$ ./sendsms.sh
{"messageId":"1370CAB677B59C226705337B95945CD6"}
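As a small convenience, the manual copying in steps 2.4 and 2.6 can be scripted. This is just a sketch that assumes the OAuth response keeps the flat JSON shape shown in step 2.3; for anything more complex, a proper JSON parser (e.g. jq) would be safer:

```shell
# Pull access_token out of the OAuth response with sed.
# RESPONSE here is the sample output from step 2.3; in a real script it
# would come from the curl call in setup.sh.
RESPONSE='{ "access_token": "xadMkPqSAE0VG6pSGEi6rHA5vqYi", "expires_in": "3599" }'
TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"access_token": *"\([^"]*\)".*/\1/p')
echo "$TOKEN"   # the token to paste into sendsms.sh
```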
Step 3 - Creating a REST based service to Call Telstra SMS API

At this point we can now integrate the Telstra SMS API into Bluemix. To do that I created a simple Spring Boot application which exposes a RESTful method to call the Telstra SMS API using Spring's RestTemplate class. I do this as there are two calls you need to make to use the Telstra SMS API: a REST-based call to get an ACCESS_TOKEN, followed by a call to actually send an SMS message. Creating a Spring Boot application to achieve this allows me to wrap that into one single call, making it easy to consume and add to the Bluemix catalog as a service.

More information on the Cloud Integration service can be found here. Cloud Integration allows us to expose RESTful methods from Bluemix applications onto the catalog via one simple screen. We could also use the Bluemix API Management service as well.

https://www.ng.bluemix.net/docs/services/CloudIntegration/index.html

Below shows the application being pushed into Bluemix which will then be used to add Telstra SMS API service into the Bluemix catalog.



pas@192-168-1-4:~/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSAPIDemo$ cf push
Using manifest file /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSAPIDemo/manifest.yml

Creating app pas-telstrasmsapi in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Using route pas-telstrasmsapi.mybluemix.net
Binding pas-telstrasmsapi.mybluemix.net to pas-telstrasmsapi...
OK

Uploading pas-telstrasmsapi...
Uploading app files from: /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSAPIDemo/target/TelstraSMSAPI-1.0-SNAPSHOT.jar
Uploading 752.3K, 98 files
Done uploading
OK

Starting app pas-telstrasmsapi in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
-----> Downloaded app package (15M)
-----> Liberty Buildpack Version: v1.19.1-20150622-1509
-----> Retrieving IBM 1.8.0_20150617 JRE (ibm-java-jre-8.0-1.0-pxa6480sr1ifx-20150617_03-cloud.tgz) ... (0.0s)
         Expanding JRE to .java ... (1.4s)
-----> Retrieving App Management 1.5.0_20150608-1243 (app-mgmt_v1.5-20150608-1243.zip) ... (0.0s)
         Expanding App Management to .app-management (0.9s)
-----> Downloading Auto Reconfiguration 1.7.0_RELEASE from https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.7.0_RELEASE.jar (0.1s)
-----> Liberty buildpack is done creating the droplet

-----> Uploading droplet (90M)

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-telstrasmsapi was started using this command `$PWD/.java/jre/bin/java -Xtune:virtualized -Xmx384M -Xdump:none -Xdump:heap:defaults:file=./../dumps/heapdump.%Y%m%d.%H%M%S.%pid.%seq.phd -Xdump:java:defaults:file=./../dumps/javacore.%Y%m%d.%H%M%S.%pid.%seq.txt -Xdump:snap:defaults:file=./../dumps/Snap.%Y%m%d.%H%M%S.%pid.%seq.trc -Xdump:heap+java+snap:events=user -Xdump:tool:events=systhrow,filter=java/lang/OutOfMemoryError,request=serial+exclusive,exec=./.buildpack-diagnostics/killjava.sh $JVM_ARGS org.springframework.boot.loader.JarLauncher --server.port=$PORT`

Showing health and status for app pas-telstrasmsapi in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-telstrasmsapi.mybluemix.net
last uploaded: Fri Jul 17 11:26:58 UTC 2015

     state     since                    cpu    memory           disk           details
#0   running   2015-07-17 09:28:28 PM   1.0%   150.6M of 512M   148.9M of 1G
Step 4 - Add the RESTful method to IBM Bluemix catalog to invoke the Telstra SMS API

4.1 To expose our RESTful method we simply define the end point using the Cloud Integration service as shown below.


The image showing the Cloud Integration service with the Telstra API exposed and available to be consumed as a Service on Bluemix.





This was created using the Bluemix Dashboard but can also be done using the Cloud Foundry command line "cf create-service ..."

Step 5 - Create a Application client which will invoke the Telstra SMS service

At this point we are going to push a client application onto Bluemix which consumes the Telstra SMS API service and then uses it within the application. We do this to verify the service works, creating a simple HTML-based application which invokes the service. The client has a manifest.yml file indicating it wants to consume the service which is now exposed on the catalog within Bluemix, as per above.

5.1. The manifest.yml consumes the service created from the API in the catalog


applications:
- name: pas-telstrasmsapi-client
  memory: 512M
  instances: 1
  host: pas-telstrasmsapi-client
  domain: mybluemix.net
  path: ./target/TelstraSMSApiClient-0.0.1-SNAPSHOT.jar
  env:
   JBP_CONFIG_IBMJDK: "version: 1.8.+"
  services:
    - TelstraSMS-service
5.2. Push the application as shown below.


pas@Pass-MacBook-Pro:~/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSApiClient$ cf push
Using manifest file /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSApiClient/manifest.yml

Updating app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Using route pas-telstrasmsapi-client.mybluemix.net
Uploading pas-telstrasmsapi-client...
Uploading app files from: /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSApiClient/target/TelstraSMSApiClient-0.0.1-SNAPSHOT.jar
Uploading 806.8K, 121 files
Done uploading
OK
Binding service TelstraSMS-service to app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Stopping app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Starting app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
-----> Downloaded app package (16M)
-----> Downloaded app buildpack cache (1.2M)
-----> Liberty Buildpack Version: v1.19.1-20150622-1509
-----> Retrieving IBM 1.8.0_20150617 JRE (ibm-java-jre-8.0-1.0-pxa6480sr1ifx-20150617_03-cloud.tgz) ... (0.0s)
         Expanding JRE to .java ... (1.5s)
-----> Retrieving App Management 1.5.0_20150608-1243 (app-mgmt_v1.5-20150608-1243.zip) ... (0.0s)
         Expanding App Management to .app-management (0.9s)
-----> Downloading Auto Reconfiguration 1.7.0_RELEASE from https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.7.0_RELEASE.jar (0.0s)
-----> Liberty buildpack is done creating the droplet

-----> Uploading droplet (90M)

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-telstrasmsapi-client was started using this command `$PWD/.java/jre/bin/java -Xtune:virtualized -Xmx384M -Xdump:none -Xdump:heap:defaults:file=./../dumps/heapdump.%Y%m%d.%H%M%S.%pid.%seq.phd -Xdump:java:defaults:file=./../dumps/javacore.%Y%m%d.%H%M%S.%pid.%seq.txt -Xdump:snap:defaults:file=./../dumps/Snap.%Y%m%d.%H%M%S.%pid.%seq.trc -Xdump:heap+java+snap:events=user -Xdump:tool:events=systhrow,filter=java/lang/OutOfMemoryError,request=serial+exclusive,exec=./.buildpack-diagnostics/killjava.sh $JVM_ARGS org.springframework.boot.loader.JarLauncher --server.port=$PORT`

Showing health and status for app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-telstrasmsapi-client.mybluemix.net
last uploaded: Sun Jul 19 15:22:26 UTC 2015

     state     since                    cpu    memory           disk           details
#0   running   2015-07-19 11:23:57 PM   0.8%   144.9M of 512M   149.8M of 1G

Step 6 - Send SMS using Telstra SMS API from Bluemix Application using the Service

6.1. Navigate to the URL below and send an SMS using the form.

http://pas-telstrasmsapi-client.mybluemix.net/


6.2. Verify it has sent a message to the phone number entered in the text field as shown below.



More Information 

Getting started with Bluemix is easy; navigate to http://bluemix.net to sign up and get going.
Categories: Fusion Middleware

Oracle Priority Support Infogram for 27-AUG-2015

Oracle Infogram - Thu, 2015-08-27 14:57

RDBMS
Understanding new In-Memory notes in an execution plan, from Oracle Database In-Memory
Migration IBM AIX ==> SPARC Solaris with Data Guard, from Upgrade your Database - NOW!
Tips on SQL Plan Management and Oracle Database In-Memory - Part 2, from the Oracle Optimizer blog.
Coding Oracle
Yet another CSV -> Table but with pipeline function, from Kris’ Blog.
WebLogic
Extending the Weblogic Console by adding Books, Pages and Portlets, from WebLogic Partner Community EMEA.
Java
9 tools to help you with Java Performance Tuning, from WebLogic Partner Community EMEA.

Asynchronous Support in JAX-RS 2/Java EE 7, from The Aquarium.
And from the same source:
Java API for JSON Binding (JSON-B) 1.0 Early Draft Now Available!
BI Publisher
Page Borders and Title Underlines, from the Oracle BI Publisher blog.
BPM
Getting Started with BPM: Free Oracle University Video Tutorial, from the SOA & BPM Partner Community Blog.
Mobile Computing
New Features : Oracle Mobile Security Suite Integration in Oracle MAF 2.1.3, from The Oracle Mobile Platform Blog.
Ops Center
How Many Systems Can Ops Center Manage?, from the Ops Center blog.
Demantra
How to avoid ORA-06512 and ORA-20000 when Concurrent Statistics Gathering is enabled, and New in 12.1 Database, Concurrent Statistics Gathering, Simultaneous for Multiple Tables or Partitions, both from the Oracle Demantra blog.
EBS
From the Oracle E-Business Suite Support blog:
New White Paper on Using Web ADI Spread Sheet in the Buyer Work Center Orders (BWC)
From the Oracle E-Business Suite Technology blog:
JRE 1.8.0_60 Certified with Oracle E-Business Suite

Using Application Management Suite With E-Business Suite 12.2

Adaptive Query Optimization in Oracle 12c : Ongoing Updates

Tim Hall - Thu, 2015-08-27 12:09

I’ve said a number of times, the process of writing articles is part of an ongoing learning experience for me. A few days ago my personal tech editor (Jonathan Lewis) asked about a statement I made in the SQL Plan Directive article. On further investigation it turned out the sentence was a complete work of fiction on my part, based on my misunderstanding of something I read in the manual, as well as the assumption that everything that happens must be as a result of a new feature. :)

Anyway, the offending statement has been altered, but the conversation this generated resulted in a new article about Automatic Column Group Detection.

The process also highlighted how difficult it is, at least for me, to know what is going on in the optimizer now. It wasn’t always straightforward before, but now with the assorted new optimizations, some beating others to the punch, it is even more difficult. There are a number of timing issues involved also. If a statement runs twice in quick succession, you might get a different sequence of events compared to having a longer gap between the first and second run of the statement. It’s maddening at times. I’m hoping Jonathan will put pen to paper about this, because I think he will do a better job of explaining the issues around the inter-dependencies than I can.

Anyway, I will be doing another pass through this stuff over the coming days/weeks/months/years to make sure it is consistent with “my current understanding”. :)

Fun, fun, fun…

Cheers

Tim…

Adaptive Query Optimization in Oracle 12c : Ongoing Updates was first posted on August 27, 2015 at 7:09 pm.

Summer Projects and a Celebration

Oracle AppsLab - Thu, 2015-08-27 09:02

If you follow us on Twitter (@theappslab) or on Facebook, you’ve seen some of the Summer projects coming together.

If not, here’s a recap of some of the tinkering going on in OAUX Emerging Technologies land.

Mark (@mvilrokx) caught the IoT bug from Noel (@noelportugal), and he’s been busy destroying and rebuilding a Nerf gun, which is a thing. Search for “nerf gun mods” if you don’t believe me.


Mark installed our favorite chip, the ESP8266, to connect the gun to the internets, and he’s been tinkering from there.

Meanwhile, Raymond (@yuhuaxie) has been busy building a smart thermostat.


And finally, completely unrelated to IoT tinkering, earlier this month the Oracle Mexico Development Center (MDC) in Guadalajara celebrated its fifth anniversary. As you know, we have two dudes in that office, Os (@vaini11a) and Luis (@lsgaleana), as well as an extended OAUX family. Congratulations.


Red Samurai ADF Performance Audit Tool v 4.0 - Web Client Request Monitoring and Complete Query Analysis

Andrejus Baranovski - Thu, 2015-08-27 03:12
I'm excited to announce that we have released a new version of our RSA audit tool. This is a major update since the previous version, released in February 2015 - Red Samurai ADF Performance Audit Tool v 3.4 - ADF Task Flow Statistics with Oracle DMS Servlet Integration.

It is already 3 years since the initial version - Red Samurai Performance Audit Tool - Runtime Diagnosis for ADF Applications. We are using it for many of our customers to monitor ADF performance in both test and production environments. Many new features have been added during these years, with more to come.

RSA Audit v4.0 New Features

1. RSA Audit v4.0 dashboard is supporting ADF 12c and Alta UI look


2. Web Client Request Time monitoring. Supported with ADF 11g and 12c. A generic method tracks request time for all ADF UI components. Logged data can be analysed through the ADF UI dashboard or directly in the DB. Request time represents the complete time from the user action in the browser until the request is completed. This includes the real user experience - browser processing time, network time, server side time and ADF BC/DB processing times. Runs in VERBOSE logging mode


3. Detailed information about the ADF fragment, button or other ADF UI component involved in the request is logged together with the request processing time and is accessible from the audit dashboard. This helps to identify slow actions spanning from the Web Client to the DB


4. Information about each request is grouped, which allows comparing differences between multiple requests and identifying bottlenecks in the application performance


5. Duplicate Queries. Allows tracking of all executed VO’s, very helpful for identifying redundant VO executions. VO executions are grouped per ECID, which helps to identify VO’s re-executed multiple times during the same request. Runs in MEDIUM logging mode


6. VO’s executed from the same ECID are automatically highlighted - this simplifies redundant query analysis


7. Number of duplicate executions of VO’s per ECID is calculated and presented in the table and sunburst chart


8. We calculate the top VO’s per AM. This helps to set priorities for SQL tuning and understand heavily used VO’s


9. Sunburst chart displays visual representation of duplicate and top VO’s per AM
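As a rough illustration of the kind of query such logged data supports (the real RSA audit schema is not published here, so the table and column names below are hypothetical):

```sql
-- Hypothetical audit log table: count duplicate VO executions per ECID.
-- A VO appearing more than once for the same ECID was re-executed
-- within a single request.
SELECT ecid, vo_name, COUNT(*) AS executions
FROM   rsa_audit_log
GROUP  BY ecid, vo_name
HAVING COUNT(*) > 1
ORDER  BY executions DESC;
```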

Page Borders and Title Underlines

Tim Dexter - Wed, 2015-08-26 15:32

I have taken to recording screen grabs to help some folks out on 'how do I' scenarios. Sometimes a 3 minute video saves a couple of thousand words and several screen shots.

So, perchance you need to know:

1. How to add a page border to your output and/or

2. How to add an under line that runs across the page

Watch this!   https://www.youtube.com/watch?v=3UcXHeSF0BM

If you need the template, sample data and output, get them here.

I'm taking requests if you have them.

Categories: BI & Warehousing

The Fraught Interaction Design of Personalized Learning Products

Michael Feldstein - Wed, 2015-08-26 12:49

By Michael FeldsteinMore Posts (1043)

David Wiley has a really interesting post up about Lumen Learning’s new personalized learning platform. Here’s an excerpt:

A typical high-level approach to personalization might include:

  • building up an internal model of what a student knows and can do,
  • algorithmically interrogating that model, and
  • providing the learner with a unique set of learning experiences based on the system’s analysis of the student model

Our thinking about personalization started here. But as we spoke to faculty and students, and pondered what we heard from them and what we have read in the literature, we began to see several problems with this approach. One in particular stood out:

There is no active role for the learner in this “personalized” experience. These systems reduce all the richness and complexity of deciding what a learner should be doing to – sometimes literally – a “Next” button. As these systems painstakingly work to learn how each student learns, the individual students lose out on the opportunity to learn this for themselves. Continued use of a system like this seems likely to create dependency in learners, as they stop stretching their metacognitive muscles and defer all decisions about what, when, and how long to study to The Machine.

Instructure’s Jared Stein really likes Lumen’s approach, writing,

So much work in predictive analytics and adaptive learning seeks to relieve people from the time-consuming work of individual diagnosis and remediation — that’s a two-edged sword: Using technology to increase efficiency can too easily sacrifice humanness — if you’re not deliberate in the design and usage of the technology. This topic came up quickly amongst the #DigPedNetwork group when Jim Groom and I chatted about closed/open learning environments earlier this month, suggesting that we haven’t fully explored this dilemma as educators or educational technologist.

I would add that I have seen very little evidence that either instructors or students place a high value on the adaptivity of these products. Phil and I have talked to a wide range of folks using these products, both in our work on the e-Literate TV case studies and in our general work as analysts. There is a lot of interest in the kind of meta-cognitive dashboarding that David is describing. There is little interest in, and in some cases active hostility toward, adaptivity. For example, Essex County College is using McGraw Hill’s ALEKS, which has one of the more sophisticated adaptive learning approaches on the market. But when we talked to faculty and staff there, the aspects of the program that they highlighted as most useful were a lot more mundane, e.g.,

It’s important for students to spend the time, right? I mean learning takes time, and it’s hard work. Asking students to keep time diaries is a very difficult ask, but when they’re working in an online platform, the platform keeps track of their time. So, on the first class day of the week, that’s goal-setting day. How many hours are you going to spend working on your math? How many topics are you planning to master? How many classes are you not going to be absent from?

I mean these are pretty simple goals, and then we give them a couple goals that they can just write whatever they feel like. And I’ve had students write, “I want to come to class with more energy,” and other such goals. And then, because we’ve got technology as our content delivery system, at the end of the week I can tell them, in a very efficient fashion that doesn’t take up a lot of my time, “You met your time goal, you met your topic goal,” or, “You approached it,” or, “You didn’t.”

So one of the most valuable functions of this system in this context is to reflect back to the students what they have done in terms that make sense to them and are relevant to the students’ self-selected learning goals. The measures are fairly crude—time on task, number of topics covered, and so on—and there is no adaptivity necessary at all.

But I also think that David’s post hints at some of the complexity of the design challenges with these products.

You can think of the family of personalized learning products as having potentially two components: diagnostic and prescriptive. Everybody who likes personalized learning products in any form likes the diagnostic component. The foundational value proposition for personalization (which should not in any way be confused with “personal”) is having the system provide feedback to students and teachers about what the student does well and where the student is struggling. Furthermore, the perceived value of the product is directly related to the confidence that students and teachers have that the product is rendering an accurate diagnosis. That’s why I think products that provide black box diagnoses are doomed to market failure in the long term. As the market matures, students and teachers are going to want to know not only what the diagnosis is but also what the basis of the diagnosis is, so that they can judge for themselves whether they think the machine is correct.

Once the system has diagnosed the student’s knowledge or skill gaps—and it is worth calling out that many of these personalized learning systems work on a deficit model, where the goal is to get students to fill in gaps—the next step is to prescribe actions that will help students to address those gaps. Here again we get into the issue of transparency. As David points out, some vendors hide the rationale for their prescriptions, even going so far as to remove user choice and just hide the adaptivity behind the “next” button. Note that the problem isn’t so much with providing a prescription as it is with the way in which it is provided. The other end of the spectrum, as David argues, is to make recommendations. The full set of statements from a well-behaved personalized learning product to a student or teacher might be something like the following:

  1. This is where I think you have skill or knowledge gaps.
  2. This is the evidence and reasoning for my diagnosis.
  3. This is my suggestion for what you might want to do next.
  4. This is my reasoning for why I think it might help you.

It sounds verbose, but it can be done in fairly compact ways. Netflix’s “based on your liking Movie X and Movie Y, we think you would give Movie Z 3.5 stars” is one example of a compact explanation that provides at least some of this information. There are lots of ways that a thoughtful user interface designer can think about progressively revealing some of this information and providing “nudges” that encourage students on certain paths while still giving them the knowledge and freedom they need to make choices for themselves. The degree to which the system should be heavy-handed in its prescription probably depends in part on the pedagogical model. I can see something closer to “here, do this next” feeling appropriate in a self-paced CBE course than in a typical instructor-facilitated course. But even there, I think the Lumen folks are 100% right that the first responsibility of the adaptive learning system should be to help the learner understand what the system is suggesting and why so that the learner can gain better meta-cognitive understanding.

None of which is to say that the fancy adaptive learning algorithms themselves are useless. To the contrary. In an ideal world, the system will be looking at a wide range of evidence to provide more sophisticated evidence-based suggestions to the students. But the key word here is “suggestions.” Both because a critical part of any education is teaching students to be more self-aware of their learning processes and because faulty prescriptions in an educational setting can have serious consequences, personalized learning products need to evolve out of the black box phase as quickly as possible.

 

 

The post The Fraught Interaction Design of Personalized Learning Products appeared first on e-Literate.

Inside View Of Blackboard’s Moodle Strategy In Latin America

Michael Feldstein - Wed, 2015-08-26 11:45

By Phil HillMore Posts (356)

One year ago Blackboard’s strategy for Moodle was floundering. After the 2012 acquisition of Moodlerooms and Netspot, Blackboard had kept its promises of supporting the open source community – and in fact, Blackboard pays much more than 50% of the total revenue going to Moodle HQ[1] – but that does not mean they had a strategy. Key Moodlerooms employees were leaving, and the management was frustrated. Last fall the remaining Moodlerooms management put together an emerging strategy to invest in (through corporate M&A) and grow the Moodle business, mostly outside of the US.

In just the past twelve months, Blackboard has acquired three Moodle-based companies – Remote-Learner UK (Moodle Partner in the UK), X-Ray Analytics (learning analytics for Moodle), and Nivel Siete (Moodle Partner in Colombia). When you add organic growth to these acquisitions, Blackboard has added ~450 new clients using Moodle in this same time period, reaching a current total of ~1400.

This is a change worth exploring. To paraphrase Michael’s statements to me and in his recent BbWorld coverage:

If you want to understand Blackboard and their future, you have to understand what they’re doing internationally. If you want to understand what they’re doing internationally, you have to understand what they’re doing with Moodle.

Based on this perspective, I accepted an invitation from Blackboard to come visit Nivel Siete last week to get a first-hand view of what this acquisition means. I also attended the MoodleMoot Colombia #mootco15 conference and talked directly to Moodle customers in Latin America. Let’s first unpack that last phrase.

  • Note that due to the nature of this trip, I “talked directly” with Blackboard employees, Nivel Siete employees, Blackboard resellers, and Nivel Siete customers. They did give me free access to talk privately with whoever I wanted to, but treat this post as somewhat of an inside view rather than one that also includes perspectives from competitors.
  • “Moodle” is very significant in Latin America. It is the default LMS that dominates learning environments. The competition, or alternative solution, there is Blackboard Learn or . . . another route to get Moodle. In this market D2L and Canvas have virtually no presence – each company has just a couple of clients in Latin America and they are not currently a factor in LMS decision-making. Schoology has one very large customer in Uruguay serving hundreds of thousands of students. Blackboard Learn serves the top of the market – e.g. the top 10% in terms of revenue of Colombian institutions, where it already serves the majority of that sub-market according to the people I talked to. For the remaining 90%, it is pretty much Moodle, Moodle, alternate applications that are not LMSs, or nothing.[2]
  • I chose “customers” instead of “schools” or “institutions” for a reason. What is not understood in much of the education community is that Moodle has a large footprint outside of higher ed and K-12 markets. Approximately 2/3 of Nivel Siete’s clients are in corporate learning, and several others are government. And this situation is quite common for Moodle. In the US, more than 1/3 of Moodlerooms’ and approximately 1/2 of Remote-Learner’s customers are corporate learning. Phill Miller, the VP of International for Moodlerooms, said that for most of the Moodle hosting and service providers he has met, they also are serving corporate clients at similar numbers as education.
  • I chose “Latin America” instead of “Colombia” for a reason. While all but ~12 of Nivel Siete’s existing clients are in Colombia, Blackboard bought the company to act as a center of excellence or support service company for most of Latin America – Colombia, Mexico, Brazil, and Peru in particular. Cognos Online, their current local reseller for Latin America for core Blackboard products (Learn, Collaborate, etc) will become the reseller also for their Moodle customers. Nivel Siete will support a broader set of clients. In other words, this is not a simple acquisition of customers – it is an expansion of international presence.

And while we’re at it, the conference reception included a great opera mini flash mob (make sure to watch past 0:37):

Nivel Siete

Nivel Siete (meaning Level 7, a reference from two of the founders’ college days when a professor talked about need to understand deeper levels of the technology stack than just top-level applications that customers see), is a company of just over 20 employees in Bogota. They have 237+ clients, but that is growing. During the three days while I was there they signed several new contracts. They offer Moodle hosting and service in a cloud environment based on Amazon Web Services (AWS) – not true SaaS, as they allow multiple software versions in production and have not automated all provisioning or upgrade processes. What they primarily offer, according to the founders, is a culture of how to service and support using cloud services and specific marketing and sales techniques.

In Latin America, most customers care more about the local sales and support company than they do about the core software. As one person put it, they believe in skin-to-skin sales, where clients have relationships they trust as long as solutions are provided. Most LMS customers in Latin America do not care as much about the components of that solution as they do about relationships, service, and price. And yet, due to open source software and lightweight infrastructure needs, Moodle is dominant as noted above. The Moodle brand, code base, and code licensing does not matter as much as the Moodle culture and ecosystem. From a commercial standpoint, Nivel Siete’s competitors include a myriad of non Moodle Partner hosting providers – telcos bundling in hosting, mom-and-pop providers, self-hosting – or non-consumption. For a subset of the market, Nivel Siete has competed with Blackboard Learn.

Beyond Cognos Online, Blackboard has another ~9 resellers in Latin America, and Nivel Siete (or whatever they decide to name the new unit) will support all of these resellers. This is actually the biggest motivation other than cash for the company to sell – they were seeking methods to extend their influence, and this opportunity made the most sense.

Blackboard Learn and Ultra

What about that Learn sub-market? Most clients and sales people (resellers as well as Blackboard channel manager) are aware of Learn Ultra, but the market seems to understand already that Ultra is not for them . . . yet. They appear to be taking a ‘talk to me when it’s done and done in Spanish’ approach and not basing current decisions on Ultra. In this sense, the timing for Ultra does not matter all that much, as the market is not waiting on it. Once Ultra is ready for Latin America, Blackboard sales (channel manager and resellers) expect the switchover to be quicker than in the US, as LMS major upgrades (involving major UI and UX changes) or adoptions tend to take weeks or months instead of a year or more as we often see in the states. At least in the near term, Learn Ultra is not a big factor in this market.

What Blackboard is best known for in this market is the large SENA contract running on Learn. SENA (National Service for Learning) is a government organization that runs the majority of all vocational colleges – providing certificates and 2-year vocational degrees mostly for lower-income students, a real route into the rising middle class that is important in developing countries. Blackboard describes SENA as having 6+ million total enrollment, with ~80% in classrooms and ~20% in distance learning.

Integration

The challenge Blackboard faces is integrating its Learn and Moodle operations through the same groups – Nivel Siete internal group, Cognos Online and other resellers serving both lines – without muddling the message and go-to-market approach. Currently Learn is marketed and sold through traditional enterprise sales methods – multiple meetings, sales calls, large bids – while Nivel Siete’s offering of Moodle is marketed and sold with more of a subscription-based mentality. As described by ForceManagement:

A customer who has moved to a subscription-based model of consumption has completely different expectations about how companies are going interact with them.

How you market to them, how you sell to them, how you bill them, how you nurture the relationship – it’s all affected by the Subscription Economy. The customer’s idea of value has changed. And, if the customer’s idea of value has changed, your value proposition should be aligned accordingly. [snip]

The subscription-based sales process relies less on the closing of a sale and more on the nurturing of a long-term relationship to create lifetime customer value.

One of Nivel Siete’s most effective techniques is their e-Learner Magazine, which highlights customers telling their own stories and lessons in a quasi-independent fashion. The company has relied on inbound calls and quick signups and service startups. There is quite a cultural difference between enterprise software and subscription-based approaches. While Blackboard themselves are facing such changes due to Ultra and newly-offered SaaS models, the group in Latin America is facing the challenge of two different cultures served by the same organizations today.

To help address this challenge, Cognos Online is planning to have two separate teams selling / servicing mainline Blackboard products and Moodle products. But even then, CEO Fernery Morales described that their biggest risk is muddling the message and integrating appropriately.

Moodle Strategy and Risk

At the same time, this strategy and growth come at a time when the Moodle community at large appears to be at an inflection point. The inflection point, as I see it, comes from a variety of triggers:

  • Blackboard acquisitions causing Moodle HQ, other Moodle Partners, and some subset of users’ concerns about commercialization;
  • Creation of the Moodle Association as well as Moodle Cloud services as alternate paths to Moodle Partners for revenue and setup; and
  • Remote-Learner leaving the Moodle Partner program and planning to join the Moodle Association, with the associated lost revenue and public questioning of the program’s value.

I don’t have time to fully describe these changes here, but Moodle itself is both an opportunity and a risk mostly based on its own success globally. More of that in a future post.

What Does This Mean Beyond Latin America?

It’s too early to fully know, but here are a few notes.

  • Despite the positioning in the US media, there is no “international” market. There are multiple local or regional markets outside of the US that have tremendous growth opportunities for US and other companies outside of those immediate markets. Addressing these markets puts a high premium on localization – having feet on the ground for people who know the culture, can be trusted in the region, and including product customizations meant for those markets. Much of the ed tech investment boom is built on expectations of international growth, but how many ed tech companies actually know how to address local or regional non-US markets? This focus on localizing international markets is one of Blackboard’s greatest strengths.
  • Based on the above, at least in Latin America Blackboard is building itself up as being the status quo before other learning platforms really get a chance to strategically enter the market. For example, Instructure has clearly not chosen to go after non English-speaking international markets yet, but by the time they do push Canvas into Latin America, and if Blackboard is successful integrating Nivel Siete, for example, it is likely Instructure will face an entrenched competitor and potential clients who by default assume Moodle or Learn as solutions.
  • Blackboard as a company has one big growth opportunity right now – the collection of non-US “international” markets that represent just under 1/4 of the company’s revenue. Domestic higher ed is not growing, K-12 is actually decreasing, but international is growing. These growing markets need Moodle and traditional Learn 9.1 much more than Ultra. I suspect that this growing importance is creating more and more tension internal to Blackboard, as the company needs to balance Ultra with traditional Learn and Moodle development.
  • While I strongly believe in the mission of US community colleges and low-cost 4-year institutions, in Latin America the importance of education in building up an emerging middle class is much greater than in the US. We hear this “importance of education” and “building of middle class” used in generic terms regarding ed tech potential, but seeing this connection more closely by being in country is inspiring. This is a real global need that can and should drive future investment in people and technology to address.
  1. This information based on tweet last spring showing Moodlerooms + Netspot combined were more than 50% of revenue, and that the next largest Moodle Partner, Remote-Learner, has left the program. Since last year I have confirmed this information through multiple sources.
  2. Again, much of this information is from people related to Blackboard, but it also matches my investigation of press releases and public statements about specific customers of D2L and Instructure.

The post Inside View Of Blackboard’s Moodle Strategy In Latin America appeared first on e-Literate.

DELETE is faster than TRUNCATE

Laurent Schneider - Wed, 2015-08-26 07:18

Truncate is useful in some serial batch processing, but it breaks read-write consistency, generates strange errors and results for running selects, and it needs the DROP ANY TABLE privilege when run over a table that you do not own.

But also, DELETE is faster in the following test case.

In 12c, you could have over one million partitions in a table, but for the sake of the universe, I’ll try with 10000.


SQL> create table scott.t(x) 
  partition by range(x) 
  interval(1) 
  (partition values less than (0)) 
  as 
  select rownum 
  from dual 
  connect by level<10001;
SQL> select count(*) from scott.t;

  COUNT(*)
----------
     10000

The 10K-row table is created; each row is in its own partition


SQL> delete scott.t;

10000 rows deleted.

Elapsed: 00:00:04.02
SQL> rollback;

Rollback complete.

Not tuned or parallelized or whatever. It took 4 seconds for 10’000 rows. If you have one billion rows, it is doable in a few hours. But you had better do it in chunks then.
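A minimal sketch of such a chunked delete (not from the original post; batch size and commit frequency would need tuning for a real workload):

```sql
-- Delete in batches of 100,000 rows, committing between batches
-- to bound undo usage and lock duration.
BEGIN
  LOOP
    DELETE FROM scott.t WHERE ROWNUM <= 100000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```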

Anyway, let’s truncate


SQL> truncate table scott.t;

Table truncated.

Elapsed: 00:05:19.24

Five minutes !!! to truncate that tiny table.

If you have one million partitions and underlying indexes and lobs, it will probably fail with out-of-memory errors after hours, with a large impact on the dictionary, SYSAUX and undo.

The dictionary changes here are very slow.

Protect Your APEX Application PL/SQL Source Code

Pete Finnigan - Wed, 2015-08-26 04:35

Oracle Application Express is a great rapid application development tool where you can write your applications functionality in PL/SQL and create the interface easily in the APEX UI using all of the tools available to create forms and reports and....[Read More]

Posted by Pete On 21/07/15 At 04:27 PM

Categories: Security Blogs

Oracle Security and Electronics

Pete Finnigan - Wed, 2015-08-26 04:35

How do Oracle Security and Electronics mix together? - Well I started my working life in 1979 as an apprentice electrician in a factory here in York, England where I live. The factory designed and built trains for the national....[Read More]

Posted by Pete On 09/07/15 At 11:24 AM

Categories: Security Blogs

New Conference Speaking Dates Added

Pete Finnigan - Wed, 2015-08-26 04:35

In the last few years I have not done as many conference speaking dates as I used to. This is simply because when offered they usually clashed with pre-booked work. I spoke for the UKOUG in Dublin last year and....[Read More]

Posted by Pete On 06/07/15 At 09:40 AM

Categories: Security Blogs

Happy 10th Belated Birthday to My Oracle Security Blog

Pete Finnigan - Wed, 2015-08-26 04:35

Make a Sad Face..:-( I seem to have missed my blog's tenth birthday, which happened on the 20th September 2014. My last post last year, and until very recently, was on July 23rd 2014; so actually it's been a big gap....[Read More]

Posted by Pete On 03/07/15 At 11:28 AM

Categories: Security Blogs

Oracle Database Vault 12c Paper by Pete Finnigan

Pete Finnigan - Wed, 2015-08-26 04:35

I wrote a paper about Oracle Database Vault in 12c for SANS last year and this was published in January 2015 by SANS on their website. I also prepared and did a webinar about this paper with SANS. The Paper....[Read More]

Posted by Pete On 30/06/15 At 05:38 PM

Categories: Security Blogs

Unique Oracle Security Trainings In York, England, September 2015

Pete Finnigan - Wed, 2015-08-26 04:35

I have just updated all of our Oracle Security training offerings on our company website. I have revamped all class pages and added two-page pdf flyers for each of our four training classes. I have also updated the list....[Read More]

Posted by Pete On 25/06/15 At 04:36 PM

Categories: Security Blogs

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - Wed, 2015-08-26 04:35

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - Wed, 2015-08-26 04:35

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs