
Deploying a Private Cloud at Home — Part 7

Pythian Group - Wed, 2014-10-29 08:09

Welcome to part 7, the final blog post in my series, Deploying a Private Cloud at Home, where I will be sharing the scripts that configure the controller and compute nodes. In my previous post, part six, I demonstrated how to configure the controller and compute nodes.

Update the script with the passwords you want and then execute it. I am assuming here that this is a fresh installation and that no services are configured on the nodes yet.

The script below configures the controller node and has two parts:

  1. Pre compute node configuration
  2. Post compute node configuration

Running it with “-pre” performs the pre compute node configuration, preparing the controller node and the OpenStack services. Running it with “-post” performs the post compute node configuration of the controller node, as those services depend on the compute node services.
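The dispatch described above is plain positional-argument handling. As a minimal sketch (the pre_configure/post_configure function names are illustrative placeholders, not from the original script), it looks like this:

```shell
#!/bin/bash
# Minimal sketch of the -pre/-post dispatch used by the controller script.
# pre_configure/post_configure are placeholders standing in for the real work.
SCRIPT_VER=4.4

pre_configure()  { echo "controller configuration started"; }
post_configure() { echo "post compute configuration started"; }

dispatch() {
  case "$1" in
    -v|--version) echo "script version $SCRIPT_VER" ;;
    -pre)         pre_configure ;;
    -post)        post_configure ;;
    *)            echo "Usage: $(basename "$0") [--help | --version | -pre | -post]" ;;
  esac
}

dispatch "$@"
```

Running it with no argument (or --help) prints the usage line, mirroring the real script's behaviour.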

#Configure controller script v 4.4
# Rohan Bhagat             ##################
# Email:Me at ###############
#set variables used in the configuration
#Admin user password
#Demo user password
#Keystone database password
#Admin user Email
#Demo user Email
#Glance db user pass
#Glance user pass
#Glance user email
#Nova db user pass
#Nova user pass
#Nova user Email
#Neutron db user pass
#Neutron user pass
#Neutron user email
#Metadata proxy pass
#IP to be declared for controller
#FQDN for controller hostname or IP
#MYSQL root user pass
#Heat db user pass
#Heat user pass
#Heat user email
#IP range for VM Instances
#Secure MySQL
#Current MySQL root password; leave blank if you have not configured MySQL
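The assignments for these variables were lost in formatting; judging from the comments above and the variable names referenced later in the script ($ADMIN_PASS, $CONTROLLER, $MY_IP, $RANGE, and so on), the block presumably looked something like this. Every value below is a placeholder you must change:

```shell
# Reconstructed placeholder values -- change all of these before running
SCRIPT_VER=4.4
ADMIN_PASS=admin_pass              # Admin user password
DEMO_PASS=demo_pass                # Demo user password
KEYSTONE_DBPASS=keystone_dbpass    # Keystone database password
ADMIN_EMAIL=admin@example.com      # Admin user email
DEMO_EMAIL=demo@example.com        # Demo user email
GLANCE_DBPASS=glance_dbpass        # Glance db user pass
GLANCE_PASS=glance_pass            # Glance user pass
GLANCE_EMAIL=glance@example.com    # Glance user email
NOVA_DBPASS=nova_dbpass            # Nova db user pass
NOVA_PASS=nova_pass                # Nova user pass
NOVA_EMAIL=nova@example.com        # Nova user email
NEUTRON_DBPASS=neutron_dbpass      # Neutron db user pass
NEUTRON_PASS=neutron_pass          # Neutron user pass
NEUTRON_EMAIL=neutron@example.com  # Neutron user email
META_PASS=meta_pass                # Metadata proxy pass
MY_IP=192.168.1.10                 # IP to be declared for controller
CONTROLLER=controller              # FQDN for controller hostname or IP
MYSQL_PASS=mysql_root_pass         # MYSQL root user pass
HEAT_DBPASS=heat_dbpass            # Heat db user pass
HEAT_PASS=heat_pass                # Heat user pass
HEAT_EMAIL=heat@example.com        # Heat user email
RANGE=192.168.1.64/26              # IP range for VM instances
```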

# Get versions:
if [ "$1" = "--version" -o "$1" = "-v" ]; then
  echo "`basename $0` script version $SCRIPT_VER"
  exit 0
elif [ "$1" = "" ] || [ "$1" = "--help" ]; then
  echo "Configures controller node with pre compute and post compute deployment settings"
  echo "Usage:"
  echo "       `basename $0` [--help | --version | -pre | -post]"
  exit 0

elif [ "$1" = "-pre" ]; then

echo "============================================="
echo "This installation script is based on OpenStack icehouse guide"
echo "Found"
echo "============================================="

echo "============================================="
echo "controller configuration started"
echo "============================================="

echo "Installing MySQL packages"
yum install -y mysql mysql-server MySQL-python
echo "Installing RDO OpenStack repo"
yum install -y
echo "Installing openstack keystone, qpid Identity Service, and required packages for controller"
yum install -y yum-plugin-priorities openstack-utils mysql mysql-server MySQL-python qpid-cpp-server openstack-keystone python-keystoneclient expect

echo "Modification of qpid config file"
perl -pi -e 's,auth=yes,auth=no,' /etc/qpidd.conf
chkconfig qpidd on
service qpidd start

echo "Configuring mysql database server"
cat > /etc/my.cnf <<EOF
EOF

(crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone

echo "Define users, tenants, and roles"
export OS_SERVICE_ENDPOINT=http://$CONTROLLER:35357/v2.0

echo "keystone admin creation"
keystone user-create --name=admin --pass=$ADMIN_PASS --email=$ADMIN_EMAIL
keystone role-create --name=admin
keystone tenant-create --name=admin --description="Admin Tenant"
keystone user-role-add --user=admin --tenant=admin --role=admin
keystone user-role-add --user=admin --role=_member_ --tenant=admin

echo "keystone demo creation"
keystone user-create --name=demo --pass=$DEMO_PASS --email=$DEMO_EMAIL
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo
keystone tenant-create --name=service --description="Service Tenant"

echo "Create a service entry for the Identity Service"
keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://$CONTROLLER:5000/v2.0 \
--internalurl=http://$CONTROLLER:5000/v2.0 \
--adminurl=http://$CONTROLLER:35357/v2.0

echo "Verify Identity service installation"
echo "Request a authentication token by using the admin user and the password you chose for that user"
keystone --os-username=admin --os-password=$ADMIN_PASS \
  --os-auth-url=http://$CONTROLLER:35357/v2.0 token-get
keystone --os-username=admin --os-password=$ADMIN_PASS \
  --os-tenant-name=admin --os-auth-url=http://$CONTROLLER:35357/v2.0 \
  token-get

cat > /root/ <<EOF
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
EOF

source /root/
echo "keystone token-get"
keystone token-get
echo "keystone user-list"
keystone user-list
echo "keystone user-role-list --user admin --tenant admin"
keystone user-role-list --user admin --tenant admin

echo "Install the Image Service"
yum install -y openstack-glance python-glanceclient
openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance

echo "configure glance database"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE glance;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';"

echo "Create the database tables for the Image Service"
su -s /bin/sh -c "glance-manage db_sync" glance

echo "creating glance user"
keystone user-create --name=glance --pass=$GLANCE_PASS --email=$GLANCE_EMAIL
keystone user-role-add --user=glance --tenant=service --role=admin

echo "glance configuration"
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password $GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password $GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

echo "Register the Image Service with the Identity service"
keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ image / {print $2}') \
  --publicurl=http://$CONTROLLER:9292 \
  --internalurl=http://$CONTROLLER:9292 \
  --adminurl=http://$CONTROLLER:9292
echo "Start the glance-api and glance-registry services"
service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on

echo "Testing image service"
echo "Download the cloud image"
wget -q -O /root/cirros-0.3.2-x86_64-disk.img
echo "Upload the image to the Image Service"
source /root/
glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
--container-format bare --is-public True \
--progress  < /root/cirros-0.3.2-x86_64-disk.img

echo "Install Compute controller services"
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
source /root/

echo "Configure compute database"
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova

echo "configuration keys to configure Compute to use the Qpid message broker"
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER

source /root/

echo "Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options"
echo "to the management interface IP address of the $CONTROLLER node"
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP

echo "Create a nova database user"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE nova;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';"

echo "Create the Compute service tables"
su -s /bin/sh -c "nova-manage db sync" nova

echo "Create a nova user that Compute uses to authenticate with the Identity Service"
keystone user-create --name=nova --pass=$NOVA_PASS --email=$NOVA_EMAIL
keystone user-role-add --user=nova --tenant=service --role=admin

echo "Configure Compute to use these credentials with the Identity Service running on the controller"
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS

echo "Register Compute with the Identity Service"
keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
  --publicurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
  --internalurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
  --adminurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s
echo "Start Compute services and configure them to start when the system boots"
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on  

echo "To verify your configuration, list available images"
echo "nova image-list"
sleep 5
source /root/
nova image-list


elif [ "$1" = "-post" ]; then
#set variables used in the configuration

source /root/
############OpenStack Networking start here##############
echo "configure legacy networking"
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova

echo "Restart the Compute services"
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart

echo "Create the network"
source /root/
nova network-create vmnet --bridge br0 --multi-host T --fixed-range-v4 $RANGE

echo "Verify creation of the network"
nova net-list

############OpenStack Legacy ends##############
echo "Install the dashboard"
yum install -y mod_wsgi openstack-dashboard

echo "Configure openstack dashborad"
sed -i 's/\*/g' /etc/openstack-dashboard/local_settings
echo "Start the Apache web server and memcached"
service httpd start
chkconfig httpd on


Below is the script that configures the compute node.

#Configure compute script v4
# Rohan Bhagat             ##################
# Email:Me at ###############
#set variables used in the configuration
#Nova user pass
#NEUTRON user pass
#Nova db user pass
#FQDN for $CONTROLLER hostname or IP
#IP of the compute node
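As with the controller script, the variable assignments were lost in formatting; based on the comments above and the names used later in the script ($FLAT_INTERFACE and $PUB_INTERFACE also appear further down), the block presumably looked like this. All values are placeholders:

```shell
# Reconstructed placeholder values -- change all of these before running
NOVA_PASS=nova_pass        # Nova user pass
NEUTRON_PASS=neutron_pass  # NEUTRON user pass
NOVA_DBPASS=nova_dbpass    # Nova db user pass
CONTROLLER=controller      # FQDN for controller hostname or IP
MY_IP=192.168.1.11         # IP of the compute node
FLAT_INTERFACE=eth0        # flat network interface (referenced later in the script)
PUB_INTERFACE=eth0         # public interface (referenced later in the script)
```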

echo "============================================="
echo "This installation script is based on OpenStack icehouse guide"
echo "Found"
echo "============================================="

echo "============================================="
echo "compute configuration started"
echo "============================================="

echo "Install the MySQL Python library"
yum install -y MySQL-python

echo "Install the Compute packages"
yum install -y openstack-nova-compute openstack-utils

echo "Edit the /etc/nova/nova.conf configuration file"
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS

echo "Configure the Compute service to use the Qpid message broker"
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER

echo "Configure Compute to provide remote console access to instances"
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://$CONTROLLER:6080/vnc_auto.html

echo "Specify the host that runs the Image Service"
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host $CONTROLLER

echo "Start the Compute service and its dependencies. Configure them to start automatically when the system boots"
service libvirtd start
service messagebus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on

echo "kernel networking functions"
perl -pi -e 's,net.ipv4.ip_forward = 0,net.ipv4.ip_forward = 1,' /etc/sysctl.conf
perl -pi -e 's,net.ipv4.conf.default.rp_filter = 1,net.ipv4.conf.default.rp_filter = 0,' /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
sysctl -p

echo "Install legacy networking components"
yum install -y openstack-nova-network openstack-nova-api
sleep 5
echo "Configure legacy networking"
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254
openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False
openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True
openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True
openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True
openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True
openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br0
openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface $FLAT_INTERFACE
openstack-config --set /etc/nova/nova.conf DEFAULT public_interface $PUB_INTERFACE

echo "Start the services and configure them to start when the system boots"
service openstack-nova-network start
service openstack-nova-metadata-api start
chkconfig openstack-nova-network on
chkconfig openstack-nova-metadata-api on

echo "Now restart networking"
service network restart

echo "Compute node configuration competed"
echo "Now you can run -post on the controller node"
echo "To complete the OpenStack configuration"

Categories: DBA Blogs

Oracle WebCenter Contract Lifecycle Management

WebCenter Team - Wed, 2014-10-29 07:44
Oracle WebCenter Contract Lifecycle Management

Contracts rule B2B relationships. Whether you’re a growing mid-market company or a large-scale global organization, you need an effective system to manage surges in contract volumes and ensure accuracy in reporting. Contract Lifecycle Management (CLM) is the proactive, methodical management of a contract from initiation through award, compliance and renewal. Implementing CLM can lead to significant improvements in cost savings and efficiency. Also, CLM can help companies minimize liability and increase compliance with legal requirements.


TekStream’s CLM software is built on Oracle’s industry-leading document management system, WebCenter Content, and is designed to seamlessly integrate with enterprise applications like JD Edwards, PeopleSoft and Oracle’s E-Business Suite (EBS). Combining Oracle’s enterprise-level applications with TekStream’s deep understanding of managing essential business information delivers a contract management tool powerful enough to facilitate even the most complex processes. TekStream’s solution tracks and manages all aspects of your contract work streams, from creation and approval to completion and expiration. Companies can rely on TekStream’s CLM to ensure compliance and close deals faster.



  • Centralized repository for all in-process and executed contracts. This ensures that users can quickly find documents “in-flight” or review decisions and details about contracts previously executed.
  • Increase efficiency through better control of the contract process. By utilizing dynamic workflows based on the actual text of the contracts (and supporting documents), the review process of in-flight contracts becomes more streamlined and targeted. Workflows allow you to review terms and clauses before they become a costly oversight. For example, targeted workflows ensure that the right people are reviewing the right information at the right time.
  • Support for “Evergreen” contracts help to improve contract renewal rates.  TekStream’s CLM notifies the correct parties when contracts are due for renewal/review and initiates the appropriate workflow streams.  Too many times, organizations fail to capitalize on opportunities to renew or improve existing contracts by missing key negotiation or renewal dates.
  • Improve compliance with regulations and standards by providing clear and concise reporting of procedures and controls. TekStream’s CLM also provides robust Records Management features, including document controls for holds and freezes during litigation, along with audit details of when documents are reviewed, archived, and destroyed. The ability to accurately retrieve and report financial data such as contracts greatly reduces time, effort and cost during quarterly and annual audits.
  • Existing Process
    • 5 people x 4 hrs/contract x 2 contracts/day x $250/hour = $10,000/day
  • TekStream CLM
    • 5 people x 2 hrs/contract x 2 contracts/day x $250/hour = $5,000/day
Learn more about TekStream's Oracle WebCenter Contract Lifecycle Management and join us for a webcast on Thursday, October 30 at 10:00am PT!

Oracle Trivia Quiz

Iggy Fernandez - Wed, 2014-10-29 07:36
All the answers can be found in the November 2014 issue of the NoCOUG Journal. I am the editor of the NoCOUG Journal. What’s NoCOUG, you ask? Only the oldest and most active Oracle users group in the world. If you live in the San Francisco bay area and have never ever attended a NoCOUG […]
Categories: DBA Blogs

Oracle cloud control / SQL Details / Statistics

Yann Neuhaus - Wed, 2014-10-29 06:06

A question that I have been asked several times: in Enterprise Manager, on the screen about one SQL statement, the 'Statistics' tab shows the number of executions, elapsed time, etc. The question is: which time window does it cover? There is a one-hour chart above, and two timestamps displayed as 'First Load Time' and 'Last Load Time', and we don't know which one is related to the execution statistics numbers. I'll explain it clearly with an example.

I'll check a query I have on my system which has several cursors, with two different execution plans. And I check from V$SQL because that is where the most detailed information is, and the columns are well documented.

From the documentation:

  • FIRST_LOAD_TIME is the Timestamp of the parent creation time
  • LAST_LOAD_TIME is the Time at which the query plan was loaded into the library cache

It's clear that, because V$SQL shows information about child cursors, the FIRST_LOAD_TIME will be the same for all children.

SQL> select sql_id,plan_hash_value,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='dcstr36r0vz0d' order by child_number;

SQL_ID        PLAN_HASH_VALUE EXECUTIONS FIRST_LOAD_TIME     LAST_LOAD_TIME      LAST_ACTIVE_TIME
------------- --------------- ---------- ------------------- ------------------- -------------------
dcstr36r0vz0d        17720163         60 2014-10-29/07:01:59 2014-10-29/07:01:59 2014-10-29/13:01:25
dcstr36r0vz0d      3798950322        102 2014-10-29/07:01:59 2014-10-29/07:03:49 2014-10-29/13:05:54
dcstr36r0vz0d      3798950322         24 2014-10-29/07:01:59 2014-10-29/07:05:55 2014-10-29/13:05:54
dcstr36r0vz0d      3798950322          1 2014-10-29/07:01:59 2014-10-29/08:11:19 2014-10-29/08:11:19
dcstr36r0vz0d      3798950322          1 2014-10-29/07:01:59 2014-10-29/08:29:34 2014-10-29/08:29:34

The plan with hash value 17720163 has been executed 60 times since 07:01:59. It was the first child cursor (child_number=0) for that parent, which is why FIRST_LOAD_TIME=LAST_LOAD_TIME.

And the plan with hash value 3798950322 has been executed 128 times since 07:03:49, by cursors that are not shared but have come to the same plan anyway.
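As a side note, that sum can be computed directly from V$SQL by aggregating the child cursors per plan. A query along these lines (using my sql_id) shows the total executions together with the earliest LAST_LOAD_TIME, which is the real start of the counting window:

```sql
-- Aggregate child cursors by plan: total executions and the
-- earliest LAST_LOAD_TIME (start of the execution-count window)
select plan_hash_value,
       sum(executions)       executions,
       min(last_load_time)   min_last_load_time,
       max(last_active_time) last_active_time
  from v$sql
 where sql_id='dcstr36r0vz0d'
 group by plan_hash_value;
```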

Two remarks:

  • FIRST_LOAD_TIME is the same for all children because it is a parent information
  • LAST_LOAD_TIME is different for each child, and that's important because Enterprise Manager doesn't show that detail, aggregating together the children with the same execution plan.
Time to look at the Enterprise Manager screen. I'm talking about the 'Real Time' statistics:

(screenshot: EMLastLoadTime1.png)

and I've selected the plan hash value 17720163:

(screenshot: EMLastLoadTime2.png)

Ok. So we have 60 executions here. This matches the line in V$SQL. And we know that it is 60 executions since 07:01:59 because both timestamps are the same. No doubt here.

Then, let's select the other plan hash value from the popup:

(screenshot: EMLastLoadTime3.png)

128 executions for this plan. This is what we had when summing the lines from V$SQL. And look at the Shared Cursor Statistics. The number of 'Child Cursors' is 4, which is what we know. The 'First Load Time' is the one of the parent.

However, what is the 'Last Load Time' when we know that there are 4 different values in V$SQL for it? Look, they chose the latest one, 08:29:34, and that's a good choice according to the name. It's the last load time.

But what I want to know is the time from which the 128 executions are counted, and that should be the earliest one. In my example, we know from V$SQL that we had 128 executions since 07:03:49, but that timestamp is not displayed here.

If you want a date, you should take the 'First Load Time', because it's true that there were 128 executions of cursors with that plan hash value since 07:01:59.

Sometimes the first load time is very old and it would be better to have the MIN(LAST_LOAD_TIME). But anyway, if we want better time detail, we can choose the 'Historical' view instead of the 'Real Time' one, and we have the numbers related to the AWR snapshots.

Here is an example for the cursor with plan hash value 17720163:

(screenshot: EMLastLoadTime4.png)

From the historical view, we select a timestamp and we see the begin and end timestamps. Here I have 10 executions per hour.

Everything looks good there, except that 'Child Cursors' is 5, which is for the whole statement and not only for the cursors selected by the plan hash value.

Then I have two conclusions:
  • 'Last Load Time' is not useful to know the time window covered by the Real Time statistics. Use 'First Load Time' instead.
  • In case of any doubt, fall back to the V$ views, which are much better documented and give more detail.

OTN APAC Tour 2014 : It’s Nearly Here!

Tim Hall - Wed, 2014-10-29 03:38

In a little less than a week I start the OTN APAC Tour. This is where I’m going to be…

  • Perth, Australia : November 6-7
  • Shanghai, China : November 9
  • Tokyo, Japan : November 11-13
  • Beijing, China : November 14-15
  • Bangkok, Thailand : November 17
  • Auckland, New Zealand : November 19-21

Just looking at that list is scary. When I look at the flight schedule I feel positively nauseous. I think I’m in Bangkok for about 24 hours. It’s sleep, conference, fly. :)

After all these years you would think I would be used to it, but every time I plan a tour I go through the same sequence of events.

  • Someone asks me if I want to do the tour.
  • I say yes and agree to do all the dates.
  • They ask me if I am sure, because doing the whole tour is a bit stupid as it’s a killer and takes up a lot of time.
  • I say, no problem. It will be fine. I don’t like cherry-picking events as it makes me feel guilty, like I’m doing it for a holiday or something.
  • Everything is provisionally agreed.
  • I realise the magnitude of what I’ve agreed to and secretly hope I don’t get approval.
  • Approval comes through.
  • Mad panic for visas, flights and hotel bookings etc.
  • The tour starts and it’s madness for X number of days. On several occasions I will want to throw in the towel and get on a plane home, but someone else on the tour will provide sufficient counselling to keep me just on the right side of sane.
  • Tour finishes and although I’ve enjoyed it, I promise myself I will never do it again.

With less than a week to go, I booked the last of my hotels this morning, so you can tell what stage I’m at now… :)

I was reflecting on this last night and I think I know the reason I agree to these silly schedules. When I was a kid, only the “posh” kids did foreign holidays. You would come back from the summer break and people would talk about eating pasta on holiday and it seemed rather exotic. Somewhere in the back of my head I am still that kid and I don’t really believe any of these trips will ever happen, so I agree to anything. :)





OTN APAC Tour 2014 : It’s Nearly Here! was first posted on October 29, 2014 at 10:38 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

AIOUG annual Oracle conference - SANGAM14

Syed Jaffar - Tue, 2014-10-28 23:43
All India Oracle User Group (AIOUG) annual Oracle conference Sangam14 is less than 10 days away. This is the largest Oracle conference in India; it takes place every year in a different city, with thousands of attendees and over 100 different topics presented by Oracle experts from across the globe.

This year's SANGAM is scheduled for Nov 7-9 in Bangalore. Don't let the opportunity go in vain; grab it if you are in India. I am super excited about the conference and look forward to attending Tom Kyte's 'Optimizer master class', a full-day class, and also Maria's 'Oracle database in-memory option' session.

My sessions are as follows:

For more details on agenda, speakers, enrollment, visit

Look forward to seeing you in-person at the conference.

Significant Milestone: First national study of OER adoption

Michael Feldstein - Tue, 2014-10-28 22:02

For years we have heard anecdotes and case studies about OER adoption based on one (or a handful) of institutions. There are many items we think we know, but we have lacked hard data on the adoption process to back up these assumptions that have significant policy and ed tech market implications.

The Babson Survey Research Group (BSRG) – the same one that administers the annual Survey of Online Learning – has released a survey of faculty titled “Opening the Curriculum” on the decision process and criteria for choosing teaching resources, with an emphasis on Open Educational Resources (OER). While their funding from the Hewlett Foundation and from Pearson[1] is for the current survey only, there are proposals to continue the faculty OER surveys annually to get the same type of longitudinal study that they provide for online learning.

While there will be other posts (including my own) that will cover the immediate findings of this survey, I think it would be worthwhile to first provide context on why this is a significant milestone. Most of the following background and author findings is based on my interview with Dr. Jeff Seaman, one of the two lead researchers and authors of the report (the other is Dr. I. Elaine Allen).


Three years ago, when the Survey of Online Learning was in its 9th iteration, the Hewlett Foundation approached BSRG about creating reports on OER adoption. Jeff did a meta-study to see what data was already available and was disappointed with the results, so the group started to compile surveys and augment their own survey questionnaires.

The first effort, titled Growing the Curriculum and published two years ago, was a combination of results derived from four separate studies. The section on Chief Academic Officers was “taken from the data collected for the 2011 Babson Survey Research Group’s online learning report”. This report was really a test of the survey methodology and types of questions that needed to be asked.

The Hewlett Foundation is planning to develop an OER adoption dashboard, and there has been internal debate on what to measure and how. This process took some time, but once the groups came to agreement, the current survey was commissioned.

Pearson came in as a sponsor later in the process and provided additional resources to expand the scope of survey, augmented the questions to be asked, and helped with infographics, marketing, and distribution.

A key issue on OER adoption is that the primary decision-makers are faculty members. Thus the current study is based on responses from teaching faculty “(defined as having at least one course code associated with their records)”.

A total of 2,144 faculty responded to the survey, representing the full range of higher education institutions (two-year, four-year, all Carnegie classifications, and public, private nonprofit, and for-profit) and the complete range of faculty (full- and part-time, tenured or not, and all disciplines). Almost three-quarters of the respondents report that they are full-time faculty members. Just under one-quarter teach online, and they are evenly split between male and female, and 28% have been teaching for 20 years or more.

Internal Lessons

I asked Jeff what his biggest lessons have been while analyzing the results. He replied that the key meta findings are the following:

  • We have had a lot of assumptions in place (e.g. faculty are primary decision-makers on OER adoption, cost is not a major driver of the decision), but we have not had hard data to back up these assumptions, at least beyond several case studies.
  • The decision process for faculty is not about OER – it is about selecting teaching resources. The focus of studies should be on this general resource selection process with OER as one of the key components rather than just asking about OER selection.

Thus the best way to view this report is not to look for earth-shaking findings or to be disappointed if there are no surprises, but rather to see data-backed answers on the teaching resource adoption process.

Most Surprising Finding

Given this context, I pressed Jeff to answer what findings may have surprised him based on prior assumptions. The two answers are encouraging from an OER perspective.

  • Once you present OER to faculty, there's a real affinity and alignment of OER with faculty values. Jeff came away more convinced of the potential of OER than he had been going in. Unlike other technology-based subjects of BSRG studies, there is almost no suspicion of OER. Everything else BSRG has measured has had strong minority views from faculty against the topic (online learning in particular), with incredible resentment detected. This resistance or resentment is just not there with OER. It is interesting that OER, with no organized marketing plan per se, faces no natural barriers in faculty perceptions.[2]
  • In the fundamental components of OER adoption – such as perceptions of quality and discoverability and currency – there is no significant difference between publisher-provided content and OER.
Notes on Survey

This is a valuable survey, and I would hope that BSRG succeeds in getting funding (hint, hint, Hewlett and Pearson) to make this into an annual report with longitudinal data. Ideally the base demographics will increase in scope so that we get a better understanding of the unique data across institution types and program types. Currently the report separates 2-year and 4-year institutions, but it would be useful to compare 4-year public vs. private institutions and even program types (e.g. competency-based programs vs. gen ed vs. fully online traditional programs).

There is much to commend in the appendices of this report – with basic data tables, survey methodology, and even the questionnaire itself. Too many survey reports neglect to include these basics.

You can download the full report here or read below. I’ll have more in terms of analysis of the specific findings in an upcoming post or two.

Download (PDF, 1.89MB)

  1. Disclosure: Pearson is a client of MindWires Consulting – see this post for more details.
  2. It’s no bed of roses for OER, however, as the report documents issues such as lack of faculty awareness and the low priority placed on cost as a criterion in selecting teaching resources.

The post Significant Milestone: First national study of OER adoption appeared first on e-Literate.

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - Tue, 2014-10-28 17:50

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - Tue, 2014-10-28 17:50

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Tue, 2014-10-28 17:50

Yesterday we released the new version 2.0 of our product PFCLObfuscate . This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Tue, 2014-10-28 17:50

I will be co-chairing/hosting a twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here . The chat is done over twitter so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - Tue, 2014-10-28 17:50

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled " PFCLScan Reseller Program ". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs

PFCLScan Version 1.3 Released

Pete Finnigan - Tue, 2014-10-28 17:50

We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-10-28 17:50

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new content, and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2014-10-28 17:50

It has been a few weeks since my last blog post but don't worry I am still interested to blog about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Results of the NoCOUG SQL Mini-Challenge

Iggy Fernandez - Tue, 2014-10-28 15:55
As published in the November 2014 issue of the NoCOUG Journal The inventor of the relational model, Dr. Edgar Codd, was of the opinion that “[r]equesting data by its properties is far more natural than devising a particular algorithm or sequence of operations for its retrieval. Thus, a calculus-oriented language provides a good target language […]
Categories: DBA Blogs

Google Glass, Android Wear, and Apple Watch

Oracle AppsLab - Tue, 2014-10-28 15:43

I have both Google Glass and Android Wear devices (Samsung Gear Live, Moto 360), and often I wear them together.  People always come up with a question:  “How do you compare Google Glass and Android watches?”  Let me address a couple of the viewpoints here.  I would like to talk about the Apple Watch, but since it has not been officially released yet, let’s say that shape-wise it is square and looks like a Gear Live, and its features seem to be pretty similar to Android Wear, with the exception of the attempt to add more playful colors and features.  Let’s discuss it more once it is out.


I was in the first batch of Google Glass Explorers and got my Glass in mid-2013.  In the middle of this year, I first got the Gear Live, then later the Moto 360.  I always find it peculiar that Glass is the older technology while Wear is the newer one.  Should it not be easier to design a smart watch before glassware?

I do find a lot of similarities between Glass and Wear.  The fundamental similarity is that both are Android devices.  They are voice-input enabled and show you notifications.  You may install additional Android applications to personalize your experience and maximize your usage.  I see these as the true values of wearables.

Differences?  Glass does have a lot of capabilities that Android Wear lacks at the moment.  Things that probably matter most to people would be sound, phone calls, video recording, picture taking, the hands-free heads-up display, GPS, and wifi.  Unlike Android Wear, it can be used standalone;  Android Wear is only a companion gadget and has to be paired with a phone.

Is Glass superior, then?   Android Wear does provide better touch-based interaction, compared to swiping at the side of the Glass frame.  You can also play simple games like Flopsy Droid on your watch.  Pedometers and heart-rate sensors are also commonly included.  Glass also tends to get overheated easily.  Water resistance plays a role here too: you would almost never want to get your Glass wet at all, while Android Wear is water-resistant to a certain degree.  And when you are charging your watch at night, it also serves as a bedside clock.


For me, personally, although I have owned Glass longer than Wear, I have to say I prefer Android Wear over Glass for a couple of reasons.  First, there is the significant price gap ($1,500 vs. $200).  Second, especially when you add prescription lenses to Glass, it gets heavy and hurts the ear when worn for an extended period of time.  Third, I do not personally find the additional features offered by Glass useful in my daily activities;  I do not normally take pictures other than at specific moments or while I am traveling.

I also find that even though Glass is now publicly available within the US, it is still perceived as an anti-social gadget.  The term is defined in Urban Dictionary as well.  Most of the people I know who own Glass do not wear it themselves, for various reasons.  I believe improving the marketing and advertising strategy for Glass may help.

Gadget preference is personal.  What’s yours?

November 5: Daekyo Sales Cloud Reference Forum

Linda Fishman Hoyle - Tue, 2014-10-28 14:32

Join us for an Oracle Customer Reference Forum on Wednesday, November 5, 2014, at 02:00 p.m. KST / 09:00 p.m. PT to hear Mr. Sa, IT Strategy Manager, talk about why Daekyo has chosen to implement Oracle Sales Cloud.

Founded in 1975, Daekyo is the leading company in Korea's educational service market and has greatly extended its activities overseas, emerging as a multinational frontrunner in the field of education. With the expansion of the business, Daekyo realized the need for a centralized system that could successfully serve its worldwide franchises. After implementing Oracle Sales Cloud Service, Oracle ADF, Oracle IAM, Oracle WebCenter, SOA Suite, and BI Publisher, Daekyo greatly reduced total cost of ownership (TCO) and capital expenditure (CAPEX) by 20 percent, while achieving standardization of its business flow, real-time monitoring, and smart intelligence.

Register now to attend the live Forum on Wednesday, November 5 at 02:00 p.m. KST / 09:00 p.m. PT and learn more from Daekyo directly.

Handling Date Format and ADF Bindings

Andrejus Baranovski - Tue, 2014-10-28 14:25
There are tips and tricks related to date format handling and ADF bindings. When an Oracle date type attribute is used through an ADF binding expression, it will be converted to a String with the default format (yyyy-MM-dd), no matter what original format was set. This is described in the API for the Oracle date type - the Date constructor that takes a String. Below I'm going to explain how it works and how you could optimise an ADF application to handle date type attributes from the ADF bindings layer.
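To see why that default format matters, here is a minimal standalone sketch. Since oracle.jbo.domain.Date requires the ADF runtime on the classpath, the example uses java.sql.Date instead, whose valueOf method enforces the same yyyy-MM-dd pattern; it only illustrates the constraint, not the ADF API itself:

```java
import java.sql.Date;

public class DateFormatDemo {
    public static void main(String[] args) {
        // yyyy-MM-dd parses fine - the default format the
        // bindings layer produces for date attributes
        Date ok = Date.valueOf("2014-10-28");
        System.out.println("Parsed: " + ok);

        // dd/MM/yyyy is rejected - analogous to the invalid
        // argument error raised when a formatted date string
        // is passed where a date value is expected
        try {
            Date.valueOf("28/10/2014");
        } catch (IllegalArgumentException e) {
            System.out.println("Rejected: 28/10/2014");
        }
    }
}
```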

The sample application (download it here) is based on one simple VO with calculated attributes, both of oracle.jbo.domain.Date type:

The ADF UI implementation is standard - there are ADF Input Date components with regular date converters:

There is a button on the UI; it calls a custom method implemented in the AM and exposed through the bindings. This method accepts two parameters, both of oracle.jbo.domain.Date type. I'm using a regular binding expression and getting the inputValue property (this returns the formatted value of the date):

Try to invoke the method and you will get an invalid-argument error - the binding expression with the date attribute is not accepted. It tries to use the formatted value dd/MM/yyyy as the custom method argument value, but the Oracle date accepts only yyyy-MM-dd (I'm using the Alta UI skin):

The error message is written in the log:

The workaround is to use attributeValue instead of inputValue in the binding expression. The attributeValue property returns the original, unformatted date type attribute value:

Do the same test again - type both dates and press the Test Dates button; this works now:

I'm checking the date values in the custom method. The value arrives as an unformatted date, taken from the attributeValue property. If we would like to use the date value in a certain format (e.g. send it to PL/SQL in the format dd/MM/yyyy instead of the default yyyy-MM-dd), we should apply DefaultDateFormatter to retrieve the formatted String value for the original date:

Here is the output from the custom method:
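That final formatting step can be sketched in plain Java as well - hedged, since DefaultDateFormatter belongs to the ADF runtime; the standard SimpleDateFormat stands in for it here to show the same conversion:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateReformat {

    // Render a raw date value in the format a PL/SQL call expects
    // (SimpleDateFormat stands in for ADF's DefaultDateFormatter)
    static String toPlsqlFormat(Date raw) {
        return new SimpleDateFormat("dd/MM/yyyy").format(raw);
    }

    public static void main(String[] args) throws Exception {
        // Parse the default yyyy-MM-dd representation first,
        // then reformat it for the downstream PL/SQL call
        Date raw = new SimpleDateFormat("yyyy-MM-dd").parse("2014-10-28");
        System.out.println(toPlsqlFormat(raw)); // 28/10/2014
    }
}
```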

11 Ways to Wreck Your Social Relationships

Linda Fishman Hoyle - Tue, 2014-10-28 14:05

Solid, long-lasting relationships are built on people being authentic, trustworthy, altruistic, and patient.

That is true in human relationships, as well as social relationships.

Despite all that’s been written about the topic, some brands that use social still don’t follow the Golden Rule when it comes to their fans, followers and customers.

The result is often unhappy customers who initiate conversations that start with the proverbial “we need to talk.”

In this post, Mike Stiles lists 11 ways that guilty brands provoke these conversations.