
Feed aggregator

Deploying a Private Cloud at Home — Part 3

Pythian Group - Tue, 2014-10-14 14:59

Today’s blog post is part three of seven in a series dedicated to Deploying a Private Cloud at Home, where I will demonstrate how to configure the OpenStack Identity service on the controller node. We have already configured the required repo in part two of the series, so let’s get started on configuring the Keystone Identity service.

  1. Install keystone on the controller node.
    yum install -y openstack-keystone python-keystoneclient

    OpenStack uses a message broker to coordinate operations and status information among services. The message broker service typically runs on the controller node. OpenStack supports several message brokers, including RabbitMQ, Qpid, and ZeroMQ. I am using Qpid, as it is available on most distros.

  2. Install the Qpid message broker server.
    yum install -y qpid-cpp-server

    Now modify the Qpid configuration file to disable authentication by changing the following line in /etc/qpidd.conf:

    auth=no

    Then start the qpidd service and enable it to start at boot:

    chkconfig qpidd on
    service qpidd start
  3. Now configure Keystone to use the MySQL database:
    openstack-config --set /etc/keystone/keystone.conf \
       database connection mysql://keystone:YOUR_PASSWORD@controller/keystone
  4. Next, create the keystone database and database user by running the following statements at the mysql prompt as root:
    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'YOUR_PASSWORD';
  5. Now create the database tables:
    su -s /bin/sh -c "keystone-manage db_sync" keystone

    Currently we don’t have any user accounts that can authenticate to the Identity service and the other OpenStack services. So we will set up an authorization token to use as a shared secret between the Identity service and the other OpenStack services, and store it in the configuration file:

    ADMIN_TOKEN=$(openssl rand -hex 10)
    echo $ADMIN_TOKEN
    openstack-config --set /etc/keystone/keystone.conf DEFAULT \
       admin_token $ADMIN_TOKEN
  6. Keystone uses PKI tokens by default. Now create the signing keys and certificates, and restrict access to the generated data:
    keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
    chown -R keystone:keystone /etc/keystone/ssl
    chmod -R o-rwx /etc/keystone/ssl
  7. Start the Keystone Identity service and enable it to start at boot:
    service openstack-keystone start
    chkconfig openstack-keystone on

    The Keystone Identity service also stores expired tokens in the database, so we will create the following crontab entry to purge them:

    (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
    echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
  8. Now we will create the admin user, role, and tenant for Keystone, assign the roles, and create a pythian user and tenant plus a service tenant:
    export OS_SERVICE_TOKEN=$ADMIN_TOKEN
    export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
    keystone user-create --name=admin --pass=Your_Password --email=Your_Email
    keystone role-create --name=admin
    keystone tenant-create --name=admin --description="Admin Tenant"
    keystone user-role-add --user=admin --tenant=admin --role=admin
    keystone user-role-add --user=admin --role=_member_ --tenant=admin
    keystone user-create --name=pythian --pass=Your_Password --email=Your_Email
    keystone tenant-create --name=pythian --description="Pythian Tenant"
    keystone user-role-add --user=pythian --role=_member_ --tenant=pythian
    keystone tenant-create --name=service --description="Service Tenant"
  9. Now we create a service entry and API endpoint for the Identity service:
    keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
    keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
    --publicurl=http://controller:5000/v2.0 \
    --internalurl=http://controller:5000/v2.0 \
    --adminurl=http://controller:35357/v2.0
  10. Verify the Identity service installation. First, unset the temporary OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT variables:
    unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
  11. Request an authentication token by using the admin user and the password you chose for that user
    keystone --os-username=admin --os-password=Your_Password \
      --os-auth-url=http://controller:35357/v2.0 token-get
    keystone --os-username=admin --os-password=Your_Password \
      --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 \
      token-get
  12. We will save the required parameters in admin-openrc.sh as below (a consolidated sketch of creating this file follows after the list):
    export OS_USERNAME=admin
    export OS_PASSWORD=Your_Password
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://controller:35357/v2.0
  13. Next, check that everything is working and that Keystone can interact with the OpenStack services. We will source the admin-openrc.sh file to load the Keystone parameters:
    source /root/admin-openrc.sh
  14. List Keystone tokens using:
    keystone token-get
  15. List Keystone users using:
    keystone user-list
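
For convenience, the exports from step 12 can be written to the file in one go and then sourced, as in this minimal sketch (the file location and the Your_Password placeholder are assumptions; adjust them to your environment):

    printf '%s\n' \
      'export OS_USERNAME=admin' \
      'export OS_PASSWORD=Your_Password' \
      'export OS_TENANT_NAME=admin' \
      'export OS_AUTH_URL=http://controller:35357/v2.0' > /root/admin-openrc.sh
    source /root/admin-openrc.sh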

If all of the above commands return output, your Keystone Identity service is all set up and you can proceed to the next steps. In part four, I will discuss how to configure and set up the Image service to store images.

Categories: DBA Blogs

October 2014 Critical Patch Update Released

Oracle Security Team - Tue, 2014-10-14 13:49

Hello, this is Eric Maurice again.

Oracle today released the October 2014 Critical Patch Update. This Critical Patch Update provides fixes for 154 vulnerabilities across a number of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Enterprise Manager Grid Control, Oracle E-Business Suite, Oracle Supply Chain Product Suite, Oracle PeopleSoft Enterprise, Oracle JDEdwards EnterpriseOne, Oracle Communications Industry Suite, Oracle Retail Industry Suite, Oracle Health Sciences Industry Suite, Oracle Primavera, Oracle Java SE, Oracle and Sun Systems Product Suite, Oracle Linux and Virtualization, and Oracle MySQL.

In today’s Critical Patch Update Advisory, you will see a stronger than previously-used statement about the importance of applying security patches. Even though Oracle has consistently tried to encourage customers to apply Critical Patch Updates on a timely basis and recommended customers remain on actively-supported versions, Oracle continues to receive credible reports of attempts to exploit vulnerabilities for which fixes have been already published by Oracle. In many instances, these fixes were published by Oracle years ago, but their non-application by customers, particularly against Internet-facing systems, results in dangerous exposure for these customers. Keeping up with security releases is a good security practice and good IT governance.

Out of the 154 vulnerabilities fixed with today’s Critical Patch Update release, 31 are for the Oracle Database. All but 3 of these database vulnerabilities are related to features implemented using Java in the Database, and a number of these vulnerabilities have received a CVSS Base Score of 9.0.

This CVSS 9.0 Base Score reflects instances where the user running the database has administrative privileges (as is typical with pre-12 Database versions on Windows). When the database user has limited (or non-root) privilege, then the CVSS Base Score is 6.5 to denote that a successful compromise would be limited to the database and not extend to the underlying Operating System. Regardless of this decrease in the CVSS Base Score for these vulnerabilities for most recent versions of the database on Windows and all versions on Unix and Linux, Oracle recommends that these patches be applied as soon as possible because a wide compromise of the database is possible.

The Java Virtual Machine (Java VM) was added to the database with the release of Oracle 8i in early 1999. The inclusion of Java VM in the database kernel allows Java stored procedures to be executed by the database. In other words, by running Java in the database server, Java applications can benefit from direct access to relational data. Not all customers implement Java stored procedures; however support for Java stored procedures is required for the proper operation of the Oracle Database as certain features are implemented using Java. Due to the nature of the fixes required, Oracle development was not able to produce a normal RAC-rolling fix for these issues. To help protect customers until they can apply the Oracle JavaVM component Database PSU, which requires downtime, Oracle produced a script that introduces new controls to prevent new Java classes from being deployed or new calls from being made to existing Java classes, while preserving the ability of the database to execute the existing Java stored procedures that customers may rely on.

As a mitigation measure, Oracle did consider revoking all public grants to Java classes, but such an approach is not feasible with a static script. Due to the dynamic nature of Java, it is not possible to identify all the classes that may be needed by an individual customer. Oracle’s script is designed to provide effective mitigation against malicious exploitation of Java in the database to customers who are not deploying new Java code or creating Java code dynamically.

Customers who regularly develop in Java in the Oracle Database can take advantage of a new feature introduced in Oracle 12.1. By running their workloads with Privilege Analysis enabled, these customers can determine which Java classes are actually needed and remove unnecessary Grants.
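
As a rough sketch of what that could look like (the capture name below is made up, and the use of the 12c DBMS_PRIVILEGE_CAPTURE package and its result views here is an illustration rather than a prescribed procedure):

BEGIN
  -- Define and start a database-wide privilege capture
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'java_usage_capture',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('java_usage_capture');
END;
/
-- ... run the representative workload for a while, then stop and analyze:
BEGIN
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('java_usage_capture');
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('java_usage_capture');
END;
/
-- Privileges actually used vs. granted but unused during the capture window
SELECT * FROM DBA_USED_PRIVS   WHERE capture = 'java_usage_capture';
SELECT * FROM DBA_UNUSED_PRIVS WHERE capture = 'java_usage_capture';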

18 of the 154 fixes released today are for Oracle Fusion Middleware. Half of these fixes are pass-through fixes to address vulnerabilities in third-party components included in Oracle Fusion Middleware distributions. The most severe CVSS Base Score reported for these Oracle Fusion Middleware vulnerabilities is 7.5.

This Critical Patch Update also provides fixes for 25 new Java SE vulnerabilities. The highest reported CVSS Base Score for these Java SE vulnerabilities is 10.0. This score affects one Java SE vulnerability. Out of these 25 Java vulnerabilities, 20 affect client-only deployments of Java SE (and 2 of these vulnerabilities are browser-specific). 4 vulnerabilities affect client and server deployments of Java SE. One vulnerability affects client and server deployments of JSSE.

Rounding out this Critical Patch Update release are 15 fixes for the Oracle and Sun Systems Product Suite and 24 fixes for Oracle MySQL.

Note that on September 26th 2014, Oracle released Security Alert CVE-2014-7169 to deal with a number of publicly-disclosed vulnerabilities affecting GNU Bash, a popular open source command line shell incorporated into Linux and other widely used operating systems. Customers should review this Security Alert and apply the relevant security fixes to the affected systems, as its publication so close to the October 2014 Critical Patch Update did not allow these Security Alert fixes to be included in the Critical Patch Update release.
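
For reference, a quick community check (a sketch, not an Oracle-provided test) for whether a system’s Bash is still exposed to the original issue in that family, CVE-2014-6271, is:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

A patched Bash prints only "this is a test"; a vulnerable one also prints "vulnerable". Applying the vendor fixes referenced in the Security Alert remains the definitive remediation.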

For More Information:

The October 2014 Critical Patch Update is located at http://www.oracle.com/technetwork/topics/security/cpuoct2014-1972960.html

Security Alert CVE-2014-7169 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2014-7169-2303276.html. Furthermore, a list of Oracle products using GNU Bash is located at http://www.oracle.com/technetwork/topics/security/bashcve-2014-7169-2317675.html.

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance/


What Do You Need to Secure EHR? [VIDEO]

Chris Foot - Tue, 2014-10-14 11:29

Transcript

Electronic health records are becoming a regular part of the healthcare industry, but are organizations taking the right measures to secure them?

Hi, welcome to RDX. EHR systems can help doctors and other medical experts monumentally enhance patient treatment, but they also pose serious security risks.

SC Magazine reported that an employee of Memorial Hermann Health System in Houston accessed more than 10,000 patient records over the course of six years. Social Security numbers, dates of birth, and other information were stolen.

In order to deter such incidents from occurring, health care organizations must employ active security monitoring of their databases. That way, suspicious activity can readily be identified and acted upon.

Thanks for watching! Be sure to join us next time for more security best practices and tips.

The post What Do You Need to Secure EHR? [VIDEO] appeared first on Remote DBA Experts.

Uber won't want drivers in the future

Steve Jones - Tue, 2014-10-14 09:30
I'm an Uber user; it's a great service outside of cities with decent public transport. But I have been thinking about how they will justify the $17bn valuation and give people a return on that $1.2bn investment. At the same time, I've been following the autonomous car pieces with interest, and I think there is a pretty clear way this can end, especially as Uber have already said they are going
Categories: Fusion Middleware

Oracle E-Business Suite Updates From OpenWorld 2014

Pythian Group - Tue, 2014-10-14 08:29

Oracle OpenWorld has always been my most exciting conference to attend. I always see high energy levels everywhere, and it kind of revs me up to tackle new upcoming technologies. This year I concentrated on attending mostly Oracle E-Business Suite release 12.2 and Oracle 12c Database-related sessions.

On the Oracle E-Business Suite side, I started off with the Oracle EBS Customer Advisory Board meeting, which had great presentations on new features like the new touch-friendly iPad interface in Oracle EBS 12.2.4. This can be enabled by setting the “Self Service Personal Home Page mode” profile value to “Framework Simplified”. We also discussed some pros and cons of the new downtime mode feature of the adop online patching utility, which allows release update packs (like the 12.2.3 and 12.2.4 patches) to be applied without starting up a new online patching session. I will cover more details about that in a separate blog post. In the meantime, take a look at the simplified home page of my 12.2.4 sandbox instance.

Oracle EBS 12.2.4 Simplified Interface

Steven Chan’s presentation on the EBS Certification Roadmap announced upcoming support for the Chrome browser on Android tablets, IE11, Oracle Unified Directory, and more. Oracle did not extend any support deadlines for Oracle EBS 11i or R12 this time, so to all EBS customers on 11i: it’s time to move to R12.2. I also attended a good session on testing best practices for Oracle E-Business Suite, which had a good slide on some extra testing required during the online patching cycle. I am planning to do a separate blog post with more details on that, as it is an important piece of information that one might otherwise overlook. Oracle also announced a new product called Flow Builder, part of the Oracle Application Testing Suite, which helps users test functional flows in Oracle EBS.

On the 12c Database side, I attended great sessions by Christian Antognini on Adaptive Query Optimization and by Markus Michalewicz on 12c RAC operational best practices and RAC Cache Fusion internals. Markus’ Cache Fusion presentation had some great recommendations on using _gc_policy_minimum instead of turning off DRM completely with _gc_policy_time=0. There is also now a way to control DRM of an object using the DBMS_CACHEUTIL package.

I also attended sessions on some new, upcoming technologies that are picking up in the Oracle space, like Oracle NoSQL, Oracle Big Data SQL, and the Oracle Data Integrator Hadoop connectors. These products seem to have a great future ahead and a good chance of becoming mainstream on the data warehousing side of businesses.

Categories: DBA Blogs

Let the Data Guard Broker control LOG_ARCHIVE_* parameters!

The Oracle Instructor - Tue, 2014-10-14 08:20

When using the Data Guard Broker, you don’t need to set any LOG_ARCHIVE_* parameter for the databases that are part of your Data Guard configuration. The broker does that for you. Forget about what you may have heard about VALID_FOR – you don’t need that with the broker. Actually, setting any of the LOG_ARCHIVE_* parameters with an enabled broker configuration might even confuse the broker and lead to warning or error messages. Let’s look at a typical example involving the redo log transport mode. There is an enabled broker configuration with one primary database prima and one physical standby physt. The broker config files are mirrored on each site, and spfiles are in use so that the broker (the DMON background process, to be precise) can access them:

Overview

When connecting to the broker, you should always connect to a DMON running on the primary site. The only exception to this rule is when you want to do a failover: that must be done while connected to the standby site. I will now change the redo log transport mode to SYNC for the standby database. It helps to think of the log transport mode as an attribute (or property) of a certain database in your configuration, because that is how the broker sees it too.

 

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated

In this case, physt is a standby database that is receiving redo from primary database prima, which is why the LOG_ARCHIVE_DEST_2 parameter of that primary was changed accordingly:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 30 17:21:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="physt", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="physt" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Configuration for physt

The mirrored broker configuration files on all involved database servers now contain that logxptmode property. No new entry in the spfile of physt is required. The present configuration now allows us to raise the protection mode:

DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.

The next broker command is done to support a switchover later on while keeping the higher protection mode:

DGMGRL> edit database prima set property logxptmode=sync;
Property "logxptmode" updated

Notice that this doesn’t lead to any spfile entry; only the broker config files store that new property. In case of a switchover, prima will then receive redo with sync.

Configuration for prima

Now let's do that switchover and see how the broker ensures automatically that the new primary physt will ship redo to prima:

 

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    prima - Primary database
    physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to physt;
Performing switchover NOW, please wait...
New primary database "physt" is opening...
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "physt"

All I did was the switchover command, and without me specifying any LOG_ARCHIVE* parameter, the broker did it all like this picture shows:

Configuration after switchover

In particular, the spfile of the physt database now got the new entry:

 

[oracle@uhesse2 ~]$ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:43:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="prima", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="prima" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Not only is it unnecessary to specify any of the LOG_ARCHIVE* parameters, it is actually a bad idea to do so. The guideline here is: let the broker control them! Otherwise it will at least complain with warning messages. As an example of what you should not do:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:57:11 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> alter system set log_archive_trace=4096;

System altered.

Although that is the correct syntax, the broker now gets confused, because that parameter setting is not in line with what is in the broker config files. Accordingly that triggers a warning:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database
      Warning: ORA-16792: configurable property value is inconsistent with database setting

Fast-Start Failover: DISABLED

Configuration Status:
WARNING

DGMGRL> show database prima statusreport;
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
               prima    WARNING ORA-16714: the value of property LogArchiveTrace is inconsistent with the database setting
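
Before fixing it, a handy way to list every such mismatch at once is the InconsistentProperties monitorable property (a sketch; it reports, per instance, the property name together with its memory, spfile and broker values):

DGMGRL> show database prima InconsistentProperties;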

To resolve that inconsistency, I will again use a broker command – which is what I should have done instead of the ALTER SYSTEM command in the first place:

DGMGRL> edit database prima set property LogArchiveTrace=4096;
Property "logarchivetrace" updated
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Thanks to a question from Noons (I really appreciate comments!), let me add the complete list of initialization parameters that the broker is supposed to control. Most, but not all, are LOG_ARCHIVE* parameters:

LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_LAG_TARGET
DB_FILE_NAME_CONVERT
LOG_ARCHIVE_FORMAT
LOG_ARCHIVE_MAX_PROCESSES
LOG_ARCHIVE_MIN_SUCCEED_DEST
LOG_ARCHIVE_TRACE
LOG_FILE_NAME_CONVERT
STANDBY_FILE_MANAGEMENT


Tagged: Data Guard, High Availability
Categories: DBA Blogs

The new %SelectDummyTable MetaSQL

Javier Delgado - Tue, 2014-10-14 07:26
Does anyone know a PeopleSoft developer who has never used a SQL statement like the following one?

select %CurrentDateOut
from PS_INSTALLATION;

Where PS_INSTALLATION could be any single-row table in the PeopleSoft data model.

If you look at the previous statement, the SELECT clause is not retrieving any field from the PS_INSTALLATION table, but just using it to comply with ANSI SQL. The same statement could be written in Microsoft SQL Server like this:

select %CurrentDateOut;

In Oracle Database, as:

select %CurrentDateOut
from dual;

In both cases, these statements perform better, because neither solution requires accessing a physical table.

The problem with these solutions is that they are platform specific, and we want to avoid platform-specific syntax. Believe me, when you perform a platform migration, the ancestors of the programmers who used this type of syntax are suddenly very present in your mind. So, up to now, we had to stick with the SELECT ... FROM PS_INSTALLATION solution.

Until now. PeopleTools 8.54 introduces a new Meta-SQL named %SelectDummyTable, which automatically translates into a platform-specific statement. Our previous sample would be written as:

select %CurrentDateOut
from %SelectDummyTable

We now have a platform-independent and well-performing solution. What else can we ask for? ;-)
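
The same Meta-SQL also resolves inside PeopleCode, for instance in a SQLExec call (just a sketch; the variable name is mine):

Local date &currentDate;
SQLExec("SELECT %CurrentDateOut FROM %SelectDummyTable", &currentDate);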

Note: I've checked the online PeopleBooks from Oracle and at this point there is no documentation on this Meta SQL. Still, I've conducted some tests and it seems to be working correctly.

OOW14 : One week in a nutshell

Luc Bors - Tue, 2014-10-14 04:26

Mind Control?

Oracle AppsLab - Mon, 2014-10-13 16:37

Editor’s note: Hey look, a new author. Here’s the first post from Raymond Xie, who joined us nearly a year ago. You may remember him from such concept demos as geo-fencing or Pebble watchface. Raymond has been busy at work and wants to share the work he did with telekinesis. Or something, you decide. Enjoy.

You put on a headband, stare at a ball, tilt your head back-forth and left-right . . . the ball navigates through a simple maze, rushing, wavering, changing colors, and finally hitting the target.

That is the latest creation out of AppsLab: Muse Sphero Driver. When it was first shown at the OAUX Exchange during OOW, it amused many people, who called it a “mind control” game.

The setup consists of Muse, a brain-sensing headband; Sphero, a robotic ball; and a tablet to bridge the two.

Technically, it is your brainwave data (electroencephalography – EEG) driving the Sphero (adjusting speed and changing color on a spectrum from RED to BLUE, where RED means fast and active, and BLUE means slow and calm), and your head gestures (3D accelerometer – ACC) controlling the direction of the Sphero's movement. Whether or not you call that “mind control” is up to your own interpretation.

You kind of drive the ball with your mind, but mostly with brainwave noise rather than conscious thought. It is still too early to derive accurate “mind control” from the EEG data of any regular person, for two reasons:

1. At the scalp level, the EEG noise-to-signal ratio is very poor;
2. The correlation between EEG and mind activity still needs to be established.

But it does open up a dialog in HCI, such as voice control vs. mind control (silence); or in robotics, where instead of asking the machine to “see”/“understand”, we can “see”/“understand” and impersonate it with our mind and soul.

While it is difficult to read out the “mind” (any mind activity) transparently, we think it is quite doable to map your mind into certain states, and use the “state” as a command indirectly.

We may do something around this area. So stay tuned.

Meanwhile, you can start practicing yoga or Zen, to get a better noise-to-signal ratio and to set your mind into a certain state with ease.

Using Global Temporary Tables in Application Engine

Javier Delgado - Mon, 2014-10-13 15:52
One of the new PeopleTools 8.54 features that probably went a bit unnoticed amidst the excitement over the new Fluid interface is the ability of Application Engine programs to take advantage of Global Temporary Tables (GTTs) when using an Oracle Database.

What are GTTs?
Global Temporary Tables were introduced by Oracle back in the 8i release of its database product. These tables are session specific, meaning that the data inserted in them lasts only until the session is closed (Oracle Database can also keep the data only until the next commit, but that option is not used by PeopleSoft). The data inserted into the table by each session is not seen by other sessions. In other words, the behavior is very similar to Application Engine temporary tables. The benefit of using a database-supported solution rather than the traditional temporary tables is better performance, since GTTs are optimized for temporary data.
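
As a quick illustration of that session-specific behavior outside PeopleSoft (a sketch; the table and column names are made up for the example):

CREATE GLOBAL TEMPORARY TABLE demo_gtt (id NUMBER) ON COMMIT PRESERVE ROWS;

-- Session 1
INSERT INTO demo_gtt VALUES (1);
COMMIT;
SELECT COUNT(*) FROM demo_gtt;   -- returns 1

-- Session 2, querying the same table
SELECT COUNT(*) FROM demo_gtt;   -- returns 0: each session sees only its own rows
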
How is it implemented in PeopleTools?

The implementation in PeopleTools is quite simple. When selecting Temporary Table as the record type, a new option is enabled: "Global Temporary Table (GTT)".

The build SQL generated by PeopleTools is slightly different from that for traditional tables:

CREATE GLOBAL TEMPORARY TABLE PS_BN_JOB_WRK (
  PROCESS_INSTANCE DECIMAL(10) NOT NULL,
  EMPLID VARCHAR2(11) NOT NULL,
  EMPL_RCD SMALLINT NOT NULL,
  EFFDT DATE,
  EFFSEQ SMALLINT NOT NULL,
  EMPL_STATUS VARCHAR2(1) NOT NULL
) ON COMMIT PRESERVE ROWS TABLESPACE BNAPP
/
Note: The SQL Build process still creates as many instances of the table as it did with traditional temporary tables. This sounds like a bug to me, as my guess is that the whole idea of using GTTs is to be able to share a table without actually sharing the data, but I may be wrong. In any case, it does not do any harm. Any insight on this? 

Constraints
Due to specific characteristics of GTTs, there are some limitations regarding when they can be used:
  • If the Application Engine is run in online mode, then the GTTs cannot be shared between different programs on the same run.
  • You cannot use Restart Enabled with GTTs as the data is deleted when the session ends. In its current version, PeopleBooks state otherwise, but I think it is a typo.
  • %UpdateStats is not supported. Before Oracle Database 12c, the statistics would be shared among all sessions. Oracle Database 12c also supports session-specific statistics, which would be the desired behavior in PeopleSoft (from a higher-level point of view, programmers expect the temporary table to be dedicated to the instance). I guess %UpdateStats is not supported because Oracle Database 11g is still supported by PeopleTools 8.54, and in that case running statistics would generate unexpected results. Still, the DBA can run statistics outside of the Application Engine program (see the sketch below).
Note: As learnt at Oracle OpenWorld 2014, Oracle is evaluating support for Oracle Database 12c session-specific statistics for GTTs in a future release of PeopleTools.
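
Gathering statistics manually on such a record outside the Application Engine run could look like this (a sketch; SYSADM is the usual PeopleSoft schema owner and is an assumption here):

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_BN_JOB_WRK');
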
Conclusion
If you are moving to PeopleTools 8.54 and you want to improve the performance of a given Application Engine program, the GTTs may bring good value to your implementation. Please remember that you need to be using an Oracle Database.

    PeopleTools 8.54 will be the last release to certify Crystal Reports

    Javier Delgado - Mon, 2014-10-13 15:49
    It was just a question of time. In July 2011, Oracle announced that newly purchased PeopleSoft applications would no longer include a Crystal Reports license. Some years before, in October 2007, Business Objects had been acquired by SAP. You don't need to read Machiavelli's Il Principe to understand why the license was no longer included.

    In order to keep customers' investment in custom reports safe, Oracle kept updating Crystal Reports certifications for those customers who purchased PeopleSoft applications before that date. In parallel, BI Publisher was improved release after release, providing a viable replacement for Crystal Reports, and in many areas surpassing its features.

    Now, as announced in My Oracle Support's document 1927865.1, PeopleTools 8.54 will be the last release for which Crystal Reports will be certified, and support for report issues will end together with the expiration of PeopleSoft 9.1 applications support.

    PeopleTools 8.54 was just released a couple of months ago, so there is no need to panic, but PeopleSoft applications managers would do well to start coming up with a strategy to convert their existing Crystal Reports into BI Publisher reports.

    Extending SaaS with PaaS free eLearning lectures

    Angelo Santagata - Mon, 2014-10-13 15:17

    Hey all,

    Over the last 4 months I've been working with some of my US friends to create an eLearning version of the PTS extending SaaS with PaaS workshop I co-wrote. Well, the time has come and we've published the first 4 eLearning seminars, and I'm sure there will be more coming.

    Check em out and let me know what you think and what other topics need to be covered.

    https://apex.oracle.com/pls/apex/f?p=44785:24:0::::P24_CONTENT_ID,P24_PREV_PAGE:10390,24

    How to measure Oracle index fragmentation

    Yann Neuhaus - Mon, 2014-10-13 14:39

    At Oracle Open World 2014, or rather at Oak Table World, Chris Antognini presented 'Indexes: Structure, Splits and Free Space Management Internals'. It's not something new, but it's still something that is not always well understood: how index space is managed, block splits, fragmentation, coalesce and rebuilds. Kyle Hailey has made a video of it available here.
    For me, it is the occasion to share the script I use to see whether an index is fragmented or not.

    First, forget about 'analyze index validate structure', which locks the table, and about DEL_LEAF_ROWS, which counts only the deletion flags, which are transient. The problem is not the amount of free space; the problem is where that free space is. If you will insert again into the same range of values, then that space will be reused. Wasted space occurs only when a lot of rows were deleted in a range where you will not insert again. For example, when you purge old ORDERS, the index on ORDER_DATE - or on an ORDER_ID coming from a sequence - will be affected. Note that the problem occurs only for sparse purges, because fully emptied blocks are reclaimed when needed and can receive rows from another range of values.

    I have a script that shows the number of rows per block, as well as used and free space per block, and aggregates that by range of values.

    First, let's create a table with a date and an index on it:

    SQL> create table DEMOTABLE as select sysdate-900+rownum/1000 order_date,decode(mod(rownum,100),0,'N','Y') delivered , dbms_random.string('U',16) cust_id
      2  from (select * from dual connect by level

    My script shows 10 buckets with begin and end values and, for each of them, the average number of rows per block and the free space:

    SQL> @index_fragmentation 
    
    ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
    --------- -- --------- ---------- ----------- ----------- ---------- -----                     
    24-APR-12 -> 02-AUG-12        377        7163          11        266                        
    03-AUG-12 -> 11-NOV-12        377        7163          11        266                        
    11-NOV-12 -> 19-FEB-13        377        7163          11        266                        
    19-FEB-13 -> 30-MAY-13        377        7163          11        265                        
    30-MAY-13 -> 07-SEP-13        377        7163          11        265                        
    07-SEP-13 -> 16-DEC-13        377        7163          11        265                        
    16-DEC-13 -> 26-MAR-14        377        7163          11        265                        
    26-MAR-14 -> 03-JUL-14        377        7163          11        265                        
    04-JUL-14 -> 11-OCT-14        377        7163          11        265                        
    12-OCT-14 -> 19-JAN-15        376        7150          11        265                        
    

    Note that the script reads the whole table (it can do a sample, but here it is 100%) - or rather, not the table itself but only the index. It counts the index leaf blocks with the undocumented function sys_op_lbid(), which is used by Oracle to estimate the clustering factor.
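
    For reference, the core of that counting logic looks roughly like this (a sketch; sys_op_lbid() is undocumented, 49721 stands in for the OBJECT_ID of DEMOINDEX from USER_OBJECTS, and the hint plus the NOT NULL predicate are there to force access through the index):

    select /*+ index(t,DEMOINDEX) */
           sys_op_lbid(49721, 'L', t.rowid) leaf_block, count(*) rows_per_block
    from   DEMOTABLE t
    where  order_date is not null
    group by sys_op_lbid(49721, 'L', t.rowid)
    order by 1;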

    So here I have no fragmentation. All blocks have about 377 rows and almost no free space. This is because I inserted the rows in increasing order, and so the so-called '90-10' block split occurred.

    Let's see what I get if I delete most of the rows before the 01-JAN-2014:

    SQL> delete from DEMOTABLE where order_date
    SQL> @index_fragmentation
    
    ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
    --------- -- --------- ---------- ----------- ----------- ---------- -----                     
    25-APR-12 -> 02-AUG-12          4          72          99        266 oooo                   
    03-AUG-12 -> 11-NOV-12          4          72          99        266 oooo                   
    11-NOV-12 -> 19-FEB-13          4          72          99        266 oooo                   
    19-FEB-13 -> 30-MAY-13          4          72          99        265 oooo                   
    30-MAY-13 -> 07-SEP-13          4          72          99        265 oooo                   
    07-SEP-13 -> 16-DEC-13          4          72          99        265 oooo                   
    16-DEC-13 -> 26-MAR-14          4          72          99        265 oooo                   
    26-MAR-14 -> 03-JUL-14          4          72          99        265 oooo                   
    04-JUL-14 -> 11-OCT-14         46         870          89        265 oooo                   
    12-OCT-14 -> 19-JAN-15        376        7150          11        265                        
    

    I have the same buckets, and same number of blocks. But blocks which are in the range below 01-JAN-2014 have only 4 rows and a lot of free space. This is exactly what I want to detect: I can check if that free space will be reused.

    Here I know I will not enter any orders with a date in the past, so those blocks will never have an insert into them. I can reclaim that free space with a COALESCE:

    SQL> alter index DEMOINDEX coalesce;
    
    Index altered.
    
    SQL> @index_fragmentation 
    
    ORDER_DAT to ORDER_DAT rows/block bytes/block %free space     blocks free                      
    --------- -- --------- ---------- ----------- ----------- ---------- -----                     
    25-APR-12 -> 03-OCT-14        358        6809          15         32                        
    03-OCT-14 -> 15-OCT-14        377        7163          11         32                        
    15-OCT-14 -> 27-OCT-14        377        7163          11         32                        
    27-OCT-14 -> 08-NOV-14        377        7163          11         32                        
    08-NOV-14 -> 20-NOV-14        377        7163          11         32                        
    20-NOV-14 -> 02-DEC-14        377        7163          11         32                        
    02-DEC-14 -> 14-DEC-14        377        7163          11         32                        
    14-DEC-14 -> 26-DEC-14        377        7163          11         32                        
    27-DEC-14 -> 07-JAN-15        377        7163          11         32                        
    08-JAN-15 -> 19-JAN-15        371        7056          12         32                        
    

    I still have 10 buckets because this is defined in my script, but each bucket now has fewer rows. I've defragmented the blocks and reclaimed the free blocks.

    Time to share the script now. Here it is: index_fragmentation.zip

    The script is quite ugly. It's SQL generated by PL/SQL. It's generated because it selects the index columns. And as I don't want it to be too large, it is neither indented nor commented. However, if you run it with set serveroutput on, you will see the generated query.

    How to use it? Just change the owner, table_name, and index name. It reads the whole index so if you have a very large index you may want to change the sample size.

    Color Analysis of Flags – Patterns and symbols – Visualizations and Dashboards

    Nilesh Jethwa - Mon, 2014-10-13 13:54

    Recently, here at InfoCaptor, we started a small research project on the subject of flags. We wanted to answer certain questions, like which colors are most frequently used across all country flags, what the different patterns are, etc.

    Read more Color and Pattern analysis on Flags of Countries – Simple visualization but interesting data


    Oracle Critical Patch Update October 2014 - Massive Patch

    Just when you thought the Oracle Database world was getting safer, Oracle will be releasing fixes for 32 database security bugs on Tuesday, October 14th. This is in stark contrast to the previous twenty-five quarters, where the high was 16 database bugs and the average per quarter was 8.2 database bugs. For the previous two years, the most database bugs fixed in a single quarter was six.

    In addition to the 32 database security bugs, there are a total of 155 security bugs fixed in 44 different products.

    Here is a brief analysis of the pre-release announcement for the upcoming October 2014 Oracle Critical Patch Update (CPU).

    Oracle Database

    • There are 32 database vulnerabilities; only one is remotely exploitable without authentication and 4 are applicable to client-side only installations.
    • Since at least one database vulnerability has a CVSS 2.0 metric of 9.0 (critical for a database vulnerability), this is a fairly important CPU due to severity and volume of fixes.
    • The remotely exploitable without authentication bug is likely in Application Express (APEX).  Any organizations running APEX externally on the Internet should look to apply the relevant patches immediately.  To patch APEX, the newest version must be installed, which requires appropriate testing and upgrading of applications.
    • There are four vulnerabilities applicable to client-side-only installations, and likely most are in JDBC.
    • Core RDBMS and PL/SQL are listed as patched components, so most likely there are critical security vulnerabilities in all database implementations.

    Oracle Fusion Middleware

    • There are 17 new Oracle Fusion Middleware vulnerabilities, 13 of which are remotely exploitable without authentication and the highest CVSS score being 7.5.
    • Various Fusion Middleware products are listed as vulnerable, so you should carefully review this CPU to determine the exact impact to your environment.
    • The core WebLogic Server is listed as a patched component, therefore, most likely all Fusion Middleware customers will have to apply the patch.

    Oracle E-Business Suite 11i and R12

    • There are nine new Oracle E-Business Suite 11i and R12 vulnerabilities, seven of which are remotely exploitable without authentication.  Many of these are in core Oracle EBS components such as Oracle Applications Framework (OAF) and Application Object Library (AOL/FND).  Even though the maximum CVSS score is 5.0, most of these vulnerabilities should be considered high risk.
    • All DMZ implementations of Oracle EBS should carefully review the CPU to determine if their environment is vulnerable.  As all Oracle EBS CPU patches are now cumulative, the CPU patch should be prioritized, or mitigating controls, such as AppDefend, should be implemented.

    Planning Impact

    • We anticipate this quarter's CPU to be higher risk than most and should be prioritized.  Based on the patched components, this may be a higher than average risk CPU for all Oracle database environments.
    • As with all previous CPUs, this quarter's security patches should be deemed critical and you should adhere to the established procedures and timing used for previous CPUs.
    • For Oracle E-Business Suite customers, DMZ implementations may have to apply this quarter's patch faster than previous quarters due to the number and severity of bugs.
    Tags: Oracle Critical Patch Updates
    Categories: APPS Blogs, Security Blogs

    Here We Grow Again

    Oracle AppsLab - Mon, 2014-10-13 12:18

    Cheesy title aside, the AppsLab (@theappslab) is growing again, and this time, we’re branching out into new territory.

    As part of the Oracle Applications User Experience (@usableapps) team, we regularly work with interaction designers, information architects and researchers, all of whom are pivotal to ensuring that what we build is what users want.

    Makes sense, right?

    So, we’re joining forces with the Emerging Interactions team within OAUX to formalize a collaboration that has been ongoing for a while now. In fact, if you read here, you’ll already recognize some of the voices, specifically John Cartan and Joyce Ohgi, who have authored posts for us.

    For privacy reasons (read, because Jake is lazy), I won’t name the entire team, but I’m encouraging them to add their thoughts to this space, which could use a little variety. Semi-related, Noel (@noelportugal) was on a mission earlier this week to add content here and even rebrand this old blog. That seems to have run its course quickly.

    One final note, another author has also joined the fold, Mark Vilrokx (@mvilrokx); Mark brings a long and decorated history of development experience with him.

    So, welcome everyone to the AppsLab team.

    Memory

    Jonathan Lewis - Mon, 2014-10-13 10:24

    On a client site recently, experimenting with a T5-2 – fortunately a test system – we decided to restart an instance with a larger SGA. It had been 100GB, but with 1TB of memory and 256 threads (2 sockets, 16 cores per socket, 8 threads per core) it seemed reasonable to crank this up to 400GB for the work we wanted to do.

    It took about 15 minutes for the instance to start; worse, it took 10 to 15 seconds for a command-line call to SQL*Plus on the server to get a response; worse still, if I ran a simple “ps -ef” to check what processes were running the output started to appear almost instantly but stopped after about 3 lines and hung for about 10 to 15 seconds before continuing. The fact that the first process name to show after the “hang” was one of the Oracle background processes was a bit of hint, though.

    Using truss on both the SQL*Plus call and on the ps call, I found that almost all the elapsed time was spent in a call to shmat (shared memory attach); a quick check with "ipcs -ma" told me (as you might guess) that the chunk of shared memory identified by truss was one of the chunks allocated to Oracle's SGA. Using pmap on the pmon process to take a closer look at the memory, I found that it consisted of a few hundred pages sized at 256MB and a lot of pages sized at 8KB; this was a little strange since the alert log had told me that the instance was allocating about 1,600 memory pages of 256MB (i.e. 400GB) and 3 pages of 8KB - clearly a case of good intentions failing.

    It wasn’t obvious what my next steps should be – so I bounced the case off the Oak Table … and got the advice to reboot the machine. (What! – it’s not my Windows PC, it’s a T5-2.) The suggestion worked: the instance came up in a few seconds, with shared memory consisting of a few 2GB pages, a fair number of 256MB pages, and a few pages of other sizes (including 8KB, 64KB and 2MB).

    There was a little more to the advice than just rebooting, of course; and there was an explanation that fitted the circumstances. The machine was using ZFS and, in the absence of a set limit, the file system cache had at one point managed to acquire 896 GB of memory. In fact, when we first tried to start the instance with a 400GB SGA, Oracle couldn't start up at all until the system administrator had reduced the filesystem cache and freed up most of the memory; even then, so much of the memory had originally been allocated in 8KB pages that Oracle had made a complete mess of building a 400GB memory map.

    I hadn’t passed all these details to the Oak Table but the justification for the suggested course of action (which came from Tanel Poder) sounded like a perfect description of what had been happening up to that point. In total his advice was:

    • limit the ZFS ARC cache (with two possible strategies suggested; a sketch of one follows after this list)
    • use sga_target instead of memory_target (to avoid a similar problem on memory resize operations)
    • start the instance immediately after the reboot
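
    For the first point, one common way to cap the ARC on Solaris (a sketch; the 32 GB value is an arbitrary example, not the value used on this system) is a line in /etc/system followed by a reboot:

    set zfs:zfs_arc_max=34359738368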

    Maxim: Sometimes the solution you produce after careful analysis of the facts looks exactly like the solution you produce when you can’t think of anything else to do.


    Digital Learning – LVC: It’s the attitude, stupid!

    The Oracle Instructor - Mon, 2014-10-13 06:33

    The single most important factor for successful digital learning is the attitude of both the instructor and the attendees towards the course format. Delivering countless Live Virtual Classes (LVCs) for Oracle University made me realize that. There are technical prerequisites, of course: a reliable and fast network connection and a good headset are mandatory, else the participant is doomed from the start. Other prerequisites are the same as for traditional courses: good course material, a working lab environment for hands-on practices and, last but not least, knowledgeable instructors. For that part, notice that LVCs use the very same courseware, lab environments and instructors as our classroom courses at Oracle University education centers. The major difference is in your head :-)

    When delivering my first couple of LVCs, I felt quite uncomfortable with the new format. Accordingly, my performance was not as good as usual. Meanwhile, I consider the LVC format totally adequate for my courses, and that attitude enables me to deliver them with the same performance as my classroom courses. Actually, they are even better in some respects: I always struggled to produce clean sketches with readable handwriting on the whiteboard. Now look at this MS Paint sketch from one of my Data Guard LVCs:

    Data Guard Real-Time Apply

    Attendees get all my sketches by email afterwards if they like.

    In short: Because I’m happy delivering through LVC today, I’m now able to do it with high quality. The attitude defines the outcome.

    Did you ever have a teacher in school that you just disliked for some reason? It was hard to learn anything from that teacher, right? Even if that person was competent.

    So this is also true on the side of the attendee: The attitude defines the outcome. If you take an LVC thinking “This cannot work!”, chances are that you are right just because of your mindset. When you attend an LVC with an open mind – even after some initial trouble because you need to familiarize yourself with the learning platform and the way things are presented there – it is much more likely that you will benefit from it. You may even like it better than classroom courses because you can attend from home without the time and expenses it takes to travel :-)

    Some common objections against LVCs that I have heard from customers, together with my usual responses:

    An LVC doesn’t deliver the same amount of interaction as a classroom course!

    That is not necessarily so: You are in a small group (mostly less than 10) that is constantly in an audio conference. Unmute yourself and say anything you like, just like in a classroom. Additionally, you have a chatbox available. This is sometimes extremely helpful, especially with non-native speakers in the class :-) You can easily exchange email addresses using the chatbox as well and stay in touch even after the LVC.

    I have no appropriate working place to attend an LVC!

    Then you have no appropriate working place at all for anything that requires a certain amount of concentration. Talk to your manager about it – maybe something like a quiet room is available for the duration of the LVC.

    I cannot keep up my attention when staring at the computer screen the whole day!

    Of course not, that is why we have breaks and practices in between the lessons.

    Finally, I would love to hear about your thoughts and experiences with online courses! What is your attitude towards Digital Learning?


    Tagged: Digital Learning, LVC
    Categories: DBA Blogs