
Feed aggregator

Pivotal Cloud Foundry Installed: Let's Create an ORG / USER to Get Started

Pas Apicella - Thu, 2014-06-26 18:16
I installed Pivotal Cloud Foundry 1.2 recently, and the commands below are what I ran using the CLI to quickly create an ORG and a USER to get started. The steps below assume you are connected as the ADMIN user to set up a new ORG.

Cloud Foundry CLI Commands as follows

cf api {cloud end point}
cf create-org pivotal
cf create-user pas pas
cf set-org-role pas pivotal OrgManager
cf target -o pivotal
cf create-space development
cf create-space test
cf create-space production
cf set-space-role pas pivotal production SpaceDeveloper
cf set-space-role pas pivotal development SpaceDeveloper
cf set-space-role pas pivotal test SpaceDeveloper
cf login -u pas -p pas -s development
Categories: Fusion Middleware

Oracle Priority Service Infogram for 26-JUN-2014

Oracle Infogram - Thu, 2014-06-26 16:06

RDBMS
One of our TAMs recommends this very handy document: Oracle Database (RDBMS) Releases Support Status Summary
SQL
From that JEFF SMITH: Oracle Database 12c SQL Translation Framework: Fixing Bad SQL.
From the AMIS TECHNOLOGY BLOG: SQL: combine inline PL/SQL function with inline view in Oracle Database 12c SQL Query.
ODI
Oracle Data Integrator 12c - Creating and Connecting to Master and Work Repositories, on YouTube.
Oracle TimesTen
Oracle TimesTen - Real-Time Relational In-Memory Database and Cache, an overview.
Fusion
From the JDeveloper PMs Blog: Oracle JDeveloper and Oracle ADF 12.1.3 Are Here.
Oracle Fusion Middleware Summer Camps IV is coming up in August, register here.
SOA
The Oracle SOA Suite - Team Blog lets us know: Oracle Unveils Oracle SOA Suite 12c.
Java
The Java Source announced: OTN Virtual Technology Summit - July 9.
Oracle BI
Trying to understand the Oracle Reference Architecture for Information Management, from Oracle BI By Bakboord.
Virtual Compute Appliance
Oracle updates Virtual Compute Appliance with latest hardware, from V3.co.uk.
Auditing
From Integrigy, a very useful article for auditors on Trusting Privileged Users, DBMS_SQLHASH, and Three Misconceptions about Encryption.
Business
Understanding Disruption: Insights From The History Of Business, from Forbes.
Troubleshooting
From Cristóbal Soto's Blog: Data source in suspended state: BEA-001156 error because maximum number of sessions was exceeded in the database (ORA-00018)
EBS
From the Oracle E-Business Suite Support Blog:
During R12.2.3 Upgrade QP_UTIL_PUB Is Invalid
Announcing RapidSR for Oracle Payables: Automated Troubleshooting (AT)
New Service Contracts Functionality - Contracts Merge
My Oracle Support Accreditation for E-Business Suite
Journal Lines Reconciliation - What's Different in Release 12?
Webcast: Setup and Usage of Configure to Order (CTO) Business Flow in Sync with Work in Process
General Ledger - Accounting Setups Diagnostic Now Available!
Information From the Field Service & Depot Repair CAB Meeting
...and Finally
VR
Why a Chunk of Cardboard Might Be the Biggest Thing at Google I/O, from GIZMODO.
Computing on the Mac
Can you tell the editor has a Mac for personal use? Well, he does. And here's an article on mucking about on the Mac terminal from lifehacker: Eight Terminal Utilities Every OS X Command Line User Should Know.
Spotify Tips

Another product I love is Spotify, the music streaming service, and it’s running on the Mac as I type this, shaking up the basement around my office. Here are some nice tips I found on it: The Best Spotify Tips and Tricks You’re Probably Not Using, from Lifehacker.

Partner Webcast – Modernizing Oracle Forms for the Cloud era

Oracle Forms is one of the most widely used tools for building applications for the Oracle database. Many organizations still run enterprise Oracle Forms applications created in the 90s, leading in...

Categories: DBA Blogs

Register OAM WebGate from WebGate host

Online Apps DBA - Thu, 2014-06-26 15:05
Hi All, in this post I will explain how one can register a WebGate from the WebGate host rather than from the OAM Admin Console or OAM Admin Host. Refer to posts 1 and 2 to understand the concepts of WebGate registration in OAM 11g. Inband registration mode is used when the Web Server Administrator and OAM [...]

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

ADF Faces 12.1.3 Features Demo - Partial

Shay Shmeltzer - Thu, 2014-06-26 12:35

The new Oracle ADF and JDeveloper 12.1.3 is out and it comes with a bunch of new features, especially in the UI layer - ADF Faces.

You can read the new features document on OTN, and you should also look into the new components demo for some inspiration.

For a quick overview of some of the new UI capabilities check out this quick video that shows some of the key new features.


Categories: Development

Sam Alapati’s 12c OCP upgrade book includes test software with key DBA skills

Bobby Durrett's DBA Blog - Thu, 2014-06-26 11:42

Sweet!  I just installed the software from the CD that came with Sam Alapati’s book related to the OCP 12c upgrade exam and found that it has questions related to the key DBA skills section of the test.  Now I feel good about my test preparation materials.  The first time I took the test I didn’t have this degree of guidance on what to study, especially on the key DBA skills section.

Now I have questions on key DBA skills from the testing software included with Sam Alapati’s book, questions on key DBA skills in the Kaplan Selftest software,  and a large section on key DBA skills in Matthew Morris’ book.

I’ve read Matthew’s book already, and I’ve read the key DBA skills part twice. Now I’m going to start reading Sam’s book and then move on to the software tests. All this to buy me one more correct answer on the test, but I’d rather over-prepare this time instead of leaving it to chance.

- Bobby

Categories: DBA Blogs

Welcome to Blackbird.io Employees and Clients

Pythian Group - Thu, 2014-06-26 11:29

Today, we announced that Pythian has entered into an agreement to acquire Blackbird.io, itself the result of a recent merger between PalominoDB and DriveDev.

I want to start with a hearty welcome to the 40+ new esteemed collaborators joining our firm today. Simultaneously, I want to welcome Blackbird.io’s valued clients to the Pythian family.

I am looking forward to cultivating a long-lasting collaboration and friendship with each one of you, many of whom I have already counted as friends for years.

To that point, I want to highlight my longstanding friendship and collaboration with Laine Campbell, the CEO of Blackbird.io. I first met Laine in 2007 and was impressed by her intelligence, her energy, her charisma and, most of all, her remarkable passion for doing the right thing by her team, her clients, and her community.

In February 2008, I sent Laine an email with the subject “Join us?”, the most important line of which was “I’m looking for a founder for a new office in the Bay Area.”

Laine was gracious in her reply: “At this point, I’m absolutely packed with long-term clients.  I’m quite comfortable with revenue and challenge and location.  I really am flattered you’d consider me for the position, but I’m going to have to pass.” That was only about a year after she had founded PalominoDB.

Laine and I have been friends ever since and have made a discipline of trading notes and advice about our respective businesses.

As we fast-forward six years to the present, Laine and her team have achieved what many might have thought impossible. Out of thin air, with no venture capital and in only eight short years, Blackbird.io is a business eerily reminiscent of Pythian in 2008… a feat that took us 11 years.

Earlier this year, PalominoDB joined forces with DriveDev, itself a highly successful DevOps business transformation company founded in 2007 to create Blackbird.io. Blackbird.io delivers a coherent and differentiated vision that helps transform businesses through breakthrough velocity, availability, security, performance, and cost.

In what has to be one of the longest corporate romances our niche has known, Laine reached out to me in May indicating that she’d like to accept my original offer and join forces with us. It was my turn to be flattered and go through a week’s soul searching.  I was not alone in the exercise. A lot of soul searching, strategic thinking, and sheer hard work has gone into this announcement today. By the end of our efforts, it became clear that joining forces would dramatically accelerate our ability to reshape the enterprise IT services landscape.

I would like to specifically thank Laine Campbell, Aaron Lee, and Vicki Vance as owners of Blackbird.io for their courage, vision, and determination through these demanding weeks. On the Pythian side, I would like to especially thank  Andrew Waitman, without whom this deal would be impossible to contemplate, Alain Tardif and Siobhan Devlin, and the rest of the executive team at Pythian who’ve moved mountains on our behalf to make it real. I don’t want to forget to highlight as well the external support of Bob Ford at Kelly Santini and our financing partners.

We have months of hard work ahead of us integrating our businesses. It’s our goal and imperative to listen and learn from each other, and pick and choose the best parts of each respective business as we weave a coherent and integrated whole. This will be the first meaningful merger Pythian undertakes.

Together we are almost 350 strong and are home to the industry’s largest open-source database managed services capability. Together we will accelerate the adoption of Enterprise DevOps and help countless SaaS, retail, media, and online businesses leave their competitors in the dust. And that is a vision worth getting excited about.

Categories: DBA Blogs

Is the DOE backing down on proposed State Authorization regulations?

Michael Feldstein - Thu, 2014-06-26 08:25

Now witness the firepower of this fully written and delivered WCET / UPCEA / Sloan-C letter!

- D. Poulin

One of the policies that we’re tracking at e-Literate is the proposed State Authorization regulation that the US Department of Education (DOE) has been pushing. The latest DOE language represents a dramatic increase in federal control of distance education and in bureaucratic compliance required of institutions and states. In the most recent post we shared a letter from WCET, UPCEA and Sloan-C to Secretary Duncan at the DOE.

What does it take to get all of the higher education institutions and associations to agree? Apparently the answer is for the Department of Education to propose its new State Authorization regulations. [snip]

Here’s what is newsworthy – the idea and proposed language is so damaging to innovation in higher ed (which the DOE so fervently supports in theory) and so burdensome to institutions and state regulators that three higher ed associations have banded together to oppose the proposed rules. WCET (WICHE Cooperative on Educational Technologies), UPCEA (University Professional and Continuing Education Association) and Sloan-C (Sloan Consortium) wrote a letter to Secretary Arne Duncan calling for the DOE to reconsider their planned State Authorization regulations.

While it is unclear how direct an impact the letter had, yesterday brought welcome news from Ted Mitchell at the DOE: they have effectively paused their efforts to introduce new State Authorization regulations. As described at Inside Higher Ed:

The Obama administration is delaying its plan to develop a controversial rule that would require online programs to obtain approval from each and every state in which they enroll students, a top Education Department official said Wednesday.

Under Secretary of Education Ted Mitchell said that the administration would not develop a new “state authorization” regulation for distance education programs before its November 1 deadline.
“We, for all intents and purposes, are pausing on state authorization,” Mitchell said during remarks at the Council for Higher Education Accreditation conference. “It’s complicated, and we want to get it right.”

Mitchell said he wanted to make sure the regulation was addressing a “specific problem” as opposed to a general one. The goal, he said, should be to promote consumer protection while also allowing for innovation and recognizing that “we do live in the 21st century and boundaries don’t matter that much.”

It gets better. Mitchell made this statement while at a workshop for the Council for Higher Education Accreditation, and his speech mentioned his desire to clean up some of the regulatory burden on accrediting agencies. As described at the Chronicle:

Ted Mitchell, the under secretary of education, told attendees at a workshop held by the Council for Higher Education Accreditation that accreditors’ acceptance of more responsibility over the years for monitoring colleges had created “complicated expectations for institutions, regulators, politicians, and the public.”

Much of the work accreditors do to ensure that colleges comply with federal regulations is “less appropriate to accreditors than it may be to the state or federal government,” said Mr. Mitchell, who is the No. 2 official in the Department of Education and oversees all programs related to postsecondary education and federal student aid.

“If I could focus on a spot today,” he said, “it would be the compliance work and seeing if we could relieve accreditors of the burden of taking that on for us.”

This is just a speech, and we do not know what the DOE will eventually propose (or not) on State Authorization. But it is certainly a welcome sign that the department has heard the concerns of many in the higher education community.

Update: See Russ Poulin’s blog post at WCET with more context and inside info.

WCET joined with Sloan-C and UPCEA to write a letter to Education Secretary Arne Duncan and Under Secretary Mitchell about our concerns with the direction the Department was taking and to give recommendations on how the Department might proceed. I have also been talking with numerous groups and individuals that have been writing their own letters or have used their contacts.

On Tuesday of this week, Marshall Hill (Executive Director of the National Council on State Authorization Reciprocity Agreements) and some high-ranking members of the National Council leadership board met with Mr. Mitchell. According to Marshall, Mr. Mitchell was aware of many of the concerns that they raised and was very supportive of reciprocity. From that meeting, Mr. Mitchell indicated that more work needed to be done, but did not suggest the delay.

Mr. Mitchell’s reference in the Inside Higher Ed article about addressing a “specific problem” showed that our message was being heard.

The post Is the DOE backing down on proposed State Authorization regulations? appeared first on e-Literate.

Oracle Parallel Query: Did you use MapReduce for years without knowing it?

Yann Neuhaus - Thu, 2014-06-26 06:42

I read this morning that MapReduce is dead. The first time I heard about MapReduce was when a software architect proposed to stop writing SQL on Oracle Database and replace it with MapReduce processing. Because the project had to deal with a huge amount of data in a short time, and because they had enough budget to buy as many cores as they needed, they wanted the scalability of parallel distributed processing.

The architect explained how you can code filters and aggregations in Map & Reduce functions and then distribute the work over hundreds of CPU cores. Of course, it's very interesting, but it was not actually new. I was doing this for years on Oracle with Parallel Query. And not only filters and aggregations, but joins as well - and without having to rewrite the SQL statements.

I don't know if MapReduce is dead, but for 20 years we have been able to just flip a switch (ALTER TABLE ... PARALLEL ...) and get scalability through parallel processing - provided that we understand how it works.

Reading a parallel query execution plan is not easy. In this post, I'll just show the basics. If you need to go further, you should have a look at some Randolf Geist presentations and read his Understanding Parallel Execution article. My goal is not to go very deep, but only to show that it is not that complex.

I'll explain how Parallel query works by showing an execution plan for a simple join between DEPT and EMP tables where I want to read EMP in parallel - and distribute the join operation as well.
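
For reference, a minimal sketch of that setup on the standard SCOTT schema could look like the statements below; the COUNT(*) aggregate is my assumption, since the only things stated above are that EMP has a parallel degree of 4 and that the result is grouped by DNAME and JOB.

-- enable parallel query on EMP (degree 4), then run the join + group by
alter table emp parallel 4;

select d.dname, e.job, count(*)
from   dept d, emp e
where  e.deptno = d.deptno
group  by d.dname, e.job;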

For the fun of it, and maybe because it's easier to read the first time, I've done the execution plan on an Oracle 7.3.3 database (1997):

 

[Image: CapturePQ733.PNG - parallel execution plan from Oracle 7.3.3]

 

Let's start at the end. I want to read the EMP table with several processes (4 processes, because I've set the parallel degree to 4 on table EMP). The table is not partitioned. It is a heap table where rows are scattered across the segment without any specific clustering, so each process will work on an arbitrary range of blocks, and this is why you see an internal query filtering on ROWID between :1 and :2. My session process, which is known as the 'coordinator' and which will be represented in green below, has divided the range of rowids (it's a full table scan, which reads all blocks from the start of the segment up to the high water mark) and has mandated 4 'producer' processes to do the full scan on their part. Those producers are represented in dark blue below.

But then there is a join to do. The coordinator could collect all the rows from the 'producer' processes and do the join itself, but that is expensive and not scalable. We want the join to be distributed as well. Each producer process could read the DEPT table and do the join, which is fine only if it is a small table. But anyway, we don't want the DEPT table to be read in parallel because we have not set a parallel degree on it. So the DEPT table will be read by only one process: my session process, which does all the non-parallel (aka serial) things in addition to its 'coordinator' role.

Then we have a new set of 4 processes that will do the hash join. They need some rows from DEPT and some rows from EMP. They are the 'consumer' processes that will consume rows from the 'producers', and they are represented in pink below. And they don't receive those rows randomly: because it is a join, each 'consumer' process must have the pairs of rows that match on the join columns. In the plan above, you see an internal query on internal 'table queue' names. The parallel full scan on EMP distributes its rows: it's a PARALLEL_TO_PARALLEL distribution, the parallel producers sending their rows to parallel consumers. The serial full scan on DEPT distributes its rows as well: it's a PARALLEL_FROM_SERIAL distribution, the parallel consumers receiving their rows from the serial coordinator process. The key for both distributions is given by a hash function on the join column DEPTNO, so that rows are distributed across the 4 consumer processes while rows with the same DEPTNO stay in the same process.

We have a group by operation that will be done in parallel as well. But the processes that do the join on DEPTNO cannot do the group by, which is on other columns (DNAME, JOB). So we have to distribute the rows again, but this time the distribution key is on the DNAME and JOB columns. So the join consumer processes are also producers for the group by operation. And we will have a new set of consumer processes that will do the group by, in light blue below. That distribution is a PARALLEL_TO_PARALLEL as it distributes from 4 producers arranged by (DEPTNO) to 4 consumers arranged by (DNAME, JOB).

At the end only one process receives the result and sends it to the client. It's the coordinator which is 'serial'. So it's a PARALLEL_TO_SERIAL distribution.

Now let's finish with my Oracle 7.3.3 PLAN_TABLE and upgrade to 12c which can show more detailed and more colorful execution plans. See here on how to get it.

I've added some color boxes to show the four parallel distributions that I've detailed above:

  • :TQ10001 Parallel full scan of EMP distributing its rows to the consumer processes doing the join.
  • :TQ10000 Serial full scan of DEPT distributing its rows to the same processes, with the same hash function on the join column.
  • :TQ10002 The join consumers receiving both, then becoming the producers that send rows to the consumer processes doing the group by.
  • :TQ10003 Those consumer processes doing the group by and sending the rows to the coordinator for the final result.

 

[Image: Capture12cPQ3.PNG - 12c parallel execution plan with the four table queue distributions highlighted]

So what is different here?

First, we are in 12c and the optimizer may choose to broadcast all the rows from DEPT instead of using the hash distribution. It's the new HYBRID HASH distribution. That decision is made when there are very few rows, and this is why they are counted by the STATISTICS COLLECTOR.

We don't see the predicate on rowid ranges, but the BLOCK ITERATOR is there to show that each process reads its range of blocks.

And an important point is illustrated here.

Intra-operation parallelism can have a high degree (here I've set it to 4, meaning that each parallel operation can be distributed among 4 processes). But inter-operation parallelism is limited to one set of producers sending rows to one set of consumers. We cannot have two consumer operations at the same time. This is why :TQ10001 and :TQ10003 have the same color: it's the same set of processes that acts as the EMP producer and then, when finished, is reused as the GROUP BY consumer.

And there are additional limitations when the coordinator is also involved in a serial operation. For those reasons, in a parallel query plan, some non-blocking operations (those that can send rows above on the fly as they receive rows from below) have to buffer the rows before continuing. Here you see the BUFFER SORT (which buffers but doesn't sort - the name is misleading) which will keep all the rows from DEPT in memory (or tempfiles when it's big).

Besides the plan, SQL Monitoring shows the activity from ASH and the time spent in each parallel process:

 

[Image: Capture12cPQ2.PNG - SQL Monitoring report showing the activity of each parallel process]

 

My parallel degree was 4, so I had 9 processes working on my query: 1 coordinator and two sets of 4 processes. The coordinator started by distributing the work plan to the other processes, then had to read DEPT and distribute its rows, and when that was done it started to receive the result and send it to the client. The blue set of processes started by reading EMP and distributing its rows, and when that was complete, processed the group by. The red set of processes did the join. The goal is to have the DB time distributed across all the processes running in parallel, so that the response time is equal to the longest one instead of the total. Here, it's the coordinator which has taken 18 milliseconds. The query duration was 15 milliseconds:

 

[Image: CapturePQResp.PNG - query response time]

 

This is the point of parallel processing: we can complete a 32 ms workload in only 15 ms, because we had several CPUs running at the same time. Of course, we need enough resources (CPU, I/O and temp space). It's not new. We don't have to define complex MapReduce functions; we just use plain old SQL and set a parallel degree. You can use all the cores in your server. You can use all the servers in your cluster. If you're I/O bound on the parallel full scans, you can even use your Exadata storage cells to offload some work. And in the near future the CPU processing will be even more efficient, thanks to in-memory columnar storage.
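
As an aside on the SQL Monitoring report shown above: this kind of report can typically be retrieved with the DBMS_SQLTUNE package (the Tuning Pack must be licensed). A minimal sketch, where the sql_id value is a placeholder for your own statement:

-- text version of the SQL Monitoring report for one statement
select dbms_sqltune.report_sql_monitor(
         sql_id => 'your_sql_id_here',   -- placeholder: the SQL_ID of the parallel query
         type   => 'TEXT') as report
from dual;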

Trusting Privileged Users, DBMS_SQLHASH, and Three Misconceptions about Encryption

Clients often contact Integrigy requesting assistance to protect their sensitive data. Frequently these are requests for assistance to locate and then encrypt sensitive data. While encryption  offers protection for sensitive data, it by no means solves all security problems. How to protect sensitive data (and how to verify the trust of privileged users such as database administrators with sensitive data) requires more than just encryption.

The Oracle Database Security Guide (a great read for anyone interested in Oracle database security) makes three key points in Chapter Eight about encryption:

  1. Encryption does not solve access control problems - A user who has privileges to access data within the database has no more and no fewer privileges as a result of encryption
  2. Encryption does not protect against a malicious database administrator - If untrustworthy users have significant privileges, then they can pose multiple threats to an organization, some of them far more significant than viewing unencrypted credit card numbers
  3. Encrypting everything does not make data secure - Data must still be available when needed, and backups and DR solutions must still be considered. Moreover, encrypting all data will significantly affect performance.
DBMS_SQLHASH

Besides encryption, one of the security tools that Oracle provides is the DBMS_SQLHASH package. Hash values are similar to data fingerprints and can be used to validate if data has been changed (referred to as data integrity). Hashing is different from encryption and it is important to know the difference. If you need to know more about hashing, see the reference section below.  

The DBMS_SQLHASH package has been delivered since Oracle 10g. It provides an interface to generate the hash value of the result set returned by a SQL query and supports several industry-standard hashing algorithms, including SHA-1.

DBMS_SQLHASH.GETHASH(
   sqltext     IN VARCHAR2,
   digest_type IN BINARY_INTEGER,
   chunk_size  IN NUMBER DEFAULT 134217728)

 

sqltext     - The SQL statement whose result is hashed
digest_type - Hash algorithm used: HASH_MD4 (1), HASH_MD5 (2) or HASH_SH1 (3)
chunk_size  - Size of the result chunk when getting the hash

When the result set size is large, the GETHASH function will break it into chunks having a size equal to chunk_size. It will generate the hash for each chunk and then use hash chaining to calculate the final hash. The default chunk_size is 128 MB.

 

How Can Auditors use DBMS_SQLHASH

One use case for DBMS_SQLHASH is to help auditors trust-but-verify the actions of privileged users such as database administrators. By hashing key tables an auditor can quickly determine if the database administrator has made changes – either authorized or unauthorized. An auditor can do this by recording hashes at the start of an audit period for comparison to hashes at the end of the period. If the hashes at the end of the audit period match the hashes at the beginning of the period, no changes have been made. If there are a large number of databases and/or tables to audit, this approach is a very beneficial means of identifying what requires additional review – assuming sufficient logging has been configured to capture the details of the changes.
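
As a hedged sketch of that workflow: assuming EXECUTE has been granted on SYS.DBMS_SQLHASH and that a hypothetical AUDIT_HASHES table (snapshot_date DATE, object_name VARCHAR2(30), hash_value RAW(64)) has been created to hold the baselines, the start-of-period snapshot could look like this:

-- record a baseline hash of DBA_SYS_PRIVS; AUDIT_HASHES is an assumed helper table
DECLARE
  l_hash RAW(64);
BEGIN
  l_hash := SYS.DBMS_SQLHASH.GETHASH(
              sqltext     => 'SELECT * FROM SYS.DBA_SYS_PRIVS ORDER BY GRANTEE, PRIVILEGE',
              digest_type => 3);   -- 3 = HASH_SH1
  INSERT INTO audit_hashes (snapshot_date, object_name, hash_value)
  VALUES (SYSDATE, 'DBA_SYS_PRIVS', l_hash);
  COMMIT;
END;
/

At the end of the audit period, the same GETHASH call is run again and the result compared with the stored value; a mismatch flags the view for closer review.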

For example, to determine if there have been changes to Oracle database users and their associated privileges over a period of time, such as granting access to sensitive data, an auditor can hash the following Dictionary tables:

  • SYS.DBA_USERS
  • SYS.DBA_ROLES
  • SYS.DBA_TAB_PRIVS
  • SYS.DBA_SYS_PRIVS
  • SYS.DBA_ROLE_PRIVS

Examples

Note: to call the SYS.DBMS_SQLHASH package, the user will need execute rights granted from SYS.

Control: SYS.DBA_USERS
SQL: SELECT SYS.DBMS_SQLHASH.GETHASH('SELECT * FROM SYS.DBA_USERS ORDER BY USER_ID', 3) sh1_dba_user_hash FROM DUAL;
Sample result (sh1_dba_user_hash): 7BD61E22E35FA2F95035E6A794F5B8CF0E37FDF6

Control: SYS.DBA_ROLES
SQL: SELECT SYS.DBMS_SQLHASH.GETHASH('SELECT * FROM SYS.DBA_ROLES ORDER BY ROLE', 3) sh1_dba_roles_hash FROM DUAL;
Sample result (sh1_dba_roles_hash): C80D69048D613E926E95AF77B627D9B5D6CB20C8

Control: SYS.DBA_TAB_PRIVS
SQL: SELECT SYS.DBMS_SQLHASH.GETHASH('SELECT * FROM SYS.DBA_TAB_PRIVS ORDER BY OWNER,TABLE_NAME', 3) sh1_dba_tab_privs_hash FROM DUAL;
Sample result (sh1_dba_tab_privs_hash): 53FBDBDBF95186400A4DEEE611F51CD0B1E998DF

Control: SYS.DBA_SYS_PRIVS
SQL: SELECT SYS.DBMS_SQLHASH.GETHASH('SELECT * FROM SYS.DBA_SYS_PRIVS ORDER BY GRANTEE, PRIVILEGE', 3) sh1_dba_sys_privs_hash FROM DUAL;
Sample result (sh1_dba_sys_privs_hash): A27E8C71AD0CAEFB94AFEAB5DB108871F09BC281

Control: SYS.DBA_ROLE_PRIVS
SQL: SELECT SYS.DBMS_SQLHASH.GETHASH('SELECT * FROM SYS.DBA_ROLE_PRIVS ORDER BY GRANTEE, GRANTED_ROLE', 3) sh1_dba_role_privs_hash FROM DUAL;
Sample result (sh1_dba_role_privs_hash): 5715D1B2C2A775D579B36DEBD2C2F1F608762AEC

 

Key Point

The ORDER BY clause will change the hash value. Oracle Support Note 1569256.1 explains this in more detail. To guarantee the same hash for a SQL statement issued at different times, the same ordering of the data must be used.

If you have questions or are interested in Integrigy's hashing methodology for Oracle and the Oracle E-Business Suite, please contact us at info@integrigy.com.

References
Tags: Auditing, Security Strategy and Standards, Sensitive Data, Oracle Database, Auditor
Categories: APPS Blogs, Security Blogs

Study reveals that comprehensive database administration is needed

Chris Foot - Thu, 2014-06-26 02:13

A huge part of securing data is knowing where the information is being stored and how it's shared among professionals. Ideally, database experts should be working 24/7, 365 days a year to monitor all server activity and contents. 

Grievous consequences
According to PC Magazine, the Montana Department of Public Health and Human Services recently sustained a data breach in which the personal information of 1.3 million people was exposed. Much of the data consisted of:

  • Names
  • Social Security numbers
  • Treatment history
  • Health statuses
  • Insurance

"Out of an abundance of caution, we are notifying those whose personal information could have been on the server," said DPHHS Director Richard Opper, as quoted by the source. 

A lack of understanding 
The problem lies in the jargon used by Opper. "Could have been" implies that the DPHHS has no way of knowing who exactly was affected by the attack. Although questionable activity was identified on May 15 – with a subsequent investigation being conducted seven days later – the breach could have been prevented if enough clarity regarding the system existed. 

Ponemon Institute recently conducted a survey of 1,587 global IT and IT security practitioners, which discovered that a mere 16 percent of respondents know where sensitive structured data is held. Even fewer study participants (7 percent) definitively know where unstructured information is located. 

Not taking appropriate measures 
After asking respondents which protective protocols were poorly executed, the Ponemon Institute discovered that: 

  • Almost three-fourths (72 percent) failed to adequately oversee intelligence sharing
  • Approximately 63 percent were unsuccessful when assigning and refusing access permissions to staff
  • Just under 64 percent inadequately implemented database policy algorithms and application enhancements

Constant surveillance
Many enterprises recognize the danger of neglecting to monitor server contents and access. Having IT personnel drop in every so often to scrutinize the system isn't enough to deter assiduous cybercriminals. A company should dedicate an entire team of database administration professionals to giving servers the attention they require.

As far as what IT professionals needed more of, 76 percent of Ponemon respondents identified real-time monitoring as a critical asset for them to possess. A focus on automation was realized across the board, with survey participants requiring data discovery and protocol workflow to be proactively conducted. 

Intelligence diagnostics, thorough vision of all database assets and integrated protective analysis were also cited as key enterprise needs. 

Knowing where data is stored, how it's transferred between professionals, who has access to an environment and the contents of encrypted information requires the expertise of database administration services. A team of professionals focused solely on monitoring all server activity is imperative in a world rife with cybercriminals. 

The post Study reveals that comprehensive database administration is needed appeared first on Remote DBA Experts.

The Connected Digital Experience

WebCenter Team - Wed, 2014-06-25 12:30
by Dave Gray, Entrepreneur, Author & Consultant
Think back ten or twenty years. Do you remember the days when you would go into work because you needed access to technology that you didn’t have at home? Maybe you went in to work to use the computers and software to make a flyer for the family picnic, or you went in to use the copy machine or the laser printer to print the flyers.
It used to be that our technology at home was so primitive that we would need to go into work in order to access the more advanced tools. But today, that dynamic has flipped.
What's happened is that regular people like you and me have adopted the cutting edge technologies faster than our companies have. We are using Facebook and Twitter to keep up with friends, to organize our social lives, to share information. Devices like the iPad, the iPhone, and other mobile and digital devices have gotten cheap and good enough that regular, ordinary people can afford them, and we the people have adopted them way faster than organizations have been able to keep up. 
Today, we go in to work and say things like, "I can do this at home. Why can't I do this at work? I have Google Docs. I can fire all this stuff up. I can send a message to my whole social network on Facebook and it’s really easy. Why can't I do that at work?"
Thanks to all these social and mobile technologies, customers are now able to organize and share information in new ways. For example, before you go into a restaurant you can read a bunch of reviews. You can sort through and find the best restaurant within five miles, and so on.
We have even seen revolutionary movements like the Arab Spring, where people are using these new tools to connect in networks and self organize in ways that are completely disruptive, not only to companies but even the nation-states that used to be able to control their populations.
This is a real shift in the balance of power, and it’s creating a new kind of marketplace that is very volatile, uncertain, and complex. 
This is the marketplace today. We see a lot of startups these days, coming seemingly out of nowhere, and they are rapidly disrupting traditional forms of business.
Imagine being Barnes & Noble or any traditional booksellers today. Imagine what it feels like to TV networks like NBC or CBS. There used to be only three choices for which channel to watch. Four if you count public television. Now there are thousands. Imagine you were a record company selling albums. Look what has happened to that market.
Yesterday it was bookstores and media companies. Today it’s taxi drivers whose business model is being completely disrupted by companies like Uber and Lyft, who use digital technologies and peer-to-peer networks to get you better car service, faster and cheaper than taxis can.
It’s happening to airlines, to hotels, to insurance companies, to financial services, to government. There is no industry that will not be touched in this new world. Think about what things like Skype and Google Hangouts are doing to the telecommunications industry.
The tools for organizing, producing, and making things are getting cheaper, and cheaper, and cheaper, so more and more people can use them. This means more startups, more innovation, more disruption, and more volatility in the marketplace.
This creates a challenge for organizations: How can you respond when the market and the world is changing as fast as it's changing today? When things are as complex, uncertain and ambiguous as they are today, how do you adapt? How do you continue to evolve and adapt the way that you offer your products and services so you can stay relevant? 
There's no way to organize in this connected world without becoming a connected company. And the most forward-thinking companies are moving in this direction. 
So what must you do to become a more connected organization? That’s a very big question, and not so easy to answer. But there are some clues. We can learn from what some companies are doing, companies that have grown up and demonstrated success in this environment, that have been able to learn quickly and adapt to rapidly-changing market situations, and have been able to scale successfully while continually adapting.
Different companies have done this in different ways.  But, really, what it comes down to in the largest sense is that a connected company is organized so that the smaller parts of the organization can operate and evolve and experiment and actually adapt to their environment.
Let’s take just one example of a leading-edge organization that’s designed to adapt.
Whole Foods Markets is kind of a nice example because it’s pretty easy to understand. It’s a grocery store. But it's not like most grocery stores where you are going to get the same stuff everywhere you go in the world or everywhere you go in the country. Whole Foods Markets has basically made each store relatively autonomous. In fact each region is relatively autonomous, each store is relatively autonomous, and even each team within the store has autonomy and a degree of freedom with regard to how they run their operation. At each level there is the opportunity to run a business within the larger business.
Whole Foods does this because they want to be able to adapt very specifically to every market they enter.
So if you go into a Whole Foods in Silicon Valley, or New York, or wherever you live, you are going to see a very different set of stuff than I'm going to see here where I live, in St. Louis. I'm going to see stuff that's locally sourced from local farmers and suppliers, and you are going to get stuff that's locally sourced from your community.
Teams at Whole Foods have the ability to self organize and work with customers and adapt to their local environment in a way that you can't really do in many companies.
How do they do this?
Each store is an autonomous profit center made up of about ten self-managed teams, who manage various aspects of the store, like produce, deli and so on.
Each team has control over its own fate. Performance data is available to all the teams, so they can compare their performance against other teams in their store, similar teams in other stores, or against their own team’s historical performance.
Teams also have access to detailed financial data, like product costs, profits per store, and even each other’s compensation and bonus information. They can look up the best-selling items at other stores and compare them to their own. Employees at Whole Foods are so well-informed that the SEC has designated all employees “insiders” for stock trading purposes.
This data transparency both builds trust and fuels a spirit of intense competition between teams and stores, since every team can compare itself with every other team and try to rise in the ranks. Whole Foods has created a platform that makes it possible for the company’s stores and teams to compete with each other so they can tune and improve their performance over time.
At the same time, each team has the autonomy to make local decisions as they see fit to improve their performance. So every Whole Foods store carries a unique mix that is tailored by self-managed teams for that particular location. This strategy allows them to target extremely small locations with highly customized stores. They are starting to open small stores in suburbs and college towns where rents are lower and competition less fierce.
The industry average sales per square foot is about $350, and Whole Foods is one of the top ten retailers in the US, with sales of about $900 per square foot, higher than Best Buy and Zale jewelers [1]. Not bad for a grocery store.
Employees like it too. Whole Foods has made Fortune’s “100 best places to work” list every year since the list was started in 1998.

Whole Foods is just one company, but there are many others like it that are transforming the business landscape. In the words of science-fiction author William Gibson, “The future is already here, it’s just not evenly distributed.”

If you haven’t started connecting your company yet, now would be a good time to start.
_____________________________________________________________________________________________________________________
[1] Ranking U.S. Chains by Retail Sales per Square Foot, RetailSails, 2011.

You can hear more from Dave on the Connected Digital Experience in our Digital Business Thought Leaders webcast tomorrow, June 26 at 10:00am PT - "The Digital Experience: A Connected Company’s Sixth Sense".

AWR thoughts

Jonathan Lewis - Wed, 2014-06-25 08:35

It’s been a week since my last posting - so I thought I’d better contribute something to the community before my name gets lost in the mists of time.

I don’t have an article ready for publication, but some extracts from an AWR report appeared on the OTN database forum a few days ago, and I’ve made a few comments on what we’ve been given so far (with a warning that I might not have time to follow up on any further feedback). I tried to write my comments in a way that modelled the way I scanned (or would have scanned) through the reporting – noting things that caught my attention, listing some of the guesses and assumptions I made as I went along.  I hope it gives some indication of a pattern of thinking when dealing with a previously unseen AWR report.

 

 


Automatic Diagnostics Repository (ADR) in Oracle Database 12c

Tim Hall - Wed, 2014-06-25 08:08

There’s a neat little change to the Automatic Diagnostics Repository (ADR) in Oracle 12c. You can now track DDL operations and some of the messages that would have formerly gone to the alert log and trace files are now written to the debug log. This should thin out some of the crap from the alert log hopefully. Not surprisingly, ADRCI has had a minor tweak so you can report this stuff.

You can see what I wrote about it here:

Of course, the day-to-day usage remains the same, as discussed here:

Cheers

Tim…


Oracle Database: Query to List All Statistics Tables

Pythian Group - Wed, 2014-06-25 08:00

If you were a big fan of manual database upgrade steps, perhaps you would have come across this step many times in your life while reading MOS notes, upgrade guides, etc.

Upgrade Statistics Tables Created by the DBMS_STATS Package
If you created statistics tables using the DBMS_STATS.CREATE_STAT_TABLE procedure, then upgrade these tables by executing the following procedure:
EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE('SYS', 'dictstattab');

In my experience, statistics tables can be created from Oracle RDBMS version 8i onwards, so this step has been part of the database upgrade documents ever since. I also noticed the structure of the statistics table remained the same up to 10gR2, but Oracle modified it marginally in the 11g and 12c versions.

I have been using this single query to list all statistics tables that exist in a database, and it can still be used despite the changes to the table structure.

SQL> select owner, table_name from dba_tab_columns where COLUMN_NAME='STATID' AND DATA_TYPE='VARCHAR2';

Though this is not a critical step, it is required as part of the post-upgrade tasks. Here is a small action plan to run the required command to upgrade all statistics tables.

Connect as SYS database user and run these steps:
SQL> set pages 1000
SQL> set head off
SQL> set feedback off
SQL> spool /home/oracle/stattab_upg.sql
SQL> select 'EXEC DBMS_STATS.UPGRADE_STAT_TABLE('''||owner||''','''||table_name||''');' from dba_tab_columns where COLUMN_NAME='STATID' AND DATA_TYPE='VARCHAR2';
SQL> spool off
SQL> @/home/oracle/stattab_upg.sql
SQL> exit

Categories: DBA Blogs

check jdbc version

Laurent Schneider - Wed, 2014-06-25 05:12

There are 2 versions to check when using jdbc.

The first one is in the name of the file: classes12.zip works with JDK 1.2 and later, ojdbc7.jar works with Java 7 and later.

Even if classes12.zip works fine with Java 8, it is not supported.

Be sure you check the support matrix on the Oracle JDBC FAQ

According to the support note 401934.1, only Oracle JDBC driver 11.2.0.3 (and greater) versions support JDK 1.7.

To check your version of the JDBC Driver, there are two methods.

One is with the jar (or zip) utility.


$ jar -xvf ojdbc7.jar META-INF/MANIFEST.MF
 inflated: META-INF/MANIFEST.MF
$ grep Implementation META-INF/MANIFEST.MF
Implementation-Vendor: Oracle Corporation
Implementation-Title: JDBC
Implementation-Version: 12.1.0.1.0
$ unzip classes12.zip META-INF/MANIFEST.MF
Archive:  classes12.zip
  inflating: META-INF/MANIFEST.MF
$ grep Implementation META-INF/MANIFEST.MF
Implementation-Title:   classes12.jar
Implementation-Version: Oracle JDBC Driver 
  version - "10.2.0.1.0"
Implementation-Vendor:  Oracle Corporation
Implementation-Time:  Jun 22 18:51:56 2005

The last digit is often related to the Java version, so if you have ojdbc6 and use Java 6, you're pretty safe. If you have Java 8, you won't find any ojdbc8 available at the time of writing; a safer bet is to use the latest version and wait for a support note. The latest notes about ojdbc7.jar currently do not show Java 8 certification. Probably we will have to wait for a more recent version of ojdbc7.jar.

Another way to find the version of the driver is to use DatabaseMetaData.getDriverVersion().


public class Metadata {
  public static void main(String argv[]) 
    throws java.sql.SQLException {
    java.sql.DriverManager.registerDriver(
      new oracle.jdbc.driver.OracleDriver());
    System.out.println(
      java.sql.DriverManager.
        getConnection(
"jdbc:oracle:thin:@SRV01.EXAMPLE.COM:1521:DB01", 
          "scott", "tiger").
            getMetaData().getDriverVersion());
  }
}


$ javac -classpath ojdbc6.jar Metadata.java
$ java -classpath ojdbc6.jar:. Metadata
11.2.0.3.0

Conditional uniqueness

Dominic Brooks - Wed, 2014-06-25 02:50

A quick fly through the options for conditional uniqueness.

Requirement #1: I want uniqueness on a column but only under certain conditions.

For example, I have an active flag and I want to make sure there is only one active record for a particular attribute but there can be many inactive rows.

Initial setup:

create table t1
(col1      number       not null
,col2      varchar2(24) not null
,is_active number(1)    not null
,constraint pk_t1 primary key (col1)
,constraint ck_t1_is_active check (is_active in (1,0)));

Solution #1: A unique index on an expression which evaluates to null when the condition is not met.

create unique index i_t1 on t1 (case when is_active = 1 then col2 end);

unique index I_T1 created.

insert into t1 values(1,'SHAGGY',1);

1 rows inserted.

insert into t1 values(2,'SHAGGY',1);

SQL Error: ORA-00001: unique constraint (I_T1) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.

Only one active SHAGGY allowed.
But multiple inactives allowed:

insert into t1 values(2,'SHAGGY',0);

1 rows inserted.

insert into t1 values(3,'SHAGGY',0);

1 rows inserted.

Solution #2: A virtual column with a unique constraint

drop index i_t1;

index I_T1 dropped.

alter table t1 add (vc_col2 varchar2(24) generated always as (case when is_active = 1 then col2 end));

table T1 altered.

alter table t1 add constraint uk_t1 unique (vc_col2);

table T1 altered.

Note that now we have a virtual column we have to be very aware of insert statements with no explicit column list:

insert into t1 values(4,'SCOOBY',1);

SQL Error: ORA-00947: not enough values
00947. 00000 -  "not enough values"

Unless we’re lucky enough to be on 12c and use the INVISIBLE syntax:

alter table t1 add (vc_col2 varchar2(24) invisible generated always as (case when is_active = 1 then col2 end));

But as this example is on 11.2.0.3:

insert into t1 (col1, col2, is_active) values(4,'SCOOBY',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(5,'SCOOBY',1);

SQL Error: ORA-00001: unique constraint (UK_T1) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.

insert into t1 (col1, col2, is_active) values(5,'SCOOBY',0);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(6,'SCOOBY',0);

1 rows inserted.

Requirement #2: Sorry, we forgot to tell you that we insert the new row first and then update the old one to be inactive, so we need a deferred constraint (hmmm!)

In which case, you can’t have deferred uniqueness on an index so the only option is the virtual column.

alter table t1 drop constraint uk_t1;

table T1 altered.

alter table t1 add constraint uk_t1 unique (vc_col2) deferrable initially deferred;

table T1 altered.

insert into t1 (col1, col2, is_active) values(7,'FRED',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(8,'FRED',1);

1 rows inserted.

commit;

SQL Error: ORA-02091: transaction rolled back
ORA-00001: unique constraint (.UK_T1) violated
02091. 00000 -  "transaction rolled back"
*Cause:    Also see error 2092. If the transaction is aborted at a remote
           site then you will only see 2091; if aborted at host then you will
           see 2092 and 2091.
*Action:   Add rollback segment and retry the transaction.

insert into t1 (col1, col2, is_active) values(7,'FRED',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(8,'FRED',1);

1 rows inserted.

update t1 set is_active = 0 where col1 = 7;

1 rows updated.

commit;

committed.

See my previous post on a similar approach for a conditional foreign key.


About User Groups

Floyd Teter - Tue, 2014-06-24 17:37
I'm hanging out in the middle of nowhere this week...Fort Riley, Kansas.  Here to visit my granddaughters.  Which means I'm missing ODTUG's KScope14 conference.  Missed the OAUG/Quest/IOUG Collaborate14 this year as well.  Will also be absent at OAUG's ConnectionPoint14 in Pittsburgh.  Will be missing a few others that are usually on my calendar as well (But I made it to UTOUG Training Days, Alliance14, and the MidAtlantic HEUG conference - will also make it to the ECOAUG later this year).

With all the user conferences missed in 2014, I've had some folks asking if I still believe in Oracle user groups.  The short answer is yes.  The longer answer is yes, but I do believe the user group model needs to change a bit.

Attend a user group conference this year (sorry, Oracle OpenWorld does not count - it is NOT a user group conference).  Look around at the faces.  Other than those working the partner sales booths, the vast majority of those faces will be middle-aged and older.  See, when user groups were first formed, the model was built to appeal to Baby Boomers and Echo Boomers.  And the big thrill was face-to-face networking.  Now that the Baby Boomers and Echo Boomers are riding off into the enterprise technology sunset, the user group model can only flourish by changing the model for those who take our places.

Face-to-face networking is still important, but just doesn't seem to hold the same level of importance for these younger workers.  Easily accessed on-demand education sessions on the web (for free), virtual gatherings on GoogleTalk, facilitating group chats on focused subjects, information in short snippets...simple, quick and virtual channels of information delivery seem to gain more traction with the rising generation than annual, huge national or international conferences when it comes to enterprise apps.

So, yeah, I still believe in user groups.  But, as long as you're asking, I think the model will need changing in order to flourish into the future.

I'm going back to the grandkids now...

WWW-based online education turns 20 this summer

Michael Feldstein - Tue, 2014-06-24 17:01

I’m a little surprised that this hasn’t gotten any press, but Internet-based online education turns 20 this summer. There were previous distance education programs that used networks of one form or another as the medium (e.g. University of Phoenix established its “online campus” in 1989), but the real breakthrough was the use of the world wide web (WWW), effectively creating what people most commonly know as “the Internet”.

To the best of my knowledge (correct me in comments if there are earlier examples), the first accredited school to offer a course over the WWW was the Open University in a pilot Virtual Summer School project in the summer of 1994. The first course was in Cognitive Psychology, offered to 12 students, as described in this paper by Marc Eisenstadt and others involved in the project (the HTML no longer renders):

In August and September 1994, a Virtual Summer School (VSS) for Open University undergraduate course D309 Cognitive Psychology enabled students to attend an experimental version of summer school ‘electronically’, i.e. from their own homes using a computer and a modem. VSS students were able to participate in group discussions, run experiments, obtain one-to-one tuition, listen to lectures, ask questions, participate as subjects in experiments, conduct literature searches, browse original journal publications, work in project teams, undertake statistical analyses, prepare and submit nicely formatted individual or joint written work, prepare plenary session presentations, and even socialize and chit-chat, all without ever leaving their homes. The term ‘Virtual Summer School’ was used to mean that the software packages supplied to students emulate many aspects of a residential summer school, but without requiring physical attendance. As with many other Open University activities, we feel that face-to-face tuition and peer group interaction would still be preferable if it could be achieved. However, there are sometimes circumstances which preclude physical attendance, so we want to provide the best possible alternative. Virtual Summer School was a first step in this direction. This year, it was only an experimental option for a dozen already-excused students, which gave us a low-risk entry in order to assess the viability of the approach.

There is even a concept video put together by the Open University at the end of 1994 that includes excerpts of the VSS course.

And now for your trip down memory lane, I have taken the paper, cleaned up the formatting, and fixed / updated / removed the links that no longer work. The modified paper is below for easier reading:

*************

Virtual Summer School Project, 1994

(source: http://faculty.education.ufl.edu/Melzie/Distance/Virtual%20Summer%20School%20Project)

Background

One of the great strengths of the UK’s Open University is its extensive infrastructure, which provides face-to-face tuition through a network of more than 7000 part-time tutors throughout the UK and Europe. This support network, combined with in-house production of high-quality text and BBC-produced videos, provides students with much more than is commonly implied by the phrase ‘distance teaching’! Moreover, students on many courses must attend residential schools (e.g. a one-week summer school to gain experience conducting Biology experiments), providing an additional layer of support. About 10% of students have genuine difficulty attending such residential schools, and increasingly we have started to think about addressing the needs of students at a greater distance from our base in the UK. This is where the Virtual Summer School comes in.

The Cognitive Psychology Virtual Summer School

In August and September 1994, a Virtual Summer School (VSS) for Open University undergraduate course D309 Cognitive Psychology enabled students to attend an experimental version of summer school ‘electronically’, i.e. from their own homes using a computer and a modem. VSS students were able to participate in group discussions, run experiments, obtain one-to-one tuition, listen to lectures, ask questions, participate as subjects in experiments, conduct literature searches, browse original journal publications, work in project teams, undertake statistical analyses, prepare and submit nicely formatted individual or joint written work, prepare plenary session presentations, and even socialize and chit-chat, all without ever leaving their homes. The term ‘Virtual Summer School’ was used to mean that the software packages supplied to students emulate many aspects of a residential summer school, but without requiring physical attendance. As with many other Open University activities, we feel that face-to-face tuition and peer group interaction would still be preferable if it could be achieved. However, there are sometimes circumstances which preclude physical attendance, so we want to provide the best possible alternative. Virtual Summer School was a first step in this direction. This year, it was only an experimental option for a dozen already-excused students, which gave us a low-risk entry in order to assess the viability of the approach.

Below we describe the technology involved, evaluation studies, and thoughts about the future.

The Technology

Three main categories of technology were required: communications & groupwork tools, support & infrastructure software/hardware, and academic project software.

Communications and Groupwork
  • Email, Usenet newsgroups, live chat lines and low-bandwidth (keyboard) conferencing: this technology was provided by FirstClass v. 2.5 from SoftArc in Toronto, and gave students a nice-looking veneer for many of their day-to-day interactions. A ‘Virtual Campus’ map appeared on their desktops, and folder navigation relied on a ‘room’ metaphor to describe crucial meeting places and bulletin boards.
  • WWW access: NCSA Mosaic 1.0.3 for Macintosh was provided for this purpose [in the days before Netscape was released]. Students had customized Hotlists which pointed them to academically-relevant places (such as Cognitive & Psychological Sciences on The Internet), as well as some fun places.
  • Internet videoconferencing: Using Cornell University’s CU-SeeMe, students with ordinary Macs or Windows PCs (even over dial-up lines from home) were able to watch multiple participants around the world. Video transmission from slightly higher-spec Macs & PCs was used for several Virtual Summer School events, including a Virtual Guest Lecture by Donald A. Norman, formerly Professor of Psychology at the University of California at San Diego (founder of its Cognitive Science Programme), and now an Apple Fellow.
  • Remote presentation software: we used a product called ‘The Virtual Meeting’ (from RTZ in Cupertino), which allowed synchronized slide & movie presentations on remote Macs & PCs distributed across local, wide, or global (including dial-in) networks, displayed images of all remote ‘participants’, and facilitated moderated turn-taking, ‘hand-raising’, interactive whiteboard drawing & question/answer sessions.
  • Mobile telephone support and voice conferencing: every VSS student was supplied with an NEC P100 cellular phone, so that they could use it while their domestic phone was busy with their modem (some day they’ll have ISDN or fibre optic lines, but not this year). Audio discussions were facilitated by group telephone conference calls, run concurrently with CU-SeeMe and other items shown above. Our largest telephone conference involved 17 participants, and worked fine given that basic politeness constraints were obeyed.
  • Remote diagnostic support and groupwork: Timbuktu Pro from Farallon, running over TCP/IP, enabled us to ‘cruise in’ to our students’ screens while chatting to them on their mobile phones, and to help them sort out specific problems. Students could also work in small self-moderated groups this way, connecting as observers to one user’s Macintosh.
Support and infrastructure software/hardware
  • Comms Infrastructure: TCP/IP support was provided by a combination of MacTCP, MacPPP, VersaTerm Telnet Tool on each student’s machine, plus an Annex box at The Open University connecting to a Mac Quadra 950 running a FirstClass Server and 3 Suns running cross-linked CU-SeeMe reflectors.
  • Tutorial Infrastructure: each student was supplied with HyperCard, MoviePlay, and SuperCard 1.7 to run pre-packaged tutorial and demonstration programs, some of which were controlled remotely by us during group presentations. Pre-packaged ‘guided tour’ demos of all the software were also provided (prepared with a combination of MacroMind Director and CameraMan). To help any computer-naive participants ‘bootstrap’ to the point where they could at least send us an email plea for help, we also supplied a short video showing them how to unpack and connect all of their equipment, and how to run some of the demos and FirstClass.
  • Hardware: one of our aims was to foreshadow the day in the near future when we can presuppose that (a) most students will be computer-literate, (b) students will have their own reasonable-specification hardware, (c) bandwidth limitations will not be so severe, and (d) all of our software will be cross-platform (e.g. Mac or Windows). We could only approximate that in 1994, so we supplied each VSS student with a Macintosh LC-II with 8MB of RAM, a 14.4Kbps modem, a StyleWriter-II printer, 13″ colour monitor, mobile phone and extra mobile phone battery. Students were given a conventional video cassette showing how to set up all the equipment (see tutorial infrastructure above).
Academic project software

Our students had four main support packages to help them in their Cognitive Psychology studies:

  • a custom-built ‘Word Presentation Program’, which allowed them to create stimuli for presentation to other students and automatically record data such as reaction times and button presses (they could create a turnkey experiment package for emailing to fellow students, and then have results emailed back; see the sketch after this list);
  • a HyperCard-based statistics package, for analysing their data;
  • MacProlog from Logic Programming Associates in the UK, for writing simple Artificial Intelligence and Cognitive Simulation programs;
  • ClarisWorks, for preparing reports and presentations, reading articles that we emailed to them as attachments, and doing richer data analyses.
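
To give a concrete flavour of the first item, the ‘Word Presentation Program’, here is a minimal sketch in modern Python. This is our own reconstruction for illustration only: the original was a HyperCard-era Macintosh program, and the function names and prompts below are invented assumptions rather than anything taken from the course software. The idea is simply to present each stimulus word, time the response, and collect the results so that they can be sent back to the experimenter (in 1994, by email).

# Minimal sketch (assumed, not the original 1994 program) of a
# word-presentation / reaction-time harness.
import time

def run_trials(words):
    """Present words one at a time and record response latencies."""
    results = []
    for word in words:
        input("Press Enter when ready for the next word...")
        shown_at = time.perf_counter()
        print(word)
        response = input("Type the first related word that comes to mind: ")
        # Latency here is the time from presentation to the completed response.
        latency_ms = (time.perf_counter() - shown_at) * 1000.0
        results.append({"stimulus": word, "response": response,
                        "latency_ms": round(latency_ms, 1)})
    return results

if __name__ == "__main__":
    data = run_trials(["doctor", "bread", "chair"])
    for trial in data:
        print(trial)
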
Timetable and evaluation

Students had a three-week warm-up period to become familiar with their new equipment and run some trial (fun) activities with every piece of software; formal academic activities then took place from August 27th – Sept. 9th, 1994, mostly in the evenings. Thus, the conventional one-week residential summer school was stretched out over two weeks to allow for part-time working. During week one the students concentrated on experimental projects in the area of “Language & Memory” (typically demonstrating inferences that “go beyond the information given”). During week two the students wrote simple AI programs in Prolog that illustrated various aspects of cognitive processing (e.g. simulating children’s arithmetic errors). They were supplied with Paul Mulholland’s version of our own Prolog trace package (see descriptions of our work on Program Visualization) to facilitate their Prolog debugging activities.
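
Since the week-two projects may be unfamiliar, here is a minimal sketch of the kind of cognitive simulation involved. It is written in Python rather than the MacProlog the students actually used, and the particular rule modelled (the classic ‘smaller-from-larger’ subtraction bug, where a child subtracts the smaller digit from the larger in every column and never borrows) is our own illustrative assumption rather than anything taken from the course materials.

# Sketch (assumed, not actual course code): simulate a child who applies the
# smaller-from-larger subtraction bug, taking the absolute difference of the
# digits in each column instead of borrowing.

def buggy_subtract(top, bottom):
    """Column-by-column subtraction exhibiting the smaller-from-larger bug."""
    # Assumes bottom has no more digits than top; enough for a sketch.
    top_digits = [int(d) for d in str(top)]
    bottom_digits = [int(d) for d in str(bottom).rjust(len(str(top)), "0")]
    answer_digits = [abs(t - b) for t, b in zip(top_digits, bottom_digits)]
    return int("".join(str(d) for d in answer_digits))

if __name__ == "__main__":
    for top, bottom in [(542, 389), (701, 234)]:
        print(f"{top} - {bottom}: simulated child answers {buggy_subtract(top, bottom)}"
              f" (correct answer is {top - bottom})")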

A detailed questionnaire was supplied both to the Virtual Summer School students and to conventional summer school students taking the same course. We looked at how students spent their time, which activities were beneficial for them, and many other facets of their Virtual Summer School experience.

[removed reference to Kim Issroff's paper and student interviews, as all links were broken]

The future

The Virtual Summer School finished on 9th September 1994 (following our Virtual Disco on 8th September 1994, incidentally…. we told students about music available on the World Wide Web for private use). What happens next? Here are several issues of importance to us:

  • We must lobby for ever-increasing ‘bandwidth’ [i.e. channel capacity, reflected directly in the amount and quality of full-colour full-screen moving images and quality sound that can be handled]. This is necessary not only for Open University students, but also for the whole of the UK, and indeed for the whole world. As capacity and technology improve, so do public expectation and need [analogous to the way the M25 motorway was full to capacity on the first day it opened: the technology itself helps stimulate demand]. Whatever the current ‘Information SuperHighway’ plans are [just like motorway construction plans], there is a concern that they don’t go far enough.
  • We must RADICALLY improve both (i) the user interfaces and (ii) the underlying layers of communications tools. Even with the excellent software and vendor support that we had at our disposal, all the layers of tools needed (TCP/IP, PPP, Communications Toolbox, etc.) made a veritable house of cards. The layers of tools were (i) non-trivial to configure optimally in the first place (for us, not the students); (ii) non-trivial to mass-install as ‘turnkey’-ready systems for distribution to students; (iii) non-trivial for students to use straight ‘out of the box’ (naturally almost everything in the detailed infrastructure is hidden from the students, but one or two items must of necessity rear their ugly heads, and that gets tricky); and (iv) ‘temperamental’ (students could get interrupted or kicked off when using particular combinations of software). We were fully prepared for (iv), because that’s understandable in the current era of communicating via computers, but (i), (ii), and (iii) were more surprising. [If anyone doubts the nature of these difficulties, I hereby challenge them to use Timbuktu Pro, a wonderful software product, with 4 remotely-sited computer-naive students using TCP/IP over a dial-up PPP connection.] We can do better, and indeed we MUST do better in the future. Many vendors and academic institutions are working on these issues, and they need urgent attention.
  • We must obtain a better understanding of the nature of remote groupwork. Our students worked in groups of size 2, 3, or 4 (depending on various project selection circumstances). Yet even with pre-arranged group discussions by synchronous on-line chat or telephone conference calls, a lot of fast-paced activity would suddenly happen, involving just one student and one tutor. For example, student A might post a project idea to a communal reading area accessible only to fellow project-group students B and C and also tutor T. Tutor T might post a reply with some feedback, and A might read it and react to it before B and C had logged in again. Thus, A and T would have inadvertently created their own ‘shared reality’: a mini-dialogue INTENDED for B and C to participate in as well, yet B and C would get left behind just because of unlucky timing. The end result in this case would be that students A, B, and C would end up doing mostly individual projects, rather than a group project. Tutors could in future ‘hold back’, but this is probably an artificial solution. The ‘shared reality’ between A and T in the above scenario is no different from what would happen if A cornered T in the bar after the day’s activities had finished at a conventional Summer School. However, in that situation T could more easily ensure that B and C were brought up to date the next day. We may ultimately have to settle for project groups of size 2, but not before doing some more studies to try to make larger groups (e.g. size 4) much more cohesive and effective.
  • We need to improve ‘tutor leverage’ (ability to reach and influence more people). Let’s suppose that we have thoroughly researched and developed radical improvements for the three items above (more bandwidth, nice user interfaces with smooth computer/communications infrastructure, happy cohesive workgroups of size 4). It would be a shame if, after all that effort and achievement, each tutor could only deal with, say, 3 groups of 4 students anywhere in the world. The sensory overload for tutors at the existing Virtual Summer School was considerable… many simultaneous conversations and many pieces of software and technology running at once. The 1994 Virtual Summer School was (of necessity) run by a self-selecting group of tutors who were competent in both the subject matter and the technology infrastructure. Less technologically-capable tutors need to be able to deal with larger numbers of students in a comfortable fashion, or Virtual Summer School will remain quite a ‘niche’ activity.

The four areas above (more bandwidth, better computer/comms interfaces, larger workgroups, increased tutor leverage) are active areas of research for us…. stay tuned (and see what we’re now doing in KMi Stadium)!

Who made it work?
  • Marc Eisenstadt: VSS Course Director, Slave Driver, and Fusspot
  • Mike Brayshaw: VSS Tutor & Content Wizard
  • Tony Hasemer: VSS Tutor & FirstClass Wizard
  • Ches Lincoln: VSS Counsellor and FirstClass Guru
  • Simon Masterton: VSS Academic Assistant, Mosaic Webmaster, and Mobile Phone Guru
  • Stuart Watt: VSS Mac Wizard
  • Martin Le Voi: VSS Memory/Stats Advisor & Unix Guru
  • Kim Issroff: VSS Evaluation and Report
  • Richard Ross: VSS Talking Head Guided Tour
  • Donald A. Norman (Apple, Inc.): VSS Virtual Guest Lecturer
  • Blaine Price: Unix & Internet Guru & Catalyst
  • Adam Freeman: Comms & Networking Guru
  • Ian Terrell: Network Infrastructure Wizard
  • Mark L. Miller (Apple, Inc.): Crucial Guidance
  • Christine Peyton (Apple UK): Support-against-all-odds
  • Ortenz Rose: Admin & Sanity Preservation
  • Elaine Sharkey: Warehousing/Shipping Logistics

Update: Changed title and Internet vs. WWW language to avoid post-hoc flunking of Dr. Chuck’s IHTS MOOC.

The post WWW-based online education turns 20 this summer appeared first on e-Literate.

New Pastures

Duncan Davies - Tue, 2014-06-24 16:58

I try to keep the content on here focused on the products and implementation tips; however, I hope you’ll indulge me with one personal post.

After six and a half enjoyable years I have left Succeed Consultancy. I’m leaving behind a lot of talented colleagues and great friends; however, for reasons that I don’t want to bore you with, it’s time to move on.

As of yesterday I’ve started work for Cedar Consulting, one of the largest ‘tier 2’ PeopleSoft consultancies in EMEA.

Cedar have been running – in one form or another – for nearly 20 years and have an impressive list of PeopleSoft implementations, upgrades and support/hosting clients. There are few UK PeopleSoft clients who haven’t engaged Cedar at one point or another. As well as their large team of UK consultants, they have a number of offices spread globally and a solution centre in India.

Importantly for me, Cedar also have a strong focus on Fusion and already have both a live Fusion client under their belt and the UKOUG Fusion Partner of the Year gold award.

This career move also means that the branding of the PeopleSoft and Fusion Weeklies will change. I’d like to thank Succeed for sponsoring the newsletters up to this point, and I’m grateful to Cedar for agreeing to sponsor them going forward. You should notice a rebrand in this week’s editions.