Database Cloud Service — a new multi-tenant database-as-a-service offering that will let customers migrate their existing apps and databases to the cloud “with the push of a button,” said Ellison. Data will be compressed ten to one and encrypted for secure and efficient transfer to the cloud, with no reprogramming. “Every single Oracle feature — even our latest high-speed in-memory processing — is included in the Oracle Cloud Database Service,” Ellison said. “Hundreds of thousands of customers and ISVs have been waiting for exactly this. Database is our largest software business and database will be our largest cloud service business.”
If you are attending Oracle OpenWorld this year and you are interested in the cloud (who isn’t nowadays?!), here are a few sessions focused on Database as a Service. I will be attending a few of them too.
- Oracle OpenWorld 2012 – Latest News (Oct02)
- If You Have Oracle Database You Have Oracle Application Express; Use it or Lose it
- Oracle OpenWorld 2012 – Latest News (Oct01)
I will attend this year’s event by invitation from the Oracle ACE Program. Prior to the start of the conference, I will be attending a two-day product briefing with product teams at Oracle HQ. It’s like a mini OpenWorld, but only for Oracle ACE Directors.
During the briefing, Oracle product managers talk about the latest and greatest product news. They also share super secret information that has not yet been made public. I will report this information to you here on awads.net and via Twitter, unless of course it is protected by a non-disclosure agreement.
See you there!
- How Big Will Oracle OpenWorld Be This Year?
- [Video] Oracle ACE Director Product Briefing at Oracle HQ
- Oracle ACE Director Product Briefing
A recent addition to my Oracle PL/SQL library is the book Oracle PL/SQL Performance Tuning Tips & Techniques by Michael Rosenblum and Dr. Paul Dorsey.
I agree with Steven Feuerstein’s review that “if you write PL/SQL or are responsible for tuning the PL/SQL code written by someone else, this book will give you a broader, deeper set of tools with which to achieve PL/SQL success”.
In the foreword of the book, Bryn Llewellyn writes:
The database module should be exposed by a PL/SQL API. And the details of the names and structures of the tables, and the SQL that manipulates them, should be securely hidden from the application server module. This paradigm is sometimes known as “thick database.” It sets the context for the discussion of when to use SQL and when to use PL/SQL. The only kind of SQL statement that the application server may issue is a PL/SQL anonymous block that invokes one of the API’s subprograms.
I subscribe to the thick database paradigm. The implementation details of how a transaction is processed and where the data is stored in the database should be hidden behind PL/SQL APIs. Java developers do not have to know how the data is manipulated or which tables the data is persisted in; they just have to call the API.
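To make the paradigm concrete, here is a minimal sketch of what such an API might look like. The package, table, and column names (`order_api`, `orders`, `orders_seq`) are my own illustrative inventions, not taken from the book:

```sql
-- A hypothetical transactional API. The application server never touches
-- the ORDERS table directly; it only calls order_api.place_order.
CREATE OR REPLACE PACKAGE order_api AS
  PROCEDURE place_order(
    p_customer_id IN  orders.customer_id%TYPE,
    p_amount      IN  orders.amount%TYPE,
    p_order_id    OUT orders.order_id%TYPE);
END order_api;
/

CREATE OR REPLACE PACKAGE BODY order_api AS
  PROCEDURE place_order(
    p_customer_id IN  orders.customer_id%TYPE,
    p_amount      IN  orders.amount%TYPE,
    p_order_id    OUT orders.order_id%TYPE)
  IS
  BEGIN
    -- Table names, sequence, and SQL stay hidden inside the package body.
    INSERT INTO orders (order_id, customer_id, amount, created_on)
    VALUES (orders_seq.NEXTVAL, p_customer_id, p_amount, SYSDATE)
    RETURNING order_id INTO p_order_id;
  END place_order;
END order_api;
/
```

The only SQL the application server then issues is an anonymous block such as `BEGIN order_api.place_order(:cust, :amt, :ord); END;`, exactly as Bryn describes.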
However, like Bryn, I have seen many projects where all calls to the database are implemented as SQL statements that directly manipulate the application’s database tables. The manipulation is usually done via an ORM framework such as Hibernate.
In the book, the authors share a particularly bad example of this design. A single request from a client machine generated 60,000 round-trips from the application server to the database. They explain the reason behind this large number:
Java developers who think of the database as nothing more than a place to store persistent copies of their classes use Getters and Setters to retrieve and/or update individual attributes of objects. This type of development can generate a round-trip for every attribute of every object in the database. This means that inserting a row into a table with 100 columns results in a single INSERT followed by 99 UPDATE statements. Retrieving this record from the database then requires 100 independent queries. In the application server.
Wow! That’s bad. Multiply this by 100 concurrent requests and users will start complaining about a “slow database”. NoSQL to the rescue!
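The difference between the two approaches is easy to see side by side. The table and bind-variable names below are illustrative, not from the book:

```sql
-- Anti-pattern: one round-trip per attribute, generated by per-attribute
-- setters (shown abbreviated; imagine 97 more single-column UPDATEs).
INSERT INTO employees (employee_id) VALUES (:id);
UPDATE employees SET first_name = :first_name WHERE employee_id = :id;
UPDATE employees SET last_name  = :last_name  WHERE employee_id = :id;

-- One round-trip: supply every column in a single INSERT statement.
INSERT INTO employees (employee_id, first_name, last_name, hire_date)
VALUES (:id, :first_name, :last_name, :hire_date);
```

A PL/SQL API as described above makes the second form the only form possible, because the ORM never sees the table at all.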
- 15 Ways Oracle Can Make Java Better (and Improve Its Stance with Developers)
- The Smallest Database Management System Is Just 359 Kilobytes
- When You Should Consider Flash for Database Storage
Integrigy: The UTL_FILE database package is used to read from and write to operating system directories and files. By default, PUBLIC is granted execute permission on UTL_FILE. Therefore, any database account may read from and write to files in the directories specified in the UTL_FILE_DIR database initialization parameter [...] Security considerations with UTL_FILE can be mitigated by removing all directories from UTL_FILE_DIR and using the Directory functionality instead.
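In practice the mitigation looks something like the following, run as a privileged user. The directory object name, path, and grantee (`app_log_dir`, `/u01/app/logs`, `app_user`) are placeholders for your own environment:

```sql
-- Stop every account from using UTL_FILE by default.
REVOKE EXECUTE ON UTL_FILE FROM PUBLIC;

-- Clear UTL_FILE_DIR (takes effect after the instance is restarted).
ALTER SYSTEM SET utl_file_dir = '' SCOPE = SPFILE;

-- Use a directory object instead, and grant it only where needed.
CREATE OR REPLACE DIRECTORY app_log_dir AS '/u01/app/logs';
GRANT READ, WRITE ON DIRECTORY app_log_dir TO app_user;
GRANT EXECUTE ON UTL_FILE TO app_user;
```

Code that used a raw path with UTL_FILE.FOPEN then passes the directory object name ('APP_LOG_DIR') instead, so file access is governed by explicit grants rather than a database-wide parameter.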
- Oracle Database Listener Security Guide
- Oracle Strengthens Security Offerings
- New Oracle Security Videos and Blog
Steven Feuerstein was dismayed when he found, in a PL/SQL procedure, a cursor FOR loop that contained an INSERT and an UPDATE statement. That is a classic anti-pattern, a general pattern of coding that should be avoided. It should be avoided because the inserts and updates are changing the tables on a row-by-row basis, which maximizes the number of context switches (between SQL and PL/SQL) and consequently greatly slows the performance of the code. Fortunately, this classic anti-pattern has a classic, well-defined solution: use BULK COLLECT and FORALL to switch from row-by-row processing to bulk processing.
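Here is a minimal before-and-after sketch of that fix. The table and column names (`employees`, `dept_id`, `salary`) are illustrative; a real single-table case like this could of course be one plain UPDATE, but the pattern applies whenever the loop body must stay in PL/SQL:

```sql
-- Anti-pattern: one context switch per row fetched and per row updated.
BEGIN
  FOR r IN (SELECT employee_id FROM employees WHERE dept_id = 10) LOOP
    UPDATE employees
       SET salary = salary * 1.1
     WHERE employee_id = r.employee_id;
  END LOOP;
END;
/

-- Bulk version: one bulk fetch, then one bulk-bound DML execution.
DECLARE
  TYPE t_ids IS TABLE OF employees.employee_id%TYPE;
  l_ids t_ids;
BEGIN
  SELECT employee_id
    BULK COLLECT INTO l_ids
    FROM employees
   WHERE dept_id = 10;

  FORALL i IN 1 .. l_ids.COUNT
    UPDATE employees
       SET salary = salary * 1.1
     WHERE employee_id = l_ids(i);
END;
/
```

BULK COLLECT fetches the whole result set in a single switch into SQL, and FORALL binds the entire collection to one UPDATE, so the per-row SQL/PL-SQL ping-pong disappears.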
- Row versus Set Processing, Surprise!
- NO_DATA_FOUND Gotcha
- 5 Recommendations About Cursor FOR Loops in Oracle PL/SQL