Karl Reitschuster


The Oracle Active Session History (ASH) - a real treasure trove

Sat, 2018-08-11 11:02
When Oracle introduced the Active Session History (ASH), a new gate of tuning and monitoring options opened. Yes, we had Statspack, extended tracing and so on, but all these tools were missing a time(line) dimension. Statspack reports, and later AWR reports, are flattened aggregations of numbers that have to be interpreted carefully, because the details are flushed away: you only ever look at an aggregated view of the performance metrics.
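To get a feel for it (a minimal sketch, not from the post): ASH samples every active session roughly once per second into v$active_session_history, so a timeline can be reconstructed instead of looking at a flat aggregate.

-- Active sessions per minute over the last hour, split by session state.
SELECT TO_CHAR(sample_time, 'HH24:MI') AS sample_minute,
       session_state,
       COUNT(*)                        AS active_samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 1/24
GROUP  BY TO_CHAR(sample_time, 'HH24:MI'), session_state
ORDER  BY 1;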

Yet another workaround for missing AVG() function for the Oracle interval type

Sun, 2018-08-05 10:07

Isn't it similar to your experience with Oracle SQL? There are a few places in the Oracle SQL implementation where you wonder: why didn't they think it through to the end, or did they only partially implement a feature?

So you are not the only one suffering from this. It does not mean Oracle SQL is inconsistent; it is indeed a mighty, mature tool for handling dates and structures in the Oracle database.

Coming to the point:

There are max()/min() functions defined for the Oracle interval data types, but no average avg() function. This means that if you want consistent output, with the same data type and precision, you have to work around it a bit and reach into the SQL tool set to emulate the missing functionality.
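One way to emulate it (a sketch, assuming a hypothetical table RUNS with an INTERVAL DAY TO SECOND column ELAPSED): convert the interval to seconds, average the numbers, and convert back with NUMTODSINTERVAL, keeping data type and precision consistent.

-- AVG() is not defined for intervals, so average over seconds and convert back.
SELECT NUMTODSINTERVAL(
         AVG( EXTRACT(DAY    FROM elapsed) * 86400
            + EXTRACT(HOUR   FROM elapsed) * 3600
            + EXTRACT(MINUTE FROM elapsed) * 60
            + EXTRACT(SECOND FROM elapsed) ),
         'SECOND') AS avg_elapsed
FROM   runs;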

Dramatic differences in In-Memory scanning performance on range queries

Tue, 2015-02-17 11:07
Given the following two identical tables, replicated in memory with the Oracle In-Memory option (one table created out of the other), the same SQL is run against both. Each table contains 24M rows.

Same structure ...
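A setup for such a test could look like this (a minimal sketch with assumed table names t_orig and t_copy, not the post's actual script):

CREATE TABLE t_copy INMEMORY AS SELECT * FROM t_orig;  -- the copy, created out of the original
ALTER  TABLE t_orig INMEMORY;                          -- replicate the original in memory too
-- With the default PRIORITY NONE, population happens on the first full scan.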

Why the In-Memory Column Store is not used (II)

Mon, 2014-10-06 04:10
Now, after some research, I detected one simple rule for provoking In-Memory scans:

Why the In-Memory Column Store is not used (I)

Wed, 2014-09-24 11:17

yuuupie!

Now, finally, after a long period of waiting, watching presentations and reading blogs, I was able to do In-Memory databasing on my own. Having sampled some theoretical background on how a pure In-Memory DB works with SAP HANA, my expectations of the Oracle In-Memory Option were high, also because Oracle promised it would work without code changes, out of the box.

The setup

The Oracle database is located on a Sun Solaris machine with enough memory and 32 Xeon cores. The In-Memory pool is sized at 64 GB, compared to about 4 GB for the rest of the SGA (buffer cache, shared pool, ...), with 300 GB of data.
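The In-Memory pool is configured via the inmemory_size parameter (a sketch, with the value taken from the setup above; the parameter is static, so an instance restart is needed):

ALTER SYSTEM SET inmemory_size = 64G SCOPE=SPFILE;
-- restart the instance for the new pool size to take effect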

Oracle 12c - In-Memory Option Resources

Thu, 2014-08-21 00:17

Hi folks,

Introduced as an option, Oracle's In-Memory Option will change the world of databasing, much like SAP HANA does. The release has been out since July, but the search for resources and documentation has been meager.

Here are some useful links I found.

First the Home of Oracle In-Memory

The empire strikes back!

Wed, 2014-06-18 06:31

About 3 years ago SAP started to create a new database engine, SAP HANA, with a pure In-Memory concept. SAP aggressively moved its new database into its software stack. The database was not used as a cache but for running enterprise applications, satisfying both OLTP and OLAP demands on the same database.

When Oracle announced the new Oracle 12c in late 2012, the 'c' stood for cloud-based computing, meaning the simplification of creating several database instances under the hood of a container parent database. For the end user this wasn't a visible benefit. It seemed Oracle ignored SAP HANA. But even though you cannot compare the number of installed databases for SAP HANA and Oracle, HANA made an impact. Something new and very visible to the end user arose.

Now, about half a year after Oracle's In-Memory Option announcement, the European launch event took place on Tuesday this week. In the Radisson Blu hotel, a very exciting and comfortable place for it, the conference room was more crowded with Oracle followers than the soccer arenas of this year's World Cup.



The event was well organized and featured highly professional speakers.

What stuck in my mind was ...

Maria Colgan introduced the usage and environment of the Oracle In-Memory Option in more detail. She did it in such a clear and compact form that I was really impressed. The live demos were amazing, too.

I was also impressed by Dr. Maaike Limper's session. She works as a scientist at CERN and tested the In-Memory Option to get faster analysis of particle collisions, using a data model of particle-typed tables with hundreds of parameters per particle and an immense number of rows. With the In-Memory Option, she said, it was possible to play with the data, drill down, and possibly find something new in the detected data patterns of the sampled particle sensor data.

Finally, Dr. Dietmar Neugebauer held a session along the lines of 'Is the DWH dead now?', which clearly showed that the DWH is not only about analytic queries (and hence superfluous) but also serves to consolidate and validate data from the different data sources and systems of the whole company. So the DWH is not dead with the introduction of the new In-Memory Option, although some 1:1 replication of operational data may become obsolete.

At the end of the event, everybody knew and felt that something had happened in the database world, something that will be visible to all end users and will have a tremendous effect on system landscapes and software development: back to database-centric, ultra-fast processing.

/Karl

Conditional Unique Indexes

Fri, 2013-05-17 04:25

Matrix : What you must learn is that these rules are no different than rules of a computer system. Some of them can be bent, others can be broken. Understand?
 
Usually a unique index guarantees the uniqueness of all rows in a specific table that have non-null values in the indexed columns. But what if some rows, depending on a given business type stored in the row, need to be unique and some do not? Combining Oracle's function-based index feature with a unique index makes this possible.

--
-- SCOPE : OASIS
--
-- DDL:CALLITEMS_UK01           :INDEX.MOD              - TEST
--

DROP   INDEX CALLITEMS_UK01;
-- Rows with UNIQUE_REF <> 1 map every key column to NULL and are therefore
-- not stored in the index, so uniqueness is enforced only where UNIQUE_REF = 1.
CREATE UNIQUE INDEX CALLITEMS_UK01
       ON    CALLITEMS
             ( CASE UNIQUE_REF WHEN 1 THEN UPPER(TRIM(ITEMCODE)) ELSE NULL END
             , CASE UNIQUE_REF WHEN 1 THEN ITEMTYPE ELSE NULL END
             , CASE UNIQUE_REF WHEN 1 THEN ACTIVESINCE ELSE NULL END  )
  TABLESPACE OASISIXM
;

 

Depending on the uniqueness flag UNIQUE_REF, a row may have to be unique relative to others, or not. Maybe this makes sense for specific call item types. In our project, a CALLITEMTYPES table controlled the uniqueness of specific call item types, populating the UNIQUE_REF flag into the CALLITEMS table.
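To illustrate the effect, a hypothetical demonstration (the column values are assumptions; only the names come from the post):

-- UNIQUE_REF = 1: the row is indexed, so a duplicate key is rejected.
INSERT INTO CALLITEMS (ITEMCODE, ITEMTYPE, ACTIVESINCE, UNIQUE_REF)
VALUES ('A100', 'STD', DATE '2013-01-01', 1);   -- ok

INSERT INTO CALLITEMS (ITEMCODE, ITEMTYPE, ACTIVESINCE, UNIQUE_REF)
VALUES ('A100', 'STD', DATE '2013-01-01', 1);   -- fails: ORA-00001

-- UNIQUE_REF = 0: every key column evaluates to NULL, the row is not indexed.
INSERT INTO CALLITEMS (ITEMCODE, ITEMTYPE, ACTIVESINCE, UNIQUE_REF)
VALUES ('A100', 'STD', DATE '2013-01-01', 0);   -- ok, despite the duplicate values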

cheers

/K

InfoQ: Running the Largest Hadoop DFS Cluster

Wed, 2013-03-20 08:46

Since I attended a Big Data event, the Frankfurter Datenbanktage 2013, I have started to take a look at non-relational techniques too. The RDBMS is not, in every aspect, the correct and fulfilling answer to all data-related IT challenges.

I frequently wondered how Facebook could handle such a dramatic amount of users and data growth. I found an interesting presentation by Facebook's HDFS development lead Hairong Kuang on optimizing HDFS (Hadoop DFS) for scalability, storage efficiency and availability.

An RDBMS would not scale to that amount of load; the reasons for that are explained in theory by the CAP theorem, which I will post about later.

Now to the presentation on InfoQ: http://www.infoq.com/presentations/Hadoop-HDFS-Facebook

enjoy

/Karl


Learn from the smaller ones: Native support for ENUM types

Tue, 2012-12-18 08:30
Gerhard, a colleague, is dealing with Postgres databases and was very surprised how many features are very Oracle-like, for example the same concept of sequences as in Oracle. But then he discovered the native enum type support in Postgres and asked me whether the same feature exists in Oracle.
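For comparison, a sketch with assumed type and table names: Postgres declares the enum natively, while in plain Oracle SQL the closest substitute is a check constraint.

-- Postgres: a native enum type.
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

-- Oracle: no native enum; a common substitute is a constrained VARCHAR2.
CREATE TABLE person (
  name VARCHAR2(100),
  mood VARCHAR2(10) CONSTRAINT person_mood_chk
       CHECK (mood IN ('sad', 'ok', 'happy'))
);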

When a Query runs slower on second execution - a possible side effect of cardinality feedback

Tue, 2012-12-18 01:13
After a successful migration from Oracle 10gR2 to Oracle 11gR2, I observed very odd behavior when executing a query. On the first execution the query ran fast, about 0.3 s; on the second it was not faster but dramatically slower, approximately 20 s. This is the opposite of what I have experienced with complex queries (lots of joins, lots of predicates): the first execution carries the extra cost of hard parsing and disk reads if the data is not in the cache, and the second execution, even if the query initially ran for several seconds, completes in a fraction of a second.
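To verify whether cardinality feedback caused the re-optimization, something like the following can be checked (a sketch; USE_FEEDBACK_STATS in V$SQL_SHARED_CURSOR and the underscore parameter are undocumented 11gR2 internals, so use with care):

-- Was a new child cursor created because of cardinality feedback?
SELECT sql_id, child_number, use_feedback_stats
FROM   v$sql_shared_cursor
WHERE  sql_id = '&sql_id';

-- If so, cardinality feedback can be switched off for the session:
ALTER SESSION SET "_optimizer_use_feedback" = FALSE;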