
Development

And so it goes

Greg Pavlik - Sat, 2014-04-19 17:28
Between being flat-out busy and, frankly, finding Twitter a much lazier way to share basic information, this blog has been dormant for too long. In May it will get a lot more attention - too much is going on in the Big Data space, and in Hadoop more specifically, to keep so quiet about it. Time to speak up a bit...

Dynamic ADF Forms with the new Dynamic Component (and synch with DB)

Shay Shmeltzer - Fri, 2014-04-11 17:22

I wrote a couple of blogs in the past about creating dynamic UIs based on a model layer that changes (example1, example2). Well, in 12c there is a new ADF Faces component, af:dynamicComponent, that makes dynamic forms even more powerful. This component can be rendered as various UI components at runtime, which allows us to create fully functional forms and tables in a dynamic way.

In fact, we use this component when you create either a form or a table in your JSF page by dragging over a data control. Instead of specifying each field in your UI, you can now simply say that you want to show all the fields in the data control.

In the demo below I show you how this is done, and then review how your UI automatically updates when you add fields in your model layer. For example, if your DB changed and you used "Synchronize with DB" to add the new field to the VO - that's it, there is no need to go to every page and add the new field.

Check it out:


Categories: Development

Install latest patch of APEX 4.2 (4.2.5)

Dimitri Gielis - Fri, 2014-04-11 02:34
A few days ago Oracle brought out a new patch for APEX 4.2. This will be the final patch set for this build; the next version of APEX will be 5.0.
If you already have APEX 4.2.x installed you can download a patch from support.oracle.com, the patch number is 17966818.
If you have an earlier version of APEX you can download the full version of APEX and install that.
This patch set is no different from the others: it includes some bug fixes, updates to the packaged apps, and the introduction of some new apps. You can find the full patch set notes here.
Installing the patch in my APEX 4.2.4 environment took less than 15 minutes and everything went fine. 
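As a quick sanity check after applying the patch, you can confirm the version APEX reports from SQL (a minimal sketch querying the standard APEX_RELEASE dictionary view; after this patch it should show a 4.2.5.x version number):

select version_no from apex_release;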

I recommend everybody move to this latest version, as it is the final build of APEX 4.2.

Update 16-APR-2014: we actually hit one issue, which was fixed by Oracle today, so I would install this additional patch too. On support.oracle.com, search for Patch 18609856: APEX_WEB_SERVICE.CLOBBASE642BLOB CONVERTS INCORRECTLY.
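If you were affected by that conversion bug, a small round-trip check along these lines (my own sketch, not from the patch notes) can help confirm the fix: encode a BLOB to base64 with APEX_WEB_SERVICE.BLOB2CLOBBASE64, decode it back with CLOBBASE642BLOB, and compare the two LOBs:

set serveroutput on;
declare
  -- a 2000-byte test BLOB built from a simple character pattern
  l_blob blob := to_blob(utl_raw.cast_to_raw(rpad('x', 2000, 'x')));
  l_b64  clob;
  l_back blob;
begin
  l_b64  := apex_web_service.blob2clobbase64(l_blob);
  l_back := apex_web_service.clobbase642blob(l_b64);
  -- dbms_lob.compare returns 0 when both LOBs are identical
  dbms_output.put_line('round-trip compare (0 = identical): '
                       || dbms_lob.compare(l_blob, l_back));
end;
/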
Categories: Development

To_char, Infinity and NaN

XTended Oracle SQL - Mon, 2014-03-31 15:23

It is funny that Oracle can easily cast 'nan', 'inf', 'infinity', '-inf' and '-infinity' to the corresponding binary_float_infinity/binary_double_nan values, but there is no format model for to_char(binary_float_infinity, format) or to_binary_float/to_binary_double(text_expr, format) that can produce the same output as to_char(binary_float_infinity) or to_binary_float('inf') do without a format parameter:

If a BINARY_FLOAT or BINARY_DOUBLE value is converted to CHAR or NCHAR, and the input is either infinity or NaN (not a number), then Oracle always returns the pound signs to replace the value.

A little example:

SQL> select to_binary_float('inf') from dual;

TO_BINARY_FLOAT('INF')
----------------------
                   Inf

SQL> select to_binary_float('inf','9999') from dual;
select to_binary_float('inf','9999') from dual
                       *
ERROR at line 1:
ORA-01722: invalid number

SQL> select
  2     to_char(binary_float_infinity)         without_format
  3    ,to_char(binary_float_infinity,'99999') with_format
  4    ,to_char(1e6d,'99999')                  too_large
  5  from dual;

WITHOUT_F WITH_FORMAT        TOO_LARGE
--------- ------------------ ------------------
Inf       ######             ######

SQL> select to_char(0/0f) without_format, to_char(0/0f,'tme') with_format from dual;

WITHOUT_F WITH_FORMAT
--------- --------------------------------------------------------------------------
Nan       ################################################################
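One possible workaround (just a sketch, not from the original post) is to branch on the IS NAN / IS INFINITE conditions and fall back to the formatted conversion only for finite values:

select x
      ,case
         when x is nan      then 'Nan'
         when x is infinite then case when x > 0 then 'Inf' else '-Inf' end
         else to_char(x, '99990.999')
       end formatted
from (select 0/0f x                  from dual union all
      select binary_float_infinity   from dual union all
      select -binary_float_infinity  from dual union all
      select 1.5f                    from dual);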

PS. This is just a crosspost from my old blog.

Categories: Development

Deterministic functions, result_cache and operators

XTended Oracle SQL - Sun, 2014-03-30 16:51

In previous posts about the caching mechanism of deterministic functions I wrote that cached results are kept only between fetch calls, but there is one exception to this rule: if all function parameters are literals, the cached result is not flushed on every fetch call.
A little example showing the difference:

SQL> create or replace function f_deterministic(p varchar2)
  2     return varchar2
  3     deterministic
  4  as
  5  begin
  6     dbms_output.put_line(p);
  7     return p;
  8  end;
  9  /
SQL> set arrays 2 feed on;
SQL> set serverout on;
SQL> select
  2     f_deterministic(x) a
  3    ,f_deterministic('literal') b
  4  from (select 'not literal' x
  5        from dual
  6        connect by level<=10
  7       );

A                              B
------------------------------ ------------------------------
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal

10 rows selected.

not literal
literal
not literal
not literal
not literal
not literal
not literal

As you can see, 'literal' was printed once, but 'not literal' was printed 6 times for the 10 rows, so it was returned from the cache 4 times.

I also want to show the differences in consistency between:
1. Calling a function with deterministic and result_cache;
2. Calling an operator for a function with result_cache;
3. Calling an operator for a function with deterministic and result_cache.

In this example I will do updates in autonomous transactions to emulate updates from another session during query execution:
Tables and procedures with updates:

drop table t1 purge;
drop table t2 purge;
drop table t3 purge;

create table t1 as select 1 id from dual;
create table t2 as select 1 id from dual;
create table t3 as select 1 id from dual;

create or replace procedure p1_update as
  pragma autonomous_transaction;
begin
   update t1 set id=id+1;
   commit;
end;
/
create or replace procedure p2_update as
  pragma autonomous_transaction;
begin
   update t2 set id=id+1;
   commit;
end;
/
create or replace procedure p3_update as
  pragma autonomous_transaction;
begin
   update t3 set id=id+1;
   commit;
end;
/


Variant 1:

create or replace function f1(x varchar2) return number result_cache deterministic
as
  r number;
begin
   select id into r from t1;
   p1_update;
   return r;
end;
/


Variant 2:

create or replace function f2(x varchar2) return number result_cache
as
  r number;
begin
   select id into r from t2;
   p2_update;
   return r;
end;
/
create or replace operator o2
binding(varchar2)
return number
using f2
/


Variant 3:

create or replace function f3(x varchar2) return number result_cache deterministic
as
  r number;
begin
   select id into r from t3;
   p3_update;
   return r;
end;
/
create or replace operator o3
binding(varchar2)
return number
using f3
/


Test:

SQL> set arrays 2;
SQL> select
  2     f1(dummy) variant1
  3    ,o2(dummy) variant2
  4    ,o3(dummy) variant3
  5  from dual
  6  connect by level<=10;

  VARIANT1   VARIANT2   VARIANT3
---------- ---------- ----------
         1          1          1
         2          1          1
         2          1          1
         3          1          1
         3          1          1
         4          1          1
         4          1          1
         5          1          1
         5          1          1
         6          1          1

10 rows selected.

SQL> /

  VARIANT1   VARIANT2   VARIANT3
---------- ---------- ----------
         7         11         11
         8         11         11
         8         11         11
         9         11         11
         9         11         11
        10         11         11
        10         11         11
        11         11         11
        11         11         11
        12         11         11

10 rows selected.

We can see that function F1 returns the same result only within each pair of rows – this matches the fetch size ("set arraysize 2"), while operators O2 and O3 return the same result for all rows within a single query execution; on the second execution their values have increased by 10, which equals the number of rows.
What we can learn from this:
1. Calling function F1 with result_cache and deterministic reduces the number of function executions, but the function results are inconsistent within the query;
2. Operator O2 returns consistent results, but the function is executed for every row because we invalidate the result_cache on every execution;
3. Operator O3 behaves the same as operator O2, regardless of the function being declared deterministic.
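As a side note (my own addition, not part of the original test scripts), you can watch the result cache entries for F2 and F3 churn between executions by querying V$RESULT_CACHE_OBJECTS; because each call updates the dependent table, the cached results are invalidated rather than reused:

select type, status, name, scan_count
from   v$result_cache_objects
where  name like '%F2%' or name like '%F3%'
order by type, name;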

All tests scripts: tests.zip

Categories: Development

When v$sesstat statistics are updated

XTended Oracle SQL - Thu, 2014-03-20 18:41

Craig Shallahamer wrote an excellent article, “When is v$sesstat really updated?”.
Today's post is just a small addition and correction about the difference in how the 'DB time' and 'CPU used by this session' statistics are updated.

Test #1

In this test I want to show that the statistics are updated after every fetch call.
I have set arraysize=2, so SQL*Plus will fetch 2 rows at a time:
(full script)

-- Result will be fetched by 2 rows:
set arraysize 2;
-- this query generates CPU consumption 
-- in the scalar subquery on fetch phase,
-- so CPU consumption will be separated 
-- into several periods between fetch calls:
with gen as (
            select/*+ materialize */
               level n, lpad(level,400) padding
            from dual
            connect by level<=200
            )
    ,stat as (
            select/*+ inline */
               sid,name,value 
            from v$mystat st, v$statname sn
            where st.statistic#=sn.statistic#
              and sn.name in ('DB time'
                             ,'CPU used by this session'
                             ,'user calls'
                             ,'recursive calls')
            )
--the first row is just to avoid the SQL*Plus effect of fetching 1 row at the start;
-- the others will be fetched in batches of "arraysize" rows:
select null rn,null cnt,null dbtime,null cpu,null user_calls, null recursive_calls from dual
union all -- main query:
select
   rownum rn
  ,(select count(*) from gen g1, gen g2, gen g3 where g1.n>g2.n and g1.n*0=main.n*0) cnt
  ,(select value from stat where sid*0=n*0 and name = 'DB time'                    ) dbtime
  ,(select value from stat where sid*0=n*0 and name = 'CPU used by this session'   ) cpu
  ,(select value from stat where sid*0=n*0 and name = 'user calls'                 ) user_calls
  ,(select value from stat where sid*0=n*0 and name = 'recursive calls'            ) recursive_calls
from gen main
where rownum<=10;
set arraysize 15;

Test results:

SQL> @tests/dbtime

        RN        CNT     DBTIME        CPU USER_CALLS RECURSIVE_CALLS
---------- ---------- ---------- ---------- ---------- ---------------

         1    3980000      12021      11989        200             472
         2    3980000      12021      11989        200             472
         3    3980000      12121      12089        201             472
         4    3980000      12121      12089        201             472
         5    3980000      12220      12186        202             472
         6    3980000      12220      12186        202             472
         7    3980000      12317      12283        203             472
         8    3980000      12317      12283        203             472
         9    3980000      12417      12383        204             472
        10    3980000      12417      12383        204             472

As you can see, the statistics are updated after every fetch call.

Test #2

Now that we have tested a simple SQL query, I want to do a slightly more complicated test with PL/SQL.
I'm going to write a single PL/SQL block with the following algorithm:
1. Save the stats.
2. Execute some PL/SQL code with CPU consumption.
3. Get the statistics difference.
4. Start the query from the first test.
5. Fetch 10 rows.
6. Get the statistics difference.
7. Fetch the next 10 rows.
8. Get the statistics difference.
9. Fetch the next 10 rows.
10. Get the statistics difference.
And after executing this block, I want to check the statistics again.

Full script:

set feed off;

-- saving previous values
column st_dbtime      new_value prev_dbtime      noprint;
column st_cpu_time    new_value prev_cputime     noprint;
column st_user_calls  new_value prev_user_calls  noprint;
column st_recur_calls new_value prev_recur_calls noprint;

select   max(decode(sn.NAME,'DB time'                  ,st.value))*10 st_dbtime
        ,max(decode(sn.NAME,'CPU used by this session' ,st.value))*10 st_cpu_time
        ,max(decode(sn.NAME,'user calls'               ,st.value))    st_user_calls
        ,max(decode(sn.NAME,'recursive calls'          ,st.value))    st_recur_calls
from v$mystat st, v$statname sn
where st.statistic#=sn.statistic# 
  and sn.name in ('DB time','CPU used by this session'
                 ,'user calls','recursive calls'
                 )
/
-- variable for output from pl/sql block: 
var output varchar2(4000);

prompt Executing test...;
----- main test:
declare
   cnt int;
   st_dbtime      number; 
   st_cpu_time    number; 
   st_user_calls  number; 
   st_recur_calls number; 
   cursor c is 
      with gen as (select/*+ materialize */
                     level n, lpad(level,400) padding
                   from dual
                   connect by level<=200)
      select
          rownum rn
        , (select count(*) from gen g1, gen g2, gen g3 where g1.n>g2.n and g1.n*0=main.n*0) cnt
      from gen main
      where rownum<=60;
   
   type ctype is table of c%rowtype;
   c_array ctype;
   
   procedure SnapStats(descr varchar2:=null)
   is
      st_new_dbtime      number;
      st_new_cpu_time    number;
      st_new_user_calls  number;
      st_new_recur_calls number;
   begin
      select   max(decode(sn.NAME,'DB time'                 ,st.value))*10 st_dbtime
              ,max(decode(sn.NAME,'CPU used by this session',st.value))*10 st_cpu_time
              ,max(decode(sn.NAME,'user calls'              ,st.value))    st_user_calls
              ,max(decode(sn.NAME,'recursive calls'         ,st.value))    st_recur_calls
          into st_new_dbtime,st_new_cpu_time,st_new_user_calls,st_new_recur_calls
      from v$mystat st, v$statname sn
      where st.statistic#=sn.statistic#
        and sn.name in ('DB time','CPU used by this session'
                       ,'user calls','recursive calls'
                       );
      if descr is not null then
         :output:= :output || descr ||':'||chr(10)
                || 'sesstat dbtime:     ' || (st_new_dbtime      - st_dbtime      )||chr(10)
                || 'sesstat cputime:    ' || (st_new_cpu_time    - st_cpu_time    )||chr(10)
                || 'sesstat user calls: ' || (st_new_user_calls  - st_user_calls  )||chr(10)
                || 'sesstat recur calls:' || (st_new_recur_calls - st_recur_calls )||chr(10)
                || '======================================'||chr(10);
      end if;
      st_dbtime      := st_new_dbtime     ;
      st_cpu_time    := st_new_cpu_time   ;
      st_user_calls  := st_new_user_calls ;
      st_recur_calls := st_new_recur_calls;
   end;
   
begin
   -- saving previous stats:
   SnapStats;

   -- generating cpu load:
   for i in 1..1e7 loop
      cnt:=cnt**2+cnt**1.3-cnt**1.2;
   end loop;
   -- getting new stats:
   SnapStats('After pl/sql loop');
   
   open c;
   SnapStats('After "open c"');
   fetch c bulk collect into c_array limit 10;
   SnapStats('After fetch 10 rows');
   fetch c bulk collect into c_array limit 10;
   SnapStats('After fetch 20 rows');
   fetch c bulk collect into c_array limit 10;
   SnapStats('After fetch 30 rows');
   close c;
   SnapStats('After close c');
end;
/ 

prompt 'Delta stats after statement(ms):';
select   max(decode(sn.NAME,'DB time'                 ,st.value))*10
          - &&prev_dbtime      as delta_dbtime
        ,max(decode(sn.NAME,'CPU used by this session',st.value))*10
          - &&prev_cputime     as delta_cpu_time
        ,max(decode(sn.NAME,'user calls'              ,st.value))  
          - &&prev_user_calls  as delta_user_calls
        ,max(decode(sn.NAME,'recursive calls'         ,st.value))  
          - &&prev_recur_calls as delta_recur_calls
from v$mystat st, v$statname sn
where st.statistic#=sn.statistic# 
  and sn.name in ('DB time','CPU used by this session'
                 ,'user calls','recursive calls'
                 )
/
prompt 'Test results:';
col output format a40;
print output;
set feed off;

Output:

SQL> @tests/dbtime2

Executing test...
'Delta stats after statement(ms):'

DELTA_DBTIME DELTA_CPU_TIME DELTA_USER_CALLS DELTA_RECUR_CALLS
------------ -------------- ---------------- -----------------
       18530          18460                5                33

Test results:
OUTPUT
----------------------------------------
After pl/sql loop:
sesstat dbtime:     0
sesstat cputime:    4350
sesstat user calls: 0
sesstat recur calls:2
======================================
After "open c":
sesstat dbtime:     0
sesstat cputime:    20
sesstat user calls: 0
sesstat recur calls:4
======================================
After fetch 10 rows:
sesstat dbtime:     0
sesstat cputime:    4680
sesstat user calls: 0
sesstat recur calls:2
======================================
After fetch 20 rows:
sesstat dbtime:     0
sesstat cputime:    4680
sesstat user calls: 0
sesstat recur calls:2
======================================
After fetch 30 rows:
sesstat dbtime:     0
sesstat cputime:    4690
sesstat user calls: 0
sesstat recur calls:2
======================================
After close c:
sesstat dbtime:     0
sesstat cputime:    0
sesstat user calls: 0
sesstat recur calls:3
======================================

We can see that 'CPU used by this session' is updated at the same time as 'recursive calls', but 'DB time' is updated only together with 'user calls'. Although this difference is not that important (in most cases we can use other statistics in combination), I think that if you want to instrument some code, it is a good reason to check when the statistics you care about are actually updated.
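For instance (this is my own suggestion to verify, not a conclusion from the tests above), V$SESS_TIME_MODEL also exposes 'DB time' and 'DB CPU' for the current session, and it is worth running the same fetch-by-fetch test against it to see whether it follows the 'user calls' or the 'recursive calls' update pattern:

select stat_name, round(value/1000) value_ms
from   v$sess_time_model
where  sid = sys_context('userenv','sid')
  and  stat_name in ('DB time','DB CPU');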

Categories: Development

New Continuous Integration tutorial published

Lynn Munsinger - Mon, 2012-07-02 09:44
Hot off the press – a new continuous integration tutorial. It’s really not just about continuous integration, though! You’ll find it useful even if you aren’t using a continuous integration server like Hudson. It’s useful if you are doing any part of the scenario it documents: Setting up Team Productivity Center for your team and [...]

Advanced ADF eCourse, Part Deux

Lynn Munsinger - Tue, 2012-06-19 15:11
In February, we published the first in a series of FREE(!) online advanced ADF training: http://tinyurl.com/advadf-part1 The response to that course has been overwhelmingly positive as more and more people are moving past the evaluation/prototype stages with ADF and looking for more advanced topics. I’m pleased to relay the good news that the 2nd part [...]

Fun with Hudson, Part 1.1

Lynn Munsinger - Tue, 2012-06-05 09:19
Earlier I posted that I had used the following zip command in the 'execute shell' action for my Hudson build job: zip -r $WORKSPACE/builds/$JOB_NAME-$BUILD_NUMBER * -x '*/.svn/*' -x '*builds/*' This zips up the content of the exported source, so that I can send it on to team members who need the source of each build [...]

Hiring a Curriculum Developer

Lynn Munsinger - Tue, 2012-05-15 09:34
If you are an instructional designer with an eye for technologies like ADF, or if you are an ADF enthusiast and excel at creatively producing technical content, then ADF Product Management would like to hear from you. We’re looking for a curriculum developer to join our ADF Curriculum team, which is tasked with ensuring that [...]

New ADF Insider on Layouts

Lynn Munsinger - Mon, 2012-03-26 13:22
I’ve published an ADF Insider session that helps de-mystify the ADF Faces components and how to work with them (and not against them), when building ADF applications. There’s also some great information on building ADF prototypes. Take a look here: http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/layouts/layouts.html
