Feed aggregator

Announcing “SLOB Recipes”

Kevin Closson - Fri, 2015-07-17 10:28

I’ve started updating the SLOB Resources page with links to “recipes” for certain SLOB testing. The first installment is the recipe for loading 8TB scale SLOB 2.3 Multiple Schema Model with a 2-Socket Linux host attached to EMC XtremIO. Recipes will include (at a minimum) the relevant SLOB program output (e.g., setup.sh or runit.sh), init.ora and slob.conf.

Please keep an eye on the SLOB Resources page for updates…and don’t miss the first installment. It’s quite interesting.

SLOB-recipes


Filed under: oracle

Monitoring and Diagnostics without Oracle Enterprise Manager

Hemant K Chitale - Fri, 2015-07-17 07:22
Here's a copy of a presentation I made at Singapore Oracle Sessions - III ,14-Jul-2015 on the topic of Monitoring and Diagnostics without Oracle Enterprise Manager.

[Embedded presentation: Monitoring and Diagnostics without Oracle Enterprise Manager]

Categories: DBA Blogs

Using JRE 1.8 for the Liberty Buildpack in Bluemix

Pas Apicella - Fri, 2015-07-17 06:17
The latest Liberty buildpack update now adds the ability to switch to the JRE 1.8 runtime. Given I am now switching to JDK 1.8 for all my projects this has come in very handy, and of course Spring Boot favors JDK 1.8 as well. The default is still JRE 1.7, so to switch to 1.8 set an ENV variable in your manifest.yml as shown below.

manifest.yml

applications:
- name: pas-myapp
  memory: 512M
  instances: 1
  host: pas-myapp
  domain: mybluemix.net
  path: ./target/myapp-1.0-SNAPSHOT.jar
  env:
   JBP_CONFIG_IBMJDK: "version: 1.8.+"

More Information

https://developer.ibm.com/bluemix/2015/05/05/liberty-buildpack-updates-java-8-java-ee-7-updates/
Categories: Fusion Middleware

SQL Server 2016: In-Memory OLTP enhancement

Yann Neuhaus - Fri, 2015-07-17 05:19

The CTP 2.1 has been released for some weeks now and it is time to check which interesting enhancements have already been shipped for the In-Memory OLTP feature.
I will first have a look at memory-optimized tables.

Memory Optimized table

I have a Memory Optimized table named command_queue_HK and I will add a column to this table. This table is saved on my disk in an XTP folder with its corresponding files:

b2ap3_thumbnail_In-memory1.jpg

I execute the following command:

b2ap3_thumbnail_In-memory2.jpg
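The statement behind that screenshot is a plain ALTER TABLE; a minimal sketch (the column name and data type here are made up, not taken from the screenshot):

-- Adding a column to a memory-optimized table is now supported in SQL Server 2016
ALTER TABLE dbo.command_queue_HK
    ADD new_column_HK INT NULL;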

If I have a look at my XTP folder I see that I have six new files:

b2ap3_thumbnail_In-memory3.jpg

Does it mean that each time I run an ALTER TABLE I get six new files? Let’s try another ALTER TABLE:

b2ap3_thumbnail_In-memory4.jpg

Go back to my XTP container:

b2ap3_thumbnail_In-memory5.jpg

I have six new files for my last alter. Each time you run an ALTER TABLE you create six new files corresponding to the new structure of your table… which could be really disk-consuming if you alter your tables a lot… something that hopefully does not happen too often.
For the time being I cannot find a process which cleans up the obsolete files, but when you restart your SQL Server engine this cleanup runs, deletes all files corresponding to your tables and recreates just six for the current structure:

b2ap3_thumbnail_In-memory6.jpg

This ALTER TABLE is an offline process and needs twice the size of the table in memory… don’t forget it.

It is also possible now to use Non-BIN2 collations in index key columns:

b2ap3_thumbnail_In-memory7.jpg
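A minimal sketch of what that allows (table and column names are hypothetical): an index key column of a memory-optimized table no longer needs a *_BIN2 collation.

CREATE TABLE dbo.customer_HK
(
    customer_id   INT NOT NULL PRIMARY KEY NONCLUSTERED,
    -- Non-BIN2 collation on an index key column, now allowed in SQL Server 2016
    customer_name NVARCHAR(100) COLLATE Latin1_General_CI_AS NOT NULL,
    INDEX ix_customer_name NONCLUSTERED (customer_name)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);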

Some functionalities are not yet available (non-exhaustive list).

Foreign Keys for Memory Optimized table:

b2ap3_thumbnail_In-memory8.jpg

Check constraint:

b2ap3_thumbnail_In-memory9.jpg

Natively Compiled Stored Procedure

For Natively Compiled Stored Procedures we also have some enhancements, and as for memory-optimized tables the first one is the possibility to alter them.
If I check in my container folder I see the six files for my Natively Compiled Stored Procedure:

b2ap3_thumbnail_In-memory10.jpg

If I right-click on my SP in Management Studio I see that I have two possibilities to update my SP:

  • select Modify
  • go through “Script Stored Procedure as”, “ALTER to…”, select where to script it

Once my Stored Procedure is scripted, I can make the modifications I need and execute the code:

b2ap3_thumbnail_In-memory11.jpg
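The ALTER statement executed there keeps the usual natively compiled options; a sketch with hypothetical names and body (not taken from the screenshot):

ALTER PROCEDURE dbo.usp_queue_command_HK
    @command NVARCHAR(200)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- hypothetical body: the real procedure and table columns are not shown in the post
    INSERT INTO dbo.command_queue_HK (command) VALUES (@command);
END;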


Once the code is executed, I don’t get six additional files as for tables, but directly six new files and only six:

b2ap3_thumbnail_In-memory12.jpg

Executing a function in a Natively Compiled Stored Procedure is also available, but only with natively compiled modules:

b2ap3_thumbnail_In-memory13.jpg
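Such a function has to be natively compiled itself; a sketch of what that looks like (hypothetical name, and assuming the syntax of the released SQL Server 2016 build):

CREATE FUNCTION dbo.fn_add_one (@value INT)
RETURNS INT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- natively compiled scalar UDF, callable from natively compiled procedures
    RETURN @value + 1;
END;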

It does not work if we try to call a nested Stored Procedure.

As for memory-optimized tables, some functionalities are not yet available in this Community Technology Preview 2.1.

Subqueries are also not available for the time being:

b2ap3_thumbnail_In-memory14.jpg

There are interesting enhancements for In-Memory OLTP already available with this CTP 2.1, like ALTER TABLE or altering a Natively Compiled Procedure, but some important ones, like foreign keys or check constraints for In-Memory tables, are not there yet. We will have to wait for the RTM version to see which of the remaining caveats are fixed.

Descending Indexes

Jonathan Lewis - Fri, 2015-07-17 01:42

I’ve written about optimizer defects with descending indexes before now but a problem came up on the OTN database forum a few days ago that made me decide to look very closely at an example where the arithmetic was clearly defective. The problem revolves around a table with two indexes, one on a date column (TH_UPDATE_TIMESTAMP) and the other a compound index which starts with the same column in descending order (TH_UPDATE_TIMESTAMP DESC, TH_TXN_CODE). The optimizer was picking the “descending” index in cases where it was clearly the wrong index (even after the statistics had been refreshed and all the usual errors relating to date-based indexes had been discounted). Here’s an execution plan from the system which shows that there’s something wrong with the optimizer:


SELECT COUNT(*) FROM TXN_HEADER WHERE TH_UPDATE_TIMESTAMP BETWEEN SYSDATE-30 AND SYSDATE;

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |       |       |     4 (100)|          |
|   1 |  SORT AGGREGATE    |                  |     1 |     8 |            |          |
|*  2 |   FILTER           |                  |       |       |            |          |
|*  3 |    INDEX RANGE SCAN| TXN_HEADER_IDX17 |  1083K|  8462K|     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

There are two clues here: first, Oracle has used the (larger) two-column index when the single column is (almost guaranteed to be) the better choice simply because it will be smaller; secondly, we have a cost of 4 to acquire 1M rowids through an (implicitly b-tree) index range scan, which would require at least 250,000 index entries per leaf block to keep the cost that low (or 2,500 if you set the optimizer_index_cost_adj to 1; so it might just be possible if you had a 32KB block size).
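Informally, the arithmetic behind that observation looks like this (a sketch):

-- roughly 1,083,000 rowids spread over a cost of 4 block visits
select round(1083000 / 4) as entries_per_leaf_block from dual;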

The OP worked hard to supply information we asked for, and to follow up any suggestions we made; in the course of this we got the information that the table had about 90M rows and this “timestamp” column had about 45M distinct values ranging from 6th Sept 2012 to 2nd July 2015 with no nulls.

Based on this information I modelled the problem in an instance of 11.2.0.4 (the OP was using 11.2.0.3).


create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                                                          id,
        to_date('06-SEP-2012 15:13:00','dd-mon-yyyy hh24:mi:ss') +
                trunc((rownum - 1 )/4) / (24 * 60)                      upd_timestamp,
        mod(rownum - 1,10)                                              txn_code,
        rpad('x',50)                                                    padding
from
        generator       v1,
        generator       v2
where
        rownum <= 4 * 1030  *  24 * 60
;

create index t1_asc on t1(upd_timestamp) nologging;
create index t1_desc on t1(upd_timestamp desc) nologging;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1',
                cascade          => false
        );
end;
/

My data set has 4 rows per minute from 6th Sept 2012 to 3rd July 2015, with both an ascending and descending index on the upd_timestamp column. For reference, here are the statistics about the two indexes:


INDEX_NAME               SAMPLE     BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR
-------------------- ---------- ---------- ----------- -----------------
T1_DESC                 5932800          2       17379            154559
T1_ASC                  5932800          2       15737             59825

The ascending index is smaller with a significantly better clustering_factor, so for queries where either index would be a viable option the ascending index is the one (we think) that the optimizer should choose. In passing, the 5.9M index entries is exactly the number of rows in the table – these stats were computed automatically as the indexes were created.
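For reference, these figures can be pulled from the data dictionary with something like the following (a sketch; the usual user_indexes columns are assumed):

select
        index_name, sample_size, blevel, leaf_blocks, clustering_factor
from
        user_indexes
where
        table_name = 'T1'
order by
        index_name
;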

Here’s a simple query with execution plan (with rowsource execution stats):

select max(id) from t1 where upd_timestamp between
to_date('01-jun-2015','dd-mon-yyyy')                and
to_date('30-jun-2015','dd-mon-yyyy')

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |      1 |        |    29 (100)|      1 |00:00:01.29 |    4710 |
|   1 |  SORT AGGREGATE              |         |      1 |      1 |            |      1 |00:00:01.29 |    4710 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1      |      1 |    167K|    29   (0)|    167K|00:00:00.98 |    4710 |
|*  3 |    INDEX RANGE SCAN          | T1_DESC |      1 |    885 |     5   (0)|    167K|00:00:00.35 |     492 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."SYS_NC00005$">=HEXTORAW('878CF9E1FEF8FEFAFF')  AND
              "T1"."SYS_NC00005$"<=HEXTORAW('878CF9FEF8FEF8FF') )
       filter((SYS_OP_UNDESCEND("T1"."SYS_NC00005$")>=TO_DATE(' 2015-06-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND SYS_OP_UNDESCEND("T1"."SYS_NC00005$")<=TO_DATE(' 2015-06-30 00:00:00',
              'syyyy-mm-dd hh24:mi:ss')))

The optimizer’s index estimate of 885 rowids is a long way off the actual rowcount of 167,000 – even though I have perfect stats describing uniform data and the query is simple enough that the optimizer should be able to get a good estimate. Mind you, the predicate section looks a lot messier than you might expect, and the table estimate is correct (though not consistent with the index estimate, of course).

Here’s the plan I get when I make the descending index invisible:


--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |        |  2146 (100)|      1 |00:00:01.25 |    2134 |
|   1 |  SORT AGGREGATE              |        |      1 |      1 |            |      1 |00:00:01.25 |    2134 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |    167K|  2146   (1)|    167K|00:00:00.94 |    2134 |
|*  3 |    INDEX RANGE SCAN          | T1_ASC |      1 |    167K|   453   (2)|    167K|00:00:00.31 |     446 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("UPD_TIMESTAMP">=TO_DATE(' 2015-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "UPD_TIMESTAMP"<=TO_DATE(' 2015-06-30 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

With the descending index invisible we can see that the cardinality estimate for the ascending index is correct, and notice how much higher this makes the cost. It’s not surprising that the optimizer picked the wrong index with such a difference in cost. The question now is why did the optimizer get the index cardinality (i.e. selectivity, hence cost) so badly wrong.

The answer is that the optimizer has made the same mistake that applications make by storing dates as character strings. It’s using the normal range-based calculation for the sys_op_descend() values recorded against the virtual column and has lost all understanding of the fact that these values represent dates. I can demonstrate this most easily by creating one more table, inserting a couple of rows, gathering stats, and showing you what the internal storage of a couple of “descending” dates looks like.


drop table t2;

create table t2 (d1 date);
create index t2_i1 on t2(d1 desc);

insert into t2 values('01-Jun-2015');
insert into t2 values('30-Jun-2015');
commit;

select d1, sys_nc00002$ from t2;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T2',
                method_opt       => 'for all columns size 1',
                cascade          => false
        );
end;
/

column endpoint_value format 999,999,999,999,999,999,999,999,999,999,999,999
break on table_name skip 1

select
        table_name, column_name, endpoint_number, endpoint_value, to_char(endpoint_value,'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX') end_hex
from
        user_tab_histograms
where
        table_name in ('T1','T2')
and     column_name like 'SYS%'
order by
        table_name, column_name, endpoint_number
;

I’ve put the dates 1st June 2015 and 30th June 2015 into the table because those were the values I used in the where clause of my query. Here are the results showing the internal representation of the (descending) index column and the stored low and high values for the virtual columns in both t1 and t2.


D1        SYS_NC00002$
--------- ------------------------
01-JUN-15 878CF9FEF8FEF8FF
30-JUN-15 878CF9E1FEF8FEFAFF

TABLE_NAME           COLUMN_NAME          ENDPOINT_NUMBER                                   ENDPOINT_VALUE END_HEX
-------------------- -------------------- --------------- ------------------------------------------------ ---------------------------------
T1                   SYS_NC00005$                       0  703,819,340,111,320,000,000,000,000,000,000,000    878CF8FCEFF30BCEBC8823F7000000
                     SYS_NC00005$                       1  703,880,027,955,346,000,000,000,000,000,000,000    878FF6F9EFF20256F3B404F7400000

T2                   SYS_NC00002$                       0  703,819,411,001,549,000,000,000,000,000,000,000    878CF9E1FEF8F24C1C7CED46200000
                     SYS_NC00002$                       1  703,819,419,969,389,000,000,000,000,000,000,000    878CF9FEF8FEEED28AC5B22A200000

If you check the t2 data (sys_nc00002$) values against the end_hex values in the histogram data you’ll see they match up to the first 6 bytes (12 digits). Oracle has done its standard processing – take the first 6 bytes of the column, convert to decimal, round to the most significant 15 digits (technically round(value, -21)), convert and store the result as hex.

So let’s do some arithmetic. The selectivity of a range scan that is in-bounds is (informally): “range we want” / “total range”. I’ve set up t2 to show us the values we will need to calculate the range we want, and I’ve reported the t1 values to allow us to calculate the total range; we just have to subtract the lower value (endpoint number 0) from the higher value for the two sys_nc0000N$ columns. So (ignoring the 21 zeros everywhere) our selectivity is:

  • (703,819,419,969,389 – 703,819,411,001,549) / ( 703,880,027,955,346 – 703,819,340,111,320) = 0.00014777
  • We have 5.9M rows in the table, so the cardinality estimate should be about: 5,932,800 * 0.00014777 = 876.69

The actual cardinality estimate was 885 – but we haven’t allowed for the exact form of the range-based predicate: we should add 1/num_distinct at both ends because the range is a closed range (greater than or EQUAL to, less than or EQUAL to) – which takes the cardinality estimate up to 884.69 which rounds to the 885 that the optimizer produced.
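The same arithmetic can be reproduced directly in SQL (a sketch using the endpoint values reported above, ignoring the common factor of 1e21):

select
        (703819419969389 - 703819411001549) /
        (703880027955346 - 703819340111320)                 as selectivity,
        5932800 *
        (703819419969389 - 703819411001549) /
        (703880027955346 - 703819340111320)                 as cardinality
from
        dual
;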

Conclusion

This note shows that Oracle is (at least for dates) losing information about the underlying data when dealing with the virtual columns associated with descending columns in indexes. As demonstrated that can lead to extremely bad selectivity estimates for predicates based on the original columns, and these selectivity estimates make it possible for Oracle to choose the wrong index and introduce a spuriously low cost into the calculation of the full execution plan.

Footnote

This note seems to match bug note 11072246 “Wrong Cardinality estimations for columns with DESC indexes”, but that bug is reported as fixed in 12.1, while this test case reproduces in 12.1.0.2.

 


Error referenced ‘irman ioracle’ during binary installation

DBASolved - Thu, 2015-07-16 19:27

This week I decided to redo a few of my VirtualBox machines and build a few new ones for testing. Getting Oracle Linux 6.6 installed was a breeze; however, when I started to install the Oracle Database 12c binaries I hit errors during the linking phase. This had me puzzled for a bit, to say the least. I kept getting this error:

After driving myself nuts, I decided to look closer at what was going on. The file listed in the error message contained a reference to ‘irman ioracle’. The actual message was:

So how did I get past this issue? The message is referring to a linking issue of the Oracle binaries. The issue is due to the libjavavm12.a file not being placed in the $ORACLE_HOME/lib. In order to fix this, I had to run:

cp /opt/app/oracle/product/12.1.0.2/dbhome_1/javavm/jdk/jdk6/lib/libjavavm12.a /opt/app/oracle/product/12.1.0.2/dbhome_1/lib/

Once this was run, the installation completed without error and the other configuration assistants were able to complete successfully.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Database
Categories: DBA Blogs

ODI Agent

Dylan's BI Notes - Thu, 2015-07-16 18:55
An ODI agent is a process that runs data integration jobs. In ODI, the data integration jobs are scenarios. Each scenario can be regarded as an executable program. However, we do not run those ODI scenarios directly; these jobs are executed by the ODI agent. – An ODI agent is a Java program.  It connects to […]
Categories: BI & Warehousing

Beat 39

Floyd Teter - Thu, 2015-07-16 17:00
Let's start today's thought with a tidbit from the Standish Group's 2013 Chaos Report.  In that report, the Standish Group cheerfully shares that IT project success came in at 39%...cheerful because that is an improvement.  In other words, 6 out of 10 IT projects are failing to meet schedule, cost and quality objectives and we're thinking that's good news.  Yikes!!!

If we look at the numbers in SaaS carefully - regardless of vendor - we see a pretty consistent gap between sales and "go live".  Guess how large the gap is?  Yeah, about 61%.  Arithmetic anybody?  Granted that my access to data is somewhat limited here but, even with my small sample size, it's one of those things that make me "stop and go hmmm".

The upshot?  In the developing space of SaaS, I think we may have all underestimated the level of difficulty in implementing those nifty SaaS applications.  At the very least, it seems like we're missing the boat on how to move from vision to achievement.

Enablement.  SaaS customers need tools that ease the implementation and use of the applications.  And preferably things that scale...inventing the tool every time you tackle the project buys nothing but headaches.  But I think good tools for enablement are the key if we're ever going to "Beat 39".

More on this in later posts.  I think I may be focusing on this for a bit.



Oracle Priority Support Infogram for 16-JUL-2015

Oracle Infogram - Thu, 2015-07-16 15:33

Time to Patch!
The British actor David Niven once said: “After 50 it’s just patch, patch, patch”. Technology doesn’t wait until you are 50:
Oracle Critical Patch Update for July 2015

Dear Oracle Security Alert Subscriber,

The Critical Patch Update for July 2015 was released on July 14th, 2015.
Oracle strongly recommends applying the patches as soon as possible.

The Critical Patch Update Advisory is the starting point for relevant information. It includes the list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities for each product suite, and links to other important documents. Supported products that are not listed in the "Affected Products and Components" section of the advisory do not require new patches to be applied.

Also, it is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches, as this is where you can find important pertinent information. Critical Patch Update Advisories are available at the following location:

Oracle Technology Network: http://www.oracle.com/technetwork/topics/security/alerts-086861.html

The Critical Patch Update Advisory for July 2015 is available at the following location:

Oracle Technology Network: http://www.oracle.com/technetwork/topics/security/cpujul2015-2367936.html

Important information can also be found at: https://blogs.oracle.com/security/

The next four dates for Critical Patch Updates are:
October 20, 2015; January 19, 2016; April 19, 2016; July 19, 2016
RDBMS
Implementing a Mutually Exclusive Relationship in the Database, from All Things SQL.
Big Data
OBI11g Analyses Hadoop data directly (without ETL) using Oracle Big Data SQL, from the BI & EPM Partner Community EMEA blog.
WebLogic
OTD active/standby failover, from the WebLogic Partner Community EMEA blog.
And from the same source:
Encrypt & decrypt your way into WebLogic SSL
Java
Develop Non-Blocking Web Applications in Java, from The Java Source.
SOA
Managing Idempotence in SOA Suite, from the SOA & BPM Partner Community Blog.
From the same blog comes this interesting series on some of the controversies in the SOA community: SOA Mythbusters
WebCenter
Part 1: An Overview of the Oracle.com Localization Framework in WCS, from PDIT Collaborative Application Services.
From Proactive Support - WebCenter Content: Oracle WebCenter Content (WCC) 11.1.1.8.13 Bundle Patch is Here!
OBIEE
OBIEE 11g 11.1.1.7.150714 Bundle Patch is Available for Oracle Business Intelligence Enterprise Edition and Exalytics, from Business Analytics - Proactive Support.
Hyperion
Patch Set Update: Hyperion Calculation Manager 11.1.2.4.003, from Business Analytics - Proactive Support.
Ops Center
New Books in 12.3, from the Ops Center blog.
Oracle Technology
It’s hard to put Jeff Taylor’s Weblog into a single product box, as you can tell from the wide-ranging list of blog posts in this Table of Contents at his blog.
EBS
From the Oracle E-Business Suite Support blog:
Webcast: How to Reconcile Certain ATP Date Calculations (ODS vs. PDS)
From the Oracle E-Business Suite Technology blog:
Using Logical Hostnames in E-Business Suite Release 12.x
JRE 1.8.0_51 Certified with Oracle E-Business Suite
JRE 1.7.0_85 Certified with Oracle E-Business Suite
Java JRE 1.6.0_101 Certified with Oracle E-Business Suite
Critical Patch Update for July 2015 Now Available
Accessing Oracle E-Business Suite from Mobile Apps
Oracle Active Data Guard 11.2.0.4, 12.1.0.1 Certified with EBS 12.1.3
Oracle Active Data Guard 11.2.0.4 Certified with EBS 12.2.3
Oracle Data Guard 11.2.0.4 Certified with EBS 12.2
Oracle Data Guard 12.1.0.2 Certified with EBS 12.1.3
DB 12.1.0.2 Certified with EBS 12.1.3 on Linux on IBM System z


First patch set (5.0.1) released for APEX 5.0

Dimitri Gielis - Thu, 2015-07-16 15:22
I know some people wait for the first patch set that becomes available after a major Oracle APEX release... today you no longer have to wait: APEX 5.0.1 is now available.

In the patch set notes you can read what changed.

If you're still on APEX 4.x you can go immediately to APEX 5.0.1; you only need to download the latest version from OTN.

If you're already on APEX 5.0, you can download the patch set from support.oracle.com by searching for patch number 21364820. Applying the patch took less than 5 minutes in my environment.
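If you want to confirm which version you ended up on, a quick check from SQL (a sketch; the exact version string will vary by build):

-- Shows the installed APEX version, e.g. a 5.0.1.x string after patching
select version_no
from   apex_release;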



This patch set updates the Universal Theme too, so don't forget to update your images folder. When you log in to APEX after the patch, it will check whether you have the correct images folder; if not, it will give you an alert. Although I updated the images directory I still got that alert due to browser caching. Refresh your page and it should be ok.

Note that it's important to be on APEX 5.0.1 when you use APEX Office Print - currently available in beta for a select audience, with a public release at the end of this month (July 2015). Behind the scenes we use the APEX_JSON and APEX_WEB_SERVICE packages, which got an update in APEX 5.0.1.

And finally, there's a nice new D3 chart available in APEX 5.0.1 called "D3 Collapsible Treemap Chart".


Happy upgrading...

Update 18-JUL-2015: there are still a couple of known issues; please read those too and install the patch set exceptions where necessary.
Categories: Development

How to Hide Actions in OBPM 12c Workspace

Jan Kettenis - Thu, 2015-07-16 12:17
In this article I explain how to hide the actions in the drop-down in Workspace.

In some situations you may need to hide the actions that are shown in the Actions drop-down in Workspace.


One way to do so is by configuring the access that users with a specific Workspace role have for a specific task (not to be confused with a swim-lane role), by going to the task definition -> Access -> Actions. For example, if you want to prevent an assignee from acquiring or reassigning a task, you can uncheck the "Acquire" and "Reassign" check boxes in the "Assignees" column.


You can also uncheck the outcomes, for example the "APPROVE" and "REJECT" actions in the picture above. However, this means that the assignee cannot choose the outcomes at all, because then the buttons are not rendered either. When you uncheck all outcomes, this practically means that the assignee cannot execute the activity at all, which is probably not what you want. As a matter of fact, you will also not be able to commit the task using the updateTaskOutcome() operation on the TaskService, as you will get an error when trying to do so.



A more practical case for hiding the outcomes from the drop-down menu is where the user should not be able to choose them from there, but should be able to choose the actions using buttons on the screen. An example would be where you need to submit data through the form, because it has to update data in the database directly (instead of via a service call in the process). This you can do through the Configure option in the task definition.


When you check "Require payload review before approval", the user will not be able to choose any action from the drop-down menu. However, the buttons will be available on the screen.

Dealing with SSRS subscription schedules in AlwaysOn environment

Yann Neuhaus - Thu, 2015-07-16 11:00

A couple of days ago, I had the opportunity to work with an AlwaysOn 2014 infrastructure that included a report server (SSRS) in native and scale-out mode. As you know, dealing with SSRS and availability groups is not an easy task, depending on the features used. This is particularly true when scheduling and subscriptions are used by customers.

Here’s a brief description of the concerned architecture:

 

blog_57-_0_-_infra_aag_ssrs

 

2 SSRS servers in scale-out mode and an availability group with 2 replicas. The corresponding availability group configuration is shown below:

 

blog_57-_01_-_config_aagjpg

 

First of all, schedules are tied to SQL Server agent jobs and according to the BOL, there are some precautions to take in this case:

  • A schedule is created on the primary replica and is not automatically replicated to the other secondaries. In the same manner, a deleted schedule will not be replicated to other secondaries
  • After the database failover completes and after the report server service is re-started, SQL Server Agent jobs will be re-created automatically. Until the SQL agent jobs are recreated, any background executions associated with SQL Server Agent jobs will not be processed. This includes Reporting Services subscriptions, schedules, and snapshots

These two points introduce some challenges in the management stuff for the following reasons:

We must detect an availability group failover event in order to take the corresponding actions:

  • Disable the concerned SQL Server agent jobs on the secondaries to avoid monitoring false errors
  • Enable the SSRS agent jobs on the primary
  • Ensure that all of the SSRS jobs are synchronized between replicas when they are used as primary (schedules can be created or deleted between failovers)


How can we proceed in this case?

In fact there are different ways to meet these requirements, and you can probably find some of them on the internet. For my part, I decided to go with a solution based on our DMK monitoring tool for AlwaysOn, which consists of:

  • A stored procedure that detects the availability group failover events. It will run on each replica on scheduled basis and will return the availability groups related to a failover event and also their new primary replica.
  • A SQL Server agent job that gets information from the above stored procedure and performs the necessary actions described below:
  • Disable the SSRS scheduling jobs if the replica’s new role is secondary
  • Drop the SSRS scheduling jobs and then restart one of the SSRS servers in the farm in order to resynchronize all the schedule jobs on the new replica (PowerShell script)

At this point, I just want to point out that the executed scheduling jobs may fail in a transitioning state according to BOL. Take a look at the following points:

  • The execution of background processing may occur more than once due to retry logic and the inability of the report server to mark scheduled work as completed during the failover period
  • The execution of background processing that would have normally been triggered to run during the period of the failover will not occur because SQL Server Agent will not be able to write data into the report server database and this data will not be synchronized to the new primary replica.


The DMK stored procedure

This script is part of our DMK AlwaysOn monitoring script. When a failover is triggered for an availability group, the stored procedure gives as output the concerned availability group and the new primary replica.

 

 

:connect sql161

exec dbi_tools.[dbo].[dbi_alwayson_failover_detect]
go

:connect sql162

exec dbi_tools.[dbo].[dbi_alwayson_failover_detect]
go

:connect sql163

exec dbi_tools.[dbo].[dbi_alwayson_failover_detect]

 

blog_57-_1_-_dmk_monitoring_aag_script


 

Once the availability group failover has been detected, the stored procedure provides no output on subsequent executions, to avoid a false positive:

 

blog_57-_2_-_dmk_monitoring_aag_script_2

 

PowerShell script:

The PowerShell script automatically gets the availability group that the ReportServer database is a member of. Then it detects whether a failover has occurred for the concerned availability group and performs the extra steps needed to update the reporting environment to use the new availability replica correctly.

Moreover, note that we get the report server names in the scale-out infrastructure from the dbo.Keys table in the ReportServer database. Then we try to restart the first report server in the farm and we stop if it succeeds (we only need to restart one of the report servers in order to recreate all of the SSRS jobs). If the restart fails, we move on to the next server and try to restart it, and so on…

 

<#
   .SYNOPSIS
   Restart a SSRS report server in the farm if a failover is detected for the availability group
   that concerns the ReportServer database

   .DESCRIPTION
   You must configure two parameters
   $replica : replica name where the script is running

   .EXAMPLE
#>

$replica = "replica-name";

# Get the availability group that includes the ReportServer database
$aag_ssrs = Invoke-Sqlcmd "SELECT DISTINCT
                                  g.name as aag_name
                           FROM sys.availability_groups as g
                           JOIN sys.dm_hadr_database_replica_states as rs
                                  ON g.group_id = rs.group_id
                           JOIN sys.databases AS d
                                  ON rs.database_id = d.database_id
                           WHERE d.name = N'ReportServer';" -ServerInstance $replica;

$restart_state = 0;

# Detect if a failover has occurred
$aags_failover = Invoke-Sqlcmd "EXEC dbi_tools.dbo.dbi_alwayson_failover_detect" -ServerInstance $replica;

Foreach ($aag in $aags_failover)
{
    If (![string]::IsNullOrEmpty($aag.group_name) -and ![string]::IsNullOrEmpty($aag_ssrs.aag_name))
    {
        If ($aag_ssrs.aag_name -eq $aag.group_name)
        {
            #Write-Host "SSRS availability group failover detected";

            # If the concerned replica is the new primary --> get the SSRS report servers in the farm (Scale-Out)
            $ssrs_servers = Invoke-Sqlcmd "IF (SELECT sys.fn_hadr_is_primary_replica(N'ReportServer')) = 1
                                               SELECT MachineName FROM [ReportServer].[dbo].[Keys] WHERE MachineName IS NOT NULL" -ServerInstance $replica;

            If ($ssrs_servers.ItemArray.Count -gt 0)
            {
                # Delete all SSRS agent jobs (deleted subscriptions are not automatically replicated on the secondaries)
                Invoke-Sqlcmd "DECLARE @sql NVARCHAR(MAX) = N'';
                               SELECT @sql = @sql + 'EXEC msdb.dbo.sp_delete_job @job_name = N''' + j.name + '''' + CHAR(13)
                               FROM msdb.dbo.sysjobs AS j
                               JOIN msdb.dbo.syscategories AS c
                                      ON j.category_id = c.category_id
                               WHERE c.name = N'Report Server';
                               EXEC sp_executesql @sql;" -ServerInstance $replica;

                Foreach ($ssrs in $ssrs_servers)
                {
                    try
                    {
                        #Write-Host "Restart ssrs server : " $ssrs.MachineName;

                        # Restart one SSRS report server in the farm (scale-out) in order to recreate the subscription jobs
                        Get-Service -Computer $ssrs.MachineName -Name "ReportServer" | Restart-Service;
                        Break;
                    }
                    catch [Microsoft.PowerShell.Commands.ServiceCommandException]
                    {
                        $pass = 0;
                        #Write-Host "Error during the service restart of the report server " $ssrs.MachineName;
                    }
                }
            }
        }
    }
}

 

The T-SQL script for disabling / enabling SSRS subscription jobs

This T-SQL code will be used to either enable or disable the SSRS jobs depending on the replica context. It uses the new sys.fn_hadr_is_primary_replica() DMF to determine whether the current replica is the primary for the concerned database.

 

DECLARE @sql NVARCHAR(MAX) = '';
DECLARE @state BIT = COALESCE((SELECT sys.fn_hadr_is_primary_replica(N'ReportServer')), 0);

SELECT @sql = @sql + 'IF NOT EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE name = N''' + j.name + ''' AND enabled = ' + CAST(@state AS CHAR(1)) + ')
                          EXEC sp_update_job @job_name = N''' + j.name + ''', @enabled = ' + CAST(@state AS CHAR(1)) + '' + CHAR(13)
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.syscategories AS c
       ON j.category_id = c.category_id
WHERE c.name = N'Report Server';

--PRINT @sql;

EXEC sp_executesql @sql;

 

Let me show you the scenario:

1- First, we have two schedules on the SSRS report server as follows …

 

blog_57-_30_-_ssrs_schedules

 

… with 2 associated SQL Server agent jobs on the primary replica (SQL161). You may also notice that the jobs have not been replicated on the secondary (SQL162) yet.

From the SQL Server side, I created the ReportServerSubscriptionControl job that includes two steps with the above scripts. It’s up to you to configure the schedule.

 

blog_57-_4_-_config_sql_job

 

We begin with an initial configuration where SQL161 is the primary replica and SQL162 the secondary

 

blog_57-_31_-_aag_case_1

 

2- A failover has occurred and the availability group has switched from SQL161 to SQL162, which is the new primary. After the ReportServerSubscriptionControl job has been executed, the configuration has changed as shown in the next picture:

 

blog_57-_32_-_aag_case_2

 

 

SQL162 is the new primary replica and all of the SSRS jobs are enabled and synchronized on it. Likewise, SQL161 becomes the new secondary and all of the SSRS jobs are now disabled.

 

Note that in this blog post I voluntarily bypassed the security configuration, because it depends mainly on the customer context and the security rules in place. For instance, in my scenario, I used a proxy account for executing some steps in the ReportServerSubscriptionControl job and the PowerShell subsystem. This proxy account is tied to a Windows credential that has the minimum privileges required to deal with the information from the SQL Server instance itself and to restart the SSRS service remotely and safely.

Hope this helps, and please feel free to contact me if you have any questions.

 

 

 

 

 


Auto Refresh for ADF BC Cached LOV

Andrejus Baranovski - Thu, 2015-07-16 08:19
You can configure auto refresh for an ADF BC cached LOV and this works out of the box; no special coding is needed. My previous post describes how an ADF BC session cached LOV behaves - ADF BC Session Cached LOV - Understanding Expire Time. Auto refresh can be configured for both application and session caching. One important thing - auto refresh doesn't work when ADF BC is configured with disconnected mode (ADF BC Tuning with Do Connection Pooling and TXN Disconnect Level). I would guess this is related to the DB notification listener: it can't notify about DB changes when the DB connection is taken away from the AM.

Remember to configure the AM without disconnected mode. Auto refresh works for both application and session caching levels:


As a reminder - in order to use the shared LOV functionality, you must select the VO instance for the LOV from the shared AM configuration. See how it looks in the LOV definition wizard:


Auto refresh functionality is enabled with a single property - AutoRefresh = true. This can be set through VO general properties or directly in the VO source code:


Make sure Disconnect Application Module Upon Release option is not set:


I have overridden the VO method processChangeNotification for logging purposes; this method is invoked automatically for VOs set with AutoRefresh = true. No need to code anything here, I'm just logging the method invocation:


Let's do a test. First of all, check entries of Job list:


Go to the DB and update one of the entries. For example, change the Stock Clerk title and commit the change:
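For reference, the DB-side change can be as simple as the following (a sketch, assuming the standard HR schema and a hypothetical new title):

-- Update the job title directly in the database and commit,
-- so the change notification can fire (HR schema assumed)
update jobs
set    job_title = 'Stock Clerk (updated)'
where  job_id = 'ST_CLERK';

commit;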


Come back to the UI and press any button. This will trigger a new request, and the Jobs list shared VO will be notified about the DB change to refresh:


Check the list again, you should see changed entry for Stock Clerk:


In the log we can observe a call for change notification, it triggers shared LOV query re-execution and data re-fetch:


See recorded demonstration video, where I show refresh behaviour live:


Download sample application - ADFBCSharedSessionLOVApp.zip.

Oracle Supply Chain Management Cloud

OracleApps Epicenter - Thu, 2015-07-16 06:11
Cloud = a service-based computing model providing self-service, elasticity, shared resources, and pay-as-you-go pricing. New cloud computing technologies are enabling breakthrough innovations in supply chain management (SCM) applications delivered via software as a service (SaaS) models. To help companies support their complete quote-to-cash process in the Cloud, Oracle has expanded the Oracle Supply Chain Management Cloud with […]
Categories: APPS Blogs

Oracle APEX 5.0.1 now available

Patrick Wolf - Thu, 2015-07-16 05:48
Oracle Application Express 5.0.1 is now released and available for download. If you wish to download the full release of Oracle Application Express 5.0.1, you can get it from the Downloads page on OTN. If you have Oracle APEX 5.0.0 … Continue reading →
Categories: Development

How to solve accent sensitive problems?

Yann Neuhaus - Thu, 2015-07-16 02:45

Some days ago, one of my customers claimed that searching for “Muller” doesn’t return “Mueller” and “Müller” as well!
This is typically an issue due to the collation of the SQL Server database, but how can we resolve this problem?
The collation of the database is Latin1_General_CI_AS, so a case-insensitive and accent-sensitive collation.
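You can confirm the collations in play with a quick check like the following (a sketch; adjust the table name to your own schema):

-- Database collation and the collation of the searched column
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS database_collation;

SELECT name, collation_name
FROM   sys.columns
WHERE  object_id = OBJECT_ID('Person');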

If I run the following query:

Select name from Person where name like 'muller'

 

[query result]


I get only “Muller”, which is normal as I use an accent-sensitive collation, so u is not the same as ü.
Next, if I execute the same query by using an accent insensitive collation:

Select name from Person where name like 'muller' collate Latin1_GENERAL_CI_AI

I have as result:

[query result]

This time, “Muller” and “Müller” are retrieved as my collation is fully insensitive. For Latin1_General and AI (Accent Insensitive) collations, u = ü, o = ö, a = ä…
But I do not yet get “Mueller”, which is the way to write “Müller” without a ü in German.
So I decided to use a German collation to see if it could solve my issue by returning all three forms of “Muller”. In this phonebook collation, ü is sorted like ue, ä like ae…

Select name from Person where name like 'muller' collate German_PhoneBook_CI_AI

 

[query result]


As expected, I received just “Muller”, which is quite normal as “Muller” in German is not the same as “Müller”…
Let’s try with:

Select name from Person where name like 'müller' collate German_PhoneBook_CI_AI

 

[query result]


This result is consistent with German usage, where “Mueller” and “Müller” are the same. But I still cannot get all three forms of “Muller”…

Getting the result expected by my customer seems like an impossible task by just changing the column collation.

Another possibility is to use the SOUNDEX string function. This function converts an alphanumeric string to a four-character code based on how the string sounds when spoken. So, let’s try with these queries:

select * from Person where soundex(Name) like soundex('muller')

select soundex('muller'), soundex('müller'), soundex('mueller')

 

[query result]


This string function was able to retrieve all forms of “Muller” without any collation change. I saw that all versions of “Muller” are converted to the same SOUNDEX code. The only problem is that index usage is not guaranteed with this function.
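One possible way around that limitation, not part of the original test and just a sketch with hypothetical column and index names, is to persist the SOUNDEX code in a computed column and index it:

-- Hypothetical workaround: persist the SOUNDEX code so it can be indexed and used in seeks
ALTER TABLE Person ADD name_soundex AS SOUNDEX(name) PERSISTED;

CREATE INDEX IX_Person_name_soundex ON Person (name_soundex);

-- The search can then target the indexed column
SELECT name FROM Person WHERE name_soundex = SOUNDEX('muller');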

Finally, I took a look at the FullText catalog feature, which can be accent-insensitive, together with a FullText index using the German language:

CREATE FULLTEXT CATALOG FTSCatalog WITH ACCENT_SENSITIVITY=OFF AS DEFAULT

CREATE FULLTEXT INDEX ON Person(name LANGUAGE German) KEY INDEX PK_Person

GO

 

Afterwards, I used the following queries, based on the CONTAINS clause and the FORMSOF predicate with the INFLECTIONAL option, for my different forms of “Muller”:

Select name from Person where contains(name,'FORMSOF(INFLECTIONAL,"muller")')

Select name from Person where contains(name,'FORMSOF(INFLECTIONAL,"müller")')

Select name from Person where contains(name,'FORMSOF(INFLECTIONAL,"mueller")')

 

[query result]


As expected, the result was consistent with the previous ones, as we don’t get all forms when searching for “muller”. On the contrary, searching for “müller” or “Mueller” gives me all the results.

In conclusion, the FullText capabilities of SQL Server are certainly the best solution, as they will also be faster with a huge number of rows and avoid changing the collation, which can sometimes be a real nightmare. But we have to search for “Müller” instead of “muller” to retrieve all the expected results.

Unizin Perspective: Personalized learning’s existence and distance education experience

Michael Feldstein - Wed, 2015-07-15 18:46

By Phil HillMore Posts (346)

By reading the Unizin pitch for the State University System of Florida shared yesterday, we can see quite a few claims about the (potential) benefits to be provided by the consortium. “Make sure that the universities were not cut out of [distance ed] process”; “Secure our foothold in the digital industry”; “Promote greater control and influence over the digital learning ecosystem”; Provide “access to the Canvas LMS at the Unizin price”; Provide “access to tools under development, including a Learning Object Repository and Learning Analytics”; Provide “potential for cooperative relationships to ‘share’ digital instruction within and across the consortium”.

I want to pick up on University of Florida provost Joe Glover’s further comment on Learning Analytics, however.

The third goal for Unizin is to acquire, create, or develop learning analytics. Some of the learning management systems have a rather primitive form of learning analytics. Unizin will build on what they have, and this will go from very mechanical types of learning analytics in terms of monitoring student progress and enabling intrusive advising and tutoring; all the way up to personalized learning, which is something that really does not exist yet but is one of the objectives of Unizin.

Personalized learning “really does not exist yet”? You can argue that personalized learning as a field is evolving and mostly in pilot programs, or that it is poorly defined and understood, or that there are not yet credible studies independently reviewing the efficacy of this family of approaches. But you cannot accurately say that personalized learning “really does not exist yet”. And is Unizin claiming that the consortium is key to making personalized learning a reality? This seemed to be one of the arguments in the pitch.

If A Tree Falls In A Different Sector . . .

There are multiple examples of personalized learning in practice, particularly at community colleges to deal with developmental math challenges. I have written about the massive investment in the emporium approach at Austin Community College’s ACCelerator Lab.

Rather than a pilot program, which I have argued plagues higher ed and prevents diffusion of innovations, Austin CC has committed to A) a big program up front (~700 students in the Fall 2014 inaugural semester and ~1,000 students in Spring 2015), yet B) they offer students the choice of traditional or emporium. To me, this offers the best of both worlds in allowing a big bet that doesn’t get caught in the “purgatory of pilots” while offering student choice.

We also shared through e-Literate TV an entire case study on Essex County College, showing their personalized learning approach.

In another e-Literate TV case study that does not focus on developmental math, we shared the personalized learning program at Empire State College, and they have been trying various personalized approaches for more than 40 years.

If A Tree Falls In A Non-Unizin Campus . . .

Personalized learning does exist, and Unizin schools could learn from the pioneers in this field. It would be wonderful if Unizin ends up helping to spread innovative teaching & learning practices within the research university community, but even there I would note that there are also some great examples in that group of schools (including at Arizona State University, UC Davis, and even at Unizin member Penn State). For that matter, the University of Florida would do well to travel two hours south and see the personalized learning programs in place at the University of Central Florida.

If this “consortium of large public universities that intends to secure its niche in the evolving digital ecosystem” means that the schools want to learn primarily among themselves, then Unizin will be falling prey to the same mistake that the large MOOC providers made – ignoring the rich history of innovation in the field and thinking they are creating something without precedent leveraging their unique insight.

If A Tree Falls In A Distance Forest . . .

While Unizin has never claimed to be focused only on distance education, Glover does bring up the topic twice as the core of his argument.

That is a situation that we got ourselves in by not looking ahead to the future. We believe we are in a similar position with respect to distance learning at this point. [snip]

Every university in some sense runs a mom & pop operation in distance learning at this point, at least in comparison with large organizations like IBM and Pearson Learning that can bring hundreds of millions of dollars to the table. No university can afford to do that.

Let’s ignore the non sequitur about IBM for now. A few notes:

While there are larger non-profit online programs, it is not accurate to say that “every university in some sense runs a mom & pop operation”. It might be accurate based on the Unizin member institution experience, however. And the University of Florida did recently sign a long-term contract with Pearson Embanet to create its UF Online program, largely based on the school’s inexperience (beyond some masters programs) with fully online education.

In the graph below taken from the Fall 2013 IPEDS data, the Y axis is ratio of students taking exclusively DE courses (fully online programs), and the X axis is ratio of students taking some, but not all, DE courses (online courses within f2f program).

[Chart: DE Comparison, Unizin vs Public 4-year institutions (philonedtech, Tableau Public)]

We see that U Florida and Penn State have a fairly high percentage of students taking some online courses, Penn State World Campus is fully online (not sure whether World Campus is part of Unizin or not, but I included it to be safe), and Oregon State seems to have some fully online presence. But in general Unizin schools are not leaders in distance learning compared to other public 4-year universities. This is not a solid basis to think they have the answers on distance learning needs within the consortium.

Look Outward, Not Inward

In my mind, Unizin is looking the wrong direction. The money and focus thus far has been for the 10 (now 11) member institutions to look inward – form a club, talk amongst themselves, and figure out what should happen in a digital ecosystem. A different, more useful approach would be to look outward: get the club together and look beyond their own successes (e.g. Penn State World Campus), go visit schools that are at the forefront of digital education, invite them in to present and share, and learn from others.

What I’m suggesting is that Unizin should focus a lot more on participating in open communities and focus a lot less on forming an exclusive club. If the schools then set the consortium’s mission as leading instructional change within the member institutions, and forming a support community based on the similar profile of schools, then we might see real results.

The post Unizin Perspective: Personalized learning’s existence and distance education experience appeared first on e-Literate.

Were you at Alliance, Collaborate, Interact this year, or wished you were?

PeopleSoft Technology Blog - Wed, 2015-07-15 17:40

This year, as well as uploading my PDF presentation, I've uploaded a couple of additional files.

The one you may find interesting is a short form Security Check List.

You can find it here: 

This is a supplement to the Securing Your PeopleSoft Application Red Paper (it includes the link) and it covers a number of points I've discussed with customers over the years. I include most of the check list as slides in my session but the PDF is an expanded set. The check list also contains a number of useful links.

In the discussions with customers we frequently find there are topics they have overlooked because they don't appear directly related to PeopleSoft security, but they are part of the overall infrastructure security and are often managed by people outside of the PeopleSoft team. As teams are reduced in size, it's ever more important that you build collaborative, virtual teams in the rest of your organization. I hope the check list will also provide the conversation starters to help build those virtual teams.

If you think some of the points are  topics by themselves, let me know and I can work on building out the information.

I appreciate any and all feedback. 

How to become/learn Oracle Apps DBA R12.2 : Part I

Online Apps DBA - Wed, 2015-07-15 16:10

I started this blog 9 years back with the first post being How to become Oracle Apps DBA (back then it was 11i), and with 225 comments this is still the most common question I get by mail or on this blog.

We are starting our new batch of Oracle Apps DBA training (R12.2) on August 8, 2015, and the first thing we cover is the Architecture of Oracle E-Business Suite. If you are learning (getting trained on) Oracle E-Business Suite on your own, then the first thing you should learn is the architecture of Oracle Apps.

As shown below, Oracle E-Business Suite has a three-tier architecture:

a) Database Tier : the Oracle Database, where the data resides
b) Application Tier : the application and web servers, where the business logic resides
c) Client Tier : the browser-based client from which end users access the application

[Diagram: Oracle E-Business Suite three-tier architecture]

 

Note: Up to Oracle E-Business Suite R12.1 (prior versions include 12.0 & 11i), the Application Tier uses Oracle Application Server 10g (or 9i for some versions of 11i). From Oracle E-Business Suite 12.2 onwards, the Application Tier is deployed on Oracle WebLogic Server as the application server.

 

[Diagram: R12.2 Application Tier architecture]

You can get more information on the architecture of Oracle E-Business Suite in the Concepts Guide, or learn it from our expert team by registering for the Oracle Apps DBA Training (starting on 8th August), where Day 1 covers:

Architecture and File System
  • Architecture of R12.2
  • Changes in Oracle Apps from previous version
  • Requirement/Hardware Sizing Guidelines
  • File System Overview
  • Benefit of New Architecture
  • File System including Changes from previous version
Architecture and File System (Lab Activity)
  • Provide one working instance of R12.2 to the Trainee with Front end and backend access
  • Get comfortable with the Terminology/File system/Environment Variables
  • Understand the Architecture via Navigation

 

Get 200 USD off by registering before 20th July, using code A2OFF at checkout. (We limit seats per batch, so register early to avoid disappointment.)

 

Related Posts for 12.2 New Features
  1. ADOP : Online Patching in Oracle Apps (E-Business Suite) R12 12.2 : Apps DBA’s Must Read
  2. How to become/learn Oracle Apps DBA R12.2 : Part I

The post How to become/learn Oracle Apps DBA R12.2 : Part I appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Connecting to DBaaS, did you know this trick?

Kris Rice - Wed, 2015-07-15 15:10
SSH Tunneling Trick

The new command line is a must try, says 10 out of 10 people that built it. The tool has SSH tunneling of ports built in, as described by Barry. This means you can script opening your SSH tunnel from the command line and run SQL very quickly. Here's the one I used recently at Kscope15. Now the trick is that once this port is forwarded, any tool can now use it. In case