
Feed aggregator

CREATE USER and ALTER USER – changing passwords and a New Year’s Resolution

The Anti-Kyte - Sun, 2015-01-11 08:39

Monday morning. The first day back at work in the New Year.
Still groggy, having been awoken from my slumber at the insistence of my Darth Vader Lego Alarm Clock, I stagger downstairs in search of coffee.
The clock was a Christmas present from Deb. Whilst clinking around the kitchen, I wonder whether it was intended as a subtle reminder of how she likes her coffee with only a little milk. Yes, she prefers it on the Dark Side.
Then I remember. I usually avoid New Year’s resolutions. I find them to be not unlike a cheap plastic toy at Christmas – there in the morning and then broken by the first afternoon.
This year however, is an exception.

In a recent post about APEX Authentication Schemes, I went to great lengths to ensure that a dynamic SQL statement to reset a user's password was safe from the possibility of injection.
Fortunately, Jeff Kemp took the time to point out a couple of issues with my approach.
As a result, this year, my resolution is to :
READ THE MANUAL.

What follows is the result of keeping this resolution (so far, at least)…

Setting a Password with CREATE USER and ALTER USER

To start with, we’re going to forget about any PL/SQL context for this exercise and simply focus on the ALTER USER command when used to change a user’s password.

First of all, we need a user…

create user marvin identified by 42!
/

create user marvin identified by 42!
                                   *
ERROR at line 1:
ORA-00922: missing or invalid option


That’s a bit surprising. There’s no password verify function on the DEFAULT profile in the database :

select resource_name, limit
from dba_profiles
where resource_type = 'PASSWORD'
and resource_name = 'PASSWORD_VERIFY_FUNCTION'
and profile = 'DEFAULT'
/

RESOURCE_NAME                    LIMIT
-------------------------------- ----------------------------------------
PASSWORD_VERIFY_FUNCTION         NULL

…and even if there were – ORA-00922 ?

At this point, much head-scratching can be saved if we follow Jeff’s recommended link to the relevant documentation.

From this we can see that the password has the same rules applied to it as a database object. So…

  • it must start with a letter
  • …and can’t contain spaces…
  • …or characters other than alphanumeric, $, # or _

…unless…

create user marvin identified by "42!"
/

User created.

Yes, double-quotes around the password will allow other characters to be used, just as with database objects.
So, does the user need to provide the double-quotes when they connect ?

grant create session to marvin
/

…now….

connect marvin/42!@mydb

…works fine. No need to include the double-quotes when connecting.
If the password contains spaces however, then the double-quotes are required :

alter user marvin identified by "Feeling very depressed"
/

conn marvin/Feeling very depressed@XE
/
SP2-0306: Invalid option.
Usage: CONN[ECT] [{logon|/|proxy} [AS {SYSDBA|SYSOPER|SYSASM}] [edition=value]]
where <logon> ::= <username>[/<password>][@<connect_identifier>]
      <proxy> ::= <proxyuser>[<username>][/<password>][@<connect_identifier>]

conn marvin/"Feeling very depressed"@XE
/
Connected.
A Sensible Change Password Procedure

We now come back to the question of how to provide the functionality for a user to change their password by means of a PL/SQL procedure. For the sake of simplicity, we’re going to assume that there is no password verify function in place and therefore the old password does not need to be provided :

create or replace procedure change_pwd_pr
(
    i_new_pwd in varchar2
)
    authid current_user
is
begin
    execute immediate 'alter user '||user||' identified by '||CHR(34)||i_new_pwd||CHR(34);
end;
/

Now, we are dropping user input directly into a dynamic SQL statement, something that rings alarm bells in terms of the potential for injection.
There would appear to be two techniques that are commonly used to inject such code.

Here, however, we’re only concatenating a password. This is not a database object per se. Once it is set, it will be referenced only as the password hash by Oracle’s internal authentication process and will therefore not be useful as a blind injection string.
Additionally, by enclosing the password in double-quotes, it is simply treated as a string within the dynamic SQL statement itself.
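
To see why, consider the attempted injection in test 2 below : when MARVIN calls the procedure, the statement that EXECUTE IMMEDIATE ends up running is something like this, with the injection attempt simply becoming part of the quoted password string :

alter user MARVIN identified by "silly; grant dba to marvin"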

Let’s put this to the test.

First, we need to make the procedure available to MARVIN :

grant execute on change_pwd_pr to marvin;

Now let’s run some tests. Connect as MARVIN and …

set serveroutput on size unlimited
declare
    
    function test_pwd_fn( i_test_no in pls_integer, i_pwd in varchar2)
        return varchar2
    is
    begin
        mike.change_pwd_pr( i_new_pwd => i_pwd);
        return ('Test '||i_test_no||' - Password Changed.');
    exception
        when others then
            return('Test '||i_test_no||'- Password Change Failed : '||substr(sqlerrm,1,100));
    end test_pwd_fn;
begin
    dbms_output.put_line(test_pwd_fn( 1, '42!'));
    dbms_output.put_line(test_pwd_fn( 2, 'silly'||'; grant dba to marvin'));
    dbms_output.put_line(test_pwd_fn( 3, q'[I want you to know that I'm feeling very depressed]'));
    dbms_output.put_line(test_pwd_fn( 4, 'How awful!'));
end;
/    

Run this as Marvin and we get :

SQL> @tests.sql
Test 1 - Password Changed.
Test 2 - Password Changed.
Test 3- Password Change Failed : ORA-00972: identifier is too long
Test 4 - Password Changed.

PL/SQL procedure successfully completed.

SQL> 

If we now check, we can confirm that our attempt at injection in test 2 was not successful :

select granted_role 
from user_role_privs
/

no rows selected

SQL> 
Conclusion

Thanks to Jeff, I will henceforth be moderating my paranoia when dealing with programmatic password resets.
Oh yes, and I’ll try to remember to read the manual.


Filed under: Oracle, PL/SQL, SQL Tagged: alter user, alter user identified by, create user, create user identified by, dba_profiles, escape new password with double quotes, identified by, ORA-00922, ORA-00972, oracle password rules, password verify function

Using Oracle Big Data SQL to Add Dimensions and Attributes to Hadoop Reporting

Rittman Mead Consulting - Sun, 2015-01-11 06:18

In a previous post I looked at using Oracle’s new Big Data SQL product with ODI12c, where I used Big Data SQL to expose two Hive tables as Oracle external tables, and then join them using the BETWEEN operator, something that’s not possible with regular HiveQL. In this post I’m going to look at using Oracle Big Data SQL with OBIEE11g, to enable reporting against Hive tables without the need to use Hive ODBC drivers and to bring in reference data without having to stage it in Hive tables in the Hadoop cluster.

In this example I’ve got some webserver log activity from the Rittman Mead Blog stored as a Hive table in Hadoop, which in its raw form only has a limited amount of descriptive data and wouldn’t be all that useful to users reporting against it using OBIEE. Here’s the contents of the Hive table as displayed via SQL*Developer:


When I bring this table into OBIEE, I really want to add details of the country that each user is visiting from, and also details of the category that each post referenced in the webserver logs belongs to. Tables for these reference data items can be found in an accompanying Oracle database, like this:


The idea then is to create an ORACLE_HIVE external table over the Hive table containing the log activity, and then import all of these tables into the OBIEE RPD as regular Oracle tables. Back in SQL*Developer, connected to the database that has the Big Data SQL link to the Hadoop cluster set up, I create the external table using the new ORACLE_HIVE external table access driver:

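As a rough sketch, the external table DDL looks something like this (the column names and datatypes are assumptions based on the webserver log data, and the ACCESS PARAMETERS syntax should be checked against the Big Data SQL documentation):

create table bda_output.access_per_post_exttab
( host          varchar2(100)
, request_date  varchar2(40)
, post_id       number
)
organization external
( type oracle_hive
  default directory default_dir
  access parameters
  ( com.oracle.bigdata.tablename=default.access_per_post )
)
reject limit unlimited;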

And now with the Hive table exposed as the Oracle external table BDA_OUTPUT.ACCESS_PER_POST_EXTTAB, I can import all four tables into the OBIEE repository.


I can now create joins across the two Oracle schemas and four tables:


and then create a business model and presentation model to define a simple star schema against the combined dataset:


Once the RPD is saved and made available to the Presentation layer, I can now go and create some simple reports against the Hive and Oracle tables, with the Big Data SQL feature retrieving the Hive data using SmartScan technology running directly on the Hadoop cluster – bypassing MapReduce, filtering and projecting the data on the cluster, and returning just the results dataset back to the Exadata server running the Oracle SQL query.


In the previous ODI12c and Big Data SQL posting, I used the Big Data SQL feature to enable a join between the Hive table and a table containing IP address range lookups using the BETWEEN operator, so that I could return the country name for each visitor to the website. I can do a similar thing with OBIEE, by first recreating the main incoming fact table source as a view over the ORACLE_HIVE external table and adding an IP integer calculation that I can then use for the join to the IP range lookup table (and also take the opportunity to convert the log-format date string into a proper Oracle DATE datatype):

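A rough sketch of such a view, assuming the external table columns from the earlier sketch (the date format mask and the REGEXP-based IP-to-integer conversion are illustrative only):

create or replace view bda_output.access_per_post_vw as
select a.post_id,
       to_date(substr(a.request_date,1,20),'DD/Mon/YYYY:HH24:MI:SS') as access_date,
       to_number(regexp_substr(a.host,'[^.]+',1,1)) * 16777216 +
       to_number(regexp_substr(a.host,'[^.]+',1,2)) * 65536 +
       to_number(regexp_substr(a.host,'[^.]+',1,3)) * 256 +
       to_number(regexp_substr(a.host,'[^.]+',1,4))                  as ip_integer
from   bda_output.access_per_post_exttab a;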

and then using that to join to a new table I’ve imported from the BLOG_REFDATA Oracle schema that contains the IP range lookups:


Now I can add country as a dimension, and create reports that break down site visits by country of access.


Similarly, I can break the date column in the view over the Hive external table out into its own logical dimension table, and then create some reports to show site access over time.


and with the final RPD looking like this:


If you’re interested in reading more about Oracle Big Data SQL I also covered it earlier on the blog around the launch date, with this post introducing the feature and another looking at how it extends Oracle security over your Hadoop cluster.

Categories: BI & Warehousing

EBS 12.2 Blocking a Concurrent Program while Online Patching

Senthil Rajendran - Sat, 2015-01-10 22:36
Will there be a need to block a concurrent program while Online Patching is in progress in EBS 12.2? If so, then here are the steps.

  • Use the Concurrent Program window or page to edit your concurrent program definition.
  • Select the Incompatibilities button to open the Incompatible Programs window.
  • Add a new global incompatibility rule for your program with the following program:
    • Application Name: Applications DBA
    • Program Name: Online Patching In Progress (internal name: ADZDPATCH) concurrent program

    How To Start a Case in Oracle Adaptive Case Management 12c

    Andrejus Baranovski - Sat, 2015-01-10 03:43
    A blog reader asked me to describe how to start a new case in Oracle ACM 12c. You can read my previous blog post on the ACM 12c topic - Adaptive Case Management 12c and ADF Human Tasks. There are multiple ways to start a case, depending on whether you just want to test the case or really use it. I would recommend using SoapUI to test the case. In a real scenario, the case will most likely be started from a third-party web service or through the ACM 12c Java API. Here I would like to describe how you could use SoapUI to test an ACM process during development.

    Once the ACM 12c application is deployed, you can open it in EM. There is an option to test the deployed composite and invoke it through the EM Web Service tester. In the example below you can see the startCase operation selected and the payload structure displayed. Unfortunately, testing an ACM process through EM doesn't work well - the payload structure is very confusing and it usually fails with errors about missing mandatory attributes:


    Instead I would recommend using SoapUI to test the ACM 12c composite. You can download and use it free of charge. Start the tool and choose to create a new SOAP Project:


    We need WSDL URL to define a SOAP project in SoapUI. You can copy WSDL URL from EM test screen:


    Paste it into the SoapUI dialog (Initial WSDL field) and the Project Name will be filled in automatically (keep the Create sample requests for all operations? option selected):


    SoapUI will fetch all operations from the service and a new SoapUI project will be generated:


    In order to start a new case, expand the startCase operation and open Request 1. The request will be pre-filled with the default payload structure (similar to what we saw in EM):


    Save the SoapUI project before making any changes to the request structure. I would suggest saving it under the ACM JDEV application structure - the project is saved into a single XML file:


    Change the request to include the custom data payload (you can find this XML code inside the sample application available for download) and invoke the service. If the request is successful, you should see the case number returned in the response:


    A case with the same number should be available in BPM Workspace:


    Here you can download sample application with SoapUI project (XML file) included - HotelBookingProcessing_v1.zip.

    Migration

    DBMS2 - Sat, 2015-01-10 00:45

    There is much confusion about migration, by which I mean applications or investment being moved from one “platform” technology — hardware, operating system, DBMS, Hadoop, appliance, cluster, cloud, etc. — to another. Let’s sort some of that out. For starters:

    • There are several fundamentally different kinds of “migration”.
      • You can re-host an existing application.
      • You can replace an existing application with another one that does similar (and hopefully also new) things. This new application may be on a different platform than the old one.
      • You can build or buy a wholly new application.
      • There’s also the in-between case in which you extend an old application with significant new capabilities — which may not be well-suited for the existing platform.
    • Motives for migration generally fall into a few buckets. The main ones are:
      • You want to use a new app, and it only runs on certain platforms.
      • The new platform may be cheaper to buy, rent or lease.
      • The new platform may have lower operating costs in other ways, such as administration.
      • Your employees may like the new platform’s “cool” aspect. (If the employee is sufficiently high-ranking, substitute “strategic” for “cool”.)
    • Different apps may be much easier or harder to re-host. At two extremes:
      • It can be forbiddingly difficult to re-host an OLTP (OnLine Transaction Processing) app that is heavily tuned, tightly integrated with your other apps, and built using your DBMS vendor’s proprietary stored-procedure language.
      • It might be trivial to migrate a few long-running SQL queries to a new engine, and pretty easy to handle the data connectivity part of the move as well.
    • Certain organizations, usually packaged software companies, design portability into their products from the get-go, with at least partial success.

    I mixed together true migration and new-app platforms in a post last year about DBMS architecture choices, when I wrote:

    • Sometimes something isn’t broken, and doesn’t need fixing.
    • Sometimes something is broken, and still doesn’t need fixing. Legacy decisions that you now regret may not be worth the trouble to change.
    • Sometimes — especially but not only at smaller enterprises — choices are made for you. If you operate on SaaS, plus perhaps some generic web hosting technology, the whole DBMS discussion may be moot.

    In particular, migration away from legacy DBMS raises many issues:

    • Feature incompatibility (especially in stored-procedure languages and/or other vendor-specific SQL).
    • Your staff’s programming and administrative skill-sets.
    • Your investment in DBMS-related tools.
    • Your supply of hockey tickets from the vendor’s salesman.

    Except for the first, those concerns can apply to new applications as well. So if you’re going to use something other than your enterprise-standard RDBMS, you need a good reason.

    I then argued that such reasons are likely to exist for NoSQL DBMS, but less commonly for NewSQL. My views on that haven’t changed in the interim.

    More generally, my pro-con thoughts on migration start:

    • Pure application re-hosting is rarely worthwhile. Migration risks and costs outweigh the benefits, except in a few cases, one of which is the migration of ELT (Extract/Load/Transform) from expensive analytic RDBMS to Hadoop.
    • Moving from in-house to co-located data centers can offer straightforward cost savings, because it’s not accompanied by much in the way of programming costs, risks, or delays. Hence Rackspace’s refocus on colo at the expense of cloud. (But it can be hard on your data center employees.)
    • Moving to an in-house cluster can be straightforward, and is common. VMware is the most famous such example. Exadata consolidation is another.
    • Much of new application/new functionality development is in areas where application lifespans are short — e.g. analytics, or customer-facing internet. Platform changes are then more practical as well.
    • New apps or app functionality often should and do go where the data already is. This is especially true in the case of cloud/colo/on-premises decisions. Whether it’s important in a single location may depend upon the challenges of data integration.

    I’m also often asked for predictions about migration. In light of the above, I’d say:

    • Successful DBMS aren’t going away.
      • OLTP workloads can usually be lost only so fast as applications are replaced, and that tends to be a slow process. Claims to the contrary are rarely persuasive.
      • Analytic DBMS can lose workloads more easily — but their remaining workloads often grow quickly, creating an offset.
    • A large fraction of new apps are up for grabs. Analytic applications go well on new data platforms. So do internet apps of many kinds. The underlying data for these apps often starts out in the cloud. SaaS (Software as a Service) is coming on strong. Etc.
    • I stand by my previous view that most computing will wind up on appliances, clusters or clouds.
    • New relational DBMS will be slow to capture old workloads, even if they are slathered with in-memory fairy dust.

    And for a final prediction — discussion of migration isn’t going to go away either. :)

    Categories: Other

    APEX and Font Awesome integration

    Dimitri Gielis - Fri, 2015-01-09 17:30
    In the previous post I talked about scalable vector images. In this post we will integrate Font Awesome into our APEX app, but the steps are the same if you pick another library.

    Step 1: we need to make the library available in our APEX application.

    The easiest way to make the library available in your entire application is to add the following line in your Page Template:
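
    A typical include looks something like this (the CDN path and version number are just an example; the library's Getting Started page gives the current one):

    <link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css">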


    How do I know it's that string, you might ask? If you go to the Getting Started section of the library you chose, it will tell you:

    You're actually done now with the integration... You'll find some examples on the site to see the properties you can use (size etc.). The list of icons is also available here.

    Here are some examples of what I use the icon fonts for:

    An image in a region

    Sometimes I just want an image in a region, so I included the paypal icon (fa-paypal) and gave it a bigger size (fa-5x).
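
    A sketch of the markup, using the classes mentioned above:

    <i class="fa fa-paypal fa-5x"></i>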


    You see the result here:

    Edit icon in a report

    During the creation of a report region you can select the icon you want; the options are either images or text. Change the Link Icon to the following:
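
    For example, something along these lines (the fa-pencil icon is just an assumption; pick whichever icon suits your report):

    <i class="fa fa-pencil fa-lg"></i>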


    Replacing the image will immediately give your report a fresher and more modern look.




    Icons in the Navigation List

    For my navigation list I mainly use the "Vertical List with Subtext and Icon" list template.
    By default it looks like this:

    We want to specify, for every list item, which icon to use. To do that, edit the list template and change the following in the Template Definition:

    #LIST_LABEL#

    to:





    Note: I changed it in both the Current and the Noncurrent List Template.

    Now we can define the icon we want to use in our List at Attribute 02:


    The final step to make the list look nice is to make the icon float left, so it sits next to the text, and to make it white on hover. As we use an icon font we can just change the color with some CSS (you can add that on your page or in your page template in the Inline CSS position).
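
    A minimal sketch of that CSS (the selectors are assumptions and depend on the markup of your list template):

    .fa { float: left; margin-right: 8px; }
    a:hover .fa { color: #fff; }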


    This is the result:

    You find the online example at https://www.apexrnd.be/ords/f?p=DGIELIS_BLOG:FONTAWESOME
    Categories: Development

    Flashback logging overhead: 'db file sequential read' on UNDO tablespace

    Yann Neuhaus - Fri, 2015-01-09 13:09

    In my previous post I reproduced an issue where a bulk insert into a GTT was generating too much undo. But the real case I encountered was worse: the insert was reading a lot of UNDO, and a lot of this undo was read from disk.

    Insert into GTT: bulk with APPEND_VALUES

    Yann Neuhaus - Fri, 2015-01-09 13:04

    This post is about a case I had where a GTT was generating too much undo. The effects were amplified by the fact that the database was in flashback logging mode (see Jonathan Lewis's answer to my question on the OTN forum about that), but that's for the next post.

    Toad world's Oracle Pro for January 2015 - truly honored

    Syed Jaffar - Fri, 2015-01-09 12:44
    Thank you Toad for honoring me with 'Toad World's Oracle Pro for January 2015'. I am truly touched by the honor.


    http://www.toadworld.com/platforms/oracle/default.aspx?Redirected=true

    I thank my family, all friends, colleagues, Oracle community, my readers, Chris and Steve Hilker from Toad for your support and encouragement.

    2015 AT&T Developer Summit & Hackathon

    Oracle AppsLab - Fri, 2015-01-09 12:16

    Editor’s Note: Noel did it! After competing in 2013 and 2014, he broke through and won a prize at the annual AT&T Developer Summit Hackathon (@attdeveloper). Congrats to the whole team.


    The whole team minus Anthony who was too sick to enjoy the moment.

     

    This year, Anthony (@anthonyslai), Raymond, Osvaldo (@vaini11a), Luis (@lsgaleana), Tony and I (@noelportugal) participated in the AT&T Developer Summit & Hackathon.

    From the beginning we realized we had too much brain power for just one project, so we decided to split the group. The first group would attempt to go for the first overall prize, and the second group would focus on just one accelerator prize from a sponsor.

    “Your Voice”  – First Overall Prize Entry:

    We knew we only had three minutes to present, and we had to leave an impression with the judges. So, we opted to build our solution around a specific use case with only one persona in mind. The use case was to use our voice to control AT&T Digital Life, AT&T WebRTC and the AT&T Speech APIs. The persona was an older gentleman going about his daily life around the house. We opted to use our latest toy, the Amazon Echo, as a way to interface with AT&T services. We went to work, found a couple of limitations, but in the end we overcame them and we felt pretty confident with our solution.

    Here is our use case:

    Tony is an 89 year old man and lives alone. He is pretty self-sufficient, but his daughter (Cindy) worries about his well-being. So she bought AT&T Digital Life to make sure her dad was safe and sound. Tony doesn’t want to be bothered to learn all the new mumbo-jumbo that comes with new technology, like a mobile app, a fancy remote, etc. Instead he prefers to use “Your Voice” to make sure all doors are locked, the garage door is closed, lights are off/on, etc. “Your Voice” also works as a personal assistant that can take care of reminding Tony of important things, reading email, initiating video calls (WebRTC), etc.

    So that’s it! We pre-programmed sequences to identify actions. When Tony said “Alexa, I’m tired. I’m going to bed,” the system started a series of actions, not just one. When Tony said “Alexa, call my grandson,” the system automatically started the projector and did a video conference.

    And finally we created a video introduction for our presentation:

     “Sensus” – Accelerator Entry:

    Raymond and Anthony decided to enter the “MediaTek Labs IoT and Wearables Challenge.” MediaTek (@MediaTekLabs) has a very nice multipurpose development board called LinkIt ONE that includes an array of connectivity options (BLE, Wifi, GSM, GPRS, etc.), plus access to a lot of plug-and-play sensors.

    They built a sensor station to monitor environmental safety metrics (temperature, fire hazard) and environmental health metrics (noise, dust, UV). They used Android Wear as the wearable platform to notify users when things happen, using an IFTTT model.

    Their solution was end-to-end, using only the MediaTek LinkIt ONE and its cloud platform. This gave them the edge, since it was a pure MediaTek solution, and its value became clear when the judges came to talk to them. Our room had A/C issues and constantly overheated, so we had to chase the maintenance guys quite often to get it fixed. Raymond talked to them about the opportunity to solve the issue by giving a wearable device to the head of maintenance so he would know what’s going on in the building just by “glancing.”

    “Sensus” got the first prize for the accelerator entry.  And as a team we could not be happier!

    Conclusion:

    Hackathons or developer challenges are a great way to work as a team, learn new technologies and push the limits of what can be accomplished in such a short time. As a team we have proven to be always ahead of the curve with our solutions, e.g. last year we built a Smart Holster for Law Enforcement, and if you have been following CES 2015, there are some companies doing similar implementations.

    There is no doubt that voice control will be huge this year. The technology is maturing at a very fast rate and we are bound to see a lot more great implementations.

    Finally, winning is not everything in these events. The journey is what matters. What we learned along the way. I find it very apt to have this competition in Las Vegas since this place is full of chance, probability and ultimately pure luck.

    Inserting into a table with potentially long rows

    Hemant K Chitale - Fri, 2015-01-09 09:25
    Note :  This post builds on blog posts by Nikolay Savvinov and Jonathan Lewis.


    Nikolay Savvinov and Jonathan Lewis have identified that when you have a table with a potentially large row size -- where the theoretical maximum row length exceeds the block size -- redo overheads are significantly greater for multi-row inserts.

    First, I demonstrate with a table with a small potential row size : a table with 1 number column and 3 columns with a total max length of 30 characters.  The actual data inserted is also very little.  The first row has the values (1,'1','1','1'), the second row has the values (2,'2','2','2') and so on up to (1000,'1000','1000','1000'), for a total of 1,000 rows :

    SQL> drop table hkc_test_small_row_size purge;
    drop table hkc_test_small_row_size purge
    *
    ERROR at line 1:
    ORA-00942: table or view does not exist


    SQL>
    SQL> create table hkc_test_small_row_size
    2 (id_column number, data_col_1 varchar2(10), data_col_2 varchar2(10), data_col_3 varchar2(10));

    Table created.

    SQL> create unique index hkc_tlrs_undx on hkc_test_small_row_size(id_column);

    Index created.

    SQL>
    SQL>
    SQL>
    SQL> select n.name, s.value At_Beginning
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME AT_BEGINNING
    ---------------------------------------------------------------- ------------
    redo entries 102
    redo size 23896

    SQL>
    SQL> insert into hkc_test_small_row_size
    2 select rownum+4000, to_char(rownum), to_char(rownum), to_char(rownum)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL> commit;

    Commit complete.

    SQL>
    SQL> select n.name, s.value Normal_Insert
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME NORMAL_INSERT
    ---------------------------------------------------------------- -------------
    redo entries 154
    redo size 92488

    SQL>
    SQL>
    SQL>
    SQL> insert /*+ APPEND */ into hkc_test_small_row_size
    2 select rownum, to_char(rownum), to_char(rownum), to_char(rownum)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL>
    SQL> commit;

    Commit complete.

    SQL>
    SQL> select n.name, s.value APPEND_Insert
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME APPEND_INSERT
    ---------------------------------------------------------------- -------------
    redo entries 252
    redo size 193396

    SQL>
    SQL>
    SQL> drop table hkc_test_small_row_size purge;

    Table dropped.

    SQL>

    Thus, we can see the redo generated for 1,000 of these small rows in the "SMALL_ROW_SIZE" table (taking the difference between successive statistics snapshots) :
    Simple Insert of 1,000 rows : 52 redo entries and 68,592 bytes.
    Direct Path (APPEND) Insert of 1,000 rows : 98 redo entries and 100,908 bytes.


    Next, I demonstrate with a table with a large potential row size -- exceeding the database block size.  Exactly the same data set is inserted -- the first row has the values (1,'1','1','1'), the second row has the values (2,'2','2','2') and so on up to (1000,'1000','1000','1000'), for a total of 1,000 rows.

    SQL> drop table hkc_test_large_row_size purge;
    drop table hkc_test_large_row_size purge
    *
    ERROR at line 1:
    ORA-00942: table or view does not exist


    SQL>
    SQL> create table hkc_test_large_row_size
    2 (id_column number, data_col_1 varchar2(4000), data_col_2 varchar2(4000), data_col_3 varchar2(4000));

    Table created.

    SQL> create unique index hkc_tlrs_undx on hkc_test_large_row_size(id_column);

    Index created.

    SQL>
    SQL>
    SQL>
    SQL> select n.name, s.value At_Beginning
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME AT_BEGINNING
    ---------------------------------------------------------------- ------------
    redo entries 102
    redo size 23900

    SQL>
    SQL>
    SQL> insert into hkc_test_large_row_size
    2 select rownum+4000, to_char(rownum), to_char(rownum), to_char(rownum)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL> commit;

    Commit complete.

    SQL>
    SQL> select n.name, s.value Normal_Insert
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME NORMAL_INSERT
    ---------------------------------------------------------------- -------------
    redo entries 2145
    redo size 526320

    SQL>
    SQL>
    SQL> insert /*+ APPEND */ into hkc_test_large_row_size
    2 select rownum, to_char(rownum), to_char(rownum), to_char(rownum)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL>
    SQL> commit;

    Commit complete.

    SQL>
    SQL> select n.name, s.value APPEND_Insert
    2 from v$statname n, v$sesstat s
    3 where n.statistic#=s.statistic#
    4 and n.name in ('redo entries', 'redo size')
    5 and s.sid = (select distinct sid from v$mystat)
    6 order by 1
    7 /

    NAME APPEND_INSERT
    ---------------------------------------------------------------- -------------
    redo entries 2243
    redo size 627228

    SQL>
    SQL>
    SQL> drop table hkc_test_large_row_size purge;

    Table dropped.

    SQL>

    Thus, we can see the redo generated for 1,000 of these (actually small) rows in the "LARGE_ROW_SIZE" table (again taking the difference between successive statistics snapshots) :
    Simple Insert of 1,000 rows : 2,043 redo entries and 502,420 bytes.
    Direct Path (APPEND) Insert of 1,000 rows : 98 redo entries and 100,908 bytes.

    Therefore, a simple (regular) Insert into such a table where the *potential* row size (not necessarily the actual row size) exceeds the block size is very expensive.  Apparently, the optimization to insert multiple rows into a single block with minimal redo is not invoked when Oracle thinks that the row may exceed the block size.  It switches to a row-by-row insert.  This is evident in the LARGE_ROW_SIZE case where 2,043 redo entries is more than 1000 rows + 1000 index entries.  The SMALL_ROW_SIZE had only 52 redo entries.
    Remember : This happens when
    (a) The potential row size exceeds the block size (irrespective of the actual row size which may be very few bytes)
    AND
    (b) a regular (non-Direct) Insert is used.
    In such cases, a Direct Path Insert works out better.  Although there are obvious downsides to doing Direct Path Inserts -- the HighWaterMark keeps extending, irrespective of Deletes that may have been issued.
    .
    .
    .


    Categories: DBA Blogs

    Securing Big Data - Part 4 - Not crying Wolf.

    Steve Jones - Fri, 2015-01-09 09:00
    In the first three parts of this I talked about how Securing Big Data is about layers, then about how you need to use the power of Big Data to secure Big Data, then how maths and machine learning help to identify what is reasonable and what is anomalous. The Target Credit Card hack highlights this problem.  Alerts were made, lights did flash.  The problem was that so many lights flashed and…
    Categories: Fusion Middleware

    count(*) – again !

    Jonathan Lewis - Fri, 2015-01-09 06:56

    Because you can never have enough of a good thing.

    Here’s a thought – The optimizer doesn’t treat all constants equally.  No explanations, just read the code – execution plans at the end:

    
    SQL> drop table t1 purge;
    SQL> create table t1 nologging as select * from all_objects;
    SQL> create bitmap index t1_b1 on t1(owner);
    
    SQL> alter session set statistics_level = all;
    
    SQL> set serveroutput off
    SQL> select count(*) from t1;
    SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
    
    SQL> select count(1) from t1;
    SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
    
    SQL> select count(-1) from t1;
    SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
    
    SQL> alter session set cursor_sharing = force;
    SQL> alter system flush shared_pool;
    
    SQL> select count(1) from t1;
    SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
    
    

    So, are you expecting to see the same results and performance from every single one of those queries ?

    
    select count(*) from t1
    ----------------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
    ----------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      1 |        |      1 |00:00:00.01 |       9 |      5 |
    |   1 |  SORT AGGREGATE               |       |      1 |      1 |      1 |00:00:00.01 |       9 |      5 |
    |   2 |   BITMAP CONVERSION COUNT     |       |      1 |  84499 |     31 |00:00:00.01 |       9 |      5 |
    |   3 |    BITMAP INDEX FAST FULL SCAN| T1_B1 |      1 |        |     31 |00:00:00.01 |       9 |      5 |
    ----------------------------------------------------------------------------------------------------------
    
    select count(1) from t1
    -------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      1 |        |      1 |00:00:00.01 |       9 |
    |   1 |  SORT AGGREGATE               |       |      1 |      1 |      1 |00:00:00.01 |       9 |
    |   2 |   BITMAP CONVERSION COUNT     |       |      1 |  84499 |     31 |00:00:00.01 |       9 |
    |   3 |    BITMAP INDEX FAST FULL SCAN| T1_B1 |      1 |        |     31 |00:00:00.01 |       9 |
    -------------------------------------------------------------------------------------------------
    
    select count(-1) from t1
    -------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      1 |        |      1 |00:00:00.43 |       9 |
    |   1 |  SORT AGGREGATE               |       |      1 |      1 |      1 |00:00:00.43 |       9 |
    |   2 |   BITMAP CONVERSION TO ROWIDS |       |      1 |  84499 |  84499 |00:00:00.22 |       9 |
    |   3 |    BITMAP INDEX FAST FULL SCAN| T1_B1 |      1 |        |     31 |00:00:00.01 |       9 |
    -------------------------------------------------------------------------------------------------
    
    SQL> alter session set cursor_sharing = force;
    SQL> alter system flush shared_pool;
    
    select count(1) from t1
    select count(:"SYS_B_0") from t1    -- effect of cursor-sharing
    -------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      1 |        |      1 |00:00:00.46 |       9 |
    |   1 |  SORT AGGREGATE               |       |      1 |      1 |      1 |00:00:00.46 |       9 |
    |   2 |   BITMAP CONVERSION TO ROWIDS |       |      1 |  84499 |  84499 |00:00:00.23 |       9 |
    |   3 |    BITMAP INDEX FAST FULL SCAN| T1_B1 |      1 |        |     31 |00:00:00.01 |       9 |
    -------------------------------------------------------------------------------------------------
    
    

    Check operation 2 in each plan – with the bitmap index in place there are two possible ways to count the rows referenced in the index – and one of them converts to rowids and does a lot more work.

    The only “real” threat in this set of examples, of course, is the bind variable one – there are times when count(*) WILL be faster than count(1). Having said that, there is a case where a redundant “conversion to rowids” IS a threat – and I’ll write that up some time in the near future.

    Trick question: when is 1+1 != 2 ?
    Silly answer: compare the plan for "select count (2) from t1" with the plan for "select count(1+1) from t1"

    Note: All tests above run on 12.1.0.2


    Oracle Audit Vault - Custom Reports and BI Publisher

    Custom reports can be created in Oracle Audit Vault using Oracle BI Publisher.  BI Publisher, through its BI Publisher Desktop add-in for Microsoft Word, can be used to modify existing reports or create new ones.

    For example, to create a new report that meets specific corporate or internal audit needs, download a standard Oracle Audit Vault report that is similar (Auditor -> Reports -> Custom Reports -> Uploaded Reports).  Click on the icon to download both the template and the report definition, and load both files into BI Publisher.

    Once complete, upload the report definition to the same location (Auditor -> Reports -> Custom Reports -> Uploaded Reports).

    If you have questions, please contact us at info@integrigy.com.


    Tags: Auditing, Oracle Audit Vault
    Categories: APPS Blogs, Security Blogs

    Je suis Charlie

    Frank van Bortel - Fri, 2015-01-09 03:34
    Je suis Charlie. Of course, I too am Charlie. - Frank

    Histograms Tidbits

    Pakistan's First Oracle Blog - Fri, 2015-01-09 03:00
    Make sure histograms exist on columns with uneven data distributions to ensure that the optimizer makes the best choice between indexes and table scans.




    For range scans on data that is not uniformly distributed, the optimizer’s decisions will be improved by the presence of a histogram.

    Histograms increase the accuracy of the optimizer’s cost calculations but increase the overhead of statistics collections. It’s usually worth creating histograms for columns where you believe the data will have an irregular distribution, and where the column is involved in WHERE or JOIN expressions.

    CREATING HISTOGRAMS WITH METHOD_OPT

    The METHOD_OPT option controls how column-level statistics, in particular histograms, are created. The default value is ‘FOR ALL COLUMNS SIZE AUTO’, which enables Oracle to choose the columns that will have a histogram collected and to set the appropriate histogram bucket size.
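
    As a sketch (the schema, table and column names are hypothetical), explicitly requesting a histogram on a known-skewed column might look like this:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'SCOTT',                       -- hypothetical schema
        tabname    => 'SALES',                       -- hypothetical table
        method_opt => 'FOR COLUMNS SIZE 254 STATUS'  -- histogram on the skewed STATUS column
      );
    END;
    /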
    Categories: DBA Blogs

    The reasons behind nation-state hackers

    Chris Foot - Fri, 2015-01-09 01:15

    There are the archetypal teenage hackers who advance their reputations by accessing restricted networks just for the thrill of it, and then there are cyberespionage masters who target the databases of nationwide financial enterprises and public entities. 

    When one thinks of the latter, it's easy to imagine a character out of a modern spy movie. However, it's difficult to identify the exact reasons why a nation would use hackers to conduct covert cyber-operations on another country, or large businesses operating within a state of interest. 

    Why nations infiltrate large banks 
    According to BankInfoSecurity contributor Eric Chabrow, an attack on a major financial institution is usually conducted by a nation-state that is looking to obtain intelligence for the purpose of protecting or improving its economy. Bankers, analysts and economists working in the finance industry all have insight into how certain agreements, global shifts and other factors will affect the condition of national markets. 

    Surprisingly enough, hackers contracted by a nation-state to infiltrate an organization such as JPMorgan Chase, for example, are likely not interested in stealing money or personally identifiable information. Philip Casesa, director of IT service operations at IT security education and certification company (ISC)2, agrees with this viewpoint. 

    "A government-sponsored actor doesn't have the same goals as a crime organization – the objective is much bigger than that," said Casesa, as quoted by Chabrow. "It isn't stealing dollars – it's manipulating world politics by shifting the economic balance of power." 

    Goals are elusive 
    One of the reasons why many people opt to speculate about the intentions of hackers acting on behalf of nation-states is that, sometimes, speculation is all that can be done. In a way, only organizations such as the U.S. National Security Agency and the Federal Bureau of Investigation can identify the concrete intentions behind specific attacks. 

    Yet there are times when journalists can scrutinize a clear pattern. Dark Reading noted that there have been a number of cases in which intellectual property owned by a person or organization within the U.S. was stolen by Chinese operatives. Think of the impact the automobile had on the 20th-century economy. If China could gain intelligence regarding a new invention that could impact the global market in such a way, it would establish itself as an economic superpower. 

    All things considered, this particular topic deserves extensive coverage – the kind often found in a college dissertation. While a blog can provide a glance, a book can provide understanding. 

    The post The reasons behind nation-state hackers appeared first on Remote DBA Experts.

    APEX and Scalable Vector Icons (Icon Fonts)

    Dimitri Gielis - Thu, 2015-01-08 17:30
    For a couple of years now web designers and developers haven't been using image icons anymore; instead we have moved to scalable vector icons.

    Before you had to create different images for the different formats and colours you wanted. Then to gain performance we created one big image with all those smaller images in it (called a sprite).
    Next with some CSS we showed a part of the bigger image. A hassle...

    In fact, you can see the evolution of icon images perfectly in APEX too. If you go to your images folder you will see many .gif files, all different icons:


    In later releases (up to APEX 4.2) APEX moved to sprites (see /images/themes/theme_25/images/); for example, you can see the Theme 25 sprite here.


    The scalable vector icons (or icon fonts) solve the issues the image icons had. With the vector icons you can define the color, size, background etc. all with CSS. It makes building responsive applications so much easier: the icons stay fresh and crisp regardless of the size of the device, and there's next to no overhead. This is exactly what APEX 5.0 will bring to the table: nice scalable vector icons, handcrafted and made pixel perfect by the Oracle team.

    Image from Font Awesome 
    In fact you don't have to wait till APEX 5.0 is there, you can add icon fonts to your own APEX application today.

    There are many icon fonts out there, but here are some I like:

    • Font Awesome - One of the most popular ones and probably included in APEX 5.0 too (next to Oracle's own library)
    • Glyphicons - This library is for example used by Bootstrap
    • Foundation Icon Fonts 3 - Used by the Foundation framework
    • NounProject - I saw the founder at the TEDx conference and was very intrigued by it - they build a visual language of icons anyone can understand
    • IcoMoon - This is more than an icon library; you can actually create your own library with their app. When my wife creates icons in Illustrator, IcoMoon transforms them into a font.
    • Fontello - You can build your own library based on other libraries on the net (the site shows many other popular font libraries I didn't put above), so if you can't find your icon here, you probably want to build it yourself :)

    In the next post I'll show you how to integrate one of those in your APEX app.
    Categories: Development

    Rittman Mead BI Forum 2015 Call for Papers Now Open – Closes on Jan 18th 2015

    Rittman Mead Consulting - Thu, 2015-01-08 16:05

    The Call for Papers for the Rittman Mead BI Forum 2015 is currently open, with abstract submissions open to January 18th 2015. As in previous years the BI Forum will run over consecutive weeks in Brighton, UK and Atlanta, GA, with the provisional dates and venues as below:

    • Brighton, UK : Hotel Seattle, Brighton, UK : May 6th – 8th 2015
    • Atlanta, GA : Renaissance Atlanta Midtown Hotel, Atlanta, USA : May 13th-15th 2015

    Now in its seventh year, the Rittman Mead BI Forum is the only conference dedicated entirely to Oracle Business Intelligence, Oracle Business Analytics and the technologies and processes that support it – data warehousing, data analysis, data visualisation, big data and OLAP analysis. We’re looking for sessions around tips & techniques, project case-studies and success stories, and sessions where you’ve taken Oracle’s BI products and used them in new and innovative ways. Each year we select around eight-to-ten speakers for each event along with keynote speakers and a masterclass session, with speaker choices driven by attendee votes at the end of January, and editorial input from myself, Jon Mead, Charles Elliott and Jordan Meyer.


    Last year we had a big focus on cloud, and a masterclass and several sessions on bringing Hadoop and big data to the world of OBIEE. This year we’re interested in project stories and experiences around cloud and Hadoop, and we’re keen to hear about any Oracle BI Apps 11g implementations or migrations from the earlier 7.9.x releases. Getting back to basics we’re always interested in sessions around OBIEE, Essbase and data warehouse data modelling, and we’d particularly like to encourage session abstracts on data visualization, BI project methodologies and the incorporation of unstructured, semi-structured and external (public) data sources into your BI dashboards. For an idea of the types of presentations that have been selected in the past, check out the BI Forum 2014, 2013 and 2012 homepages, or feel free to get in touch via email at mark.rittman@rittmanmead.com

    The Call for Papers entry form is here, and we’re looking for speakers for Brighton, Atlanta, or both venues if you can speak at both. All sessions this year will be 45 minutes long, and we’ll be publishing submissions and inviting potential attendees to vote on their favourite sessions towards the end of January. Other than that – have a think about abstract ideas now, and make sure you get them in by January 18th 2015 – just over a week from now!

    Categories: BI & Warehousing

    Oracle GoldenGate Processes – Part 4 – Replicat

    DBASolved - Thu, 2015-01-08 15:00

    The replicat process is the apply process within the Oracle GoldenGate environment.  The replicat is responsible for reading the remote trail files and applying the data found in chronological order.  This ensures that the data is applied in the same order it was captured.

    Until recently there was only one version of a replicat: the classic version.  As of 12.1.2.0, there are now three distinct versions of a replicat.  These replicat types are:

    • Classic Mode
    • Coordinated Mode
    • Integrated Mode

    Each of these modes provides some benefit depending on the database being applied to.  Oracle is pushing everyone towards the more integrated approach; however, you have to be on database version 11.2.0.4 at a minimum.

    Configuring the replicat process is similar to configuring the extract and data pump processes.

    Adding a Replicat:

    From GGSCI (classic):

    $ cd $OGG_HOME
    $ ./ggsci
    GGSCI> add replicat REP, exttrail ./dirdat/rt

    Note:  The add command assumes that you already have a checkpoint table configured in the target environment (a quick sketch of creating one is shown below).
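
    If you still need to create one, a quick sketch (using the ggate schema from the parameter file below; the checkpoint table name is an assumption):

    $ cd $OGG_HOME
    $ ./ggsci
    GGSCI> dblogin userid ggate, password ggate
    GGSCI> add checkpointtable ggate.ogg_checkpoint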

    Edit Replicat parameter file:

    From GGSCI:

    $ cd $OGG_HOME
    $ ./ggsci
    GGSCI> edit params REP

    From Command Line:

    $ cd $OGG_HOME
    $ cd ./dirprm
    $ vi REP.prm

    Example of Replicat Parameter file:

    REPLICAT REP
    SETENV (ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_3")
    SETENV (ORACLE_SID="orcl")
    USERID ggate, PASSWORD ggate
    ASSUMETARGETDEFS
    DISCARDFILE ./dirrpt/REP.dsc, append, megabytes 500
    WILDCARDRESOLVE IMMEDIATE
    MAP SCOTT.*, TARGET SCOTT.*;

    Start the Replicat process:

    Depending on whether you are starting the replicat for the first time or not, the way you start it is similar yet slightly different.

    To start the Replicat after an initial load:

    $ cd $OGG_HOME
    $ ./ggsci
    GGSCI> start replicat REP, atcsn [ current_scn ]

    Note: The current_scn needs to be obtained from the source database prior to doing the initial load of data to the target.  This ensures the consistency of the data and provides a starting point for the replicat to start applying data from.
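
    For example, the SCN can be captured on the source with a simple query:

    SQL> select current_scn from v$database;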

    To start Replicat normally:

    $ cd $OGG_HOME
    $ ./ggsci
    GGSCI> start replicat REP

    Stop the Replicat process:

    Stop replicat normally:

    $ cd $OGG_HOME
    $ ./ggsci
    GGSCI> stop replicat REP

    Enjoy!

    about.me: http://about.me/dbasolved


    Filed under: Golden Gate
    Categories: DBA Blogs