
Laurent Schneider

Oracle Certified Master

permission issue due to one role

6 hours 34 min ago

Most permission issues are due to a missing role or privilege.

But in the following test case, you need to revoke a right in order to gain privileges.


create table tt(x number);
create view v as select * from tt;
create role rw;
grant all on v to rw;

I’ve created a read-write role on a view. The owner of the role is the DBA, but the owner of the view is the application. At the next release, the role may prevent an application upgrade:


SQL> create or replace view v as select * from dual;
ORA-01720: grant option does not exist for 'SYS.DUAL'

Ok, if I drop the role, it works:


SQL> drop role rw;
Role dropped.
SQL> create or replace view v as select * from dual;
View created.

It is not always a good thing to grant privileges on a view when you are not the owner of that view.
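
Before the next release, you may want to check which users and roles hold privileges on your views. A minimal sketch, assuming you are connected as the view owner (USER_TAB_PRIVS lists object grants where you are the owner, grantor or grantee; V is the view from the test case above):

select grantee, privilege, grantor
from user_tab_privs
where owner = user
and table_name = 'V';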

Open last one-note page

Mon, 2015-08-03 10:52

If you have a OneNote document, you may want to automatically go to the last page. This is possible with PowerShell.

First you create a COM object. There are incredibly many COM objects that can be manipulated in PowerShell.


$o = New-Object -ComObject OneNote.Application

Now it gets a bit confusing. First you open your document:


[ref]$x = ""
$o.OpenHierarchy("Z:\Reports.one", "", $x, "cftNone")

Now you get the XML


$o.GetHierarchy("", "hsPages", $x)

With the XML, you select the last page. For instance:


$p = (([xml]($x.value)).Notebooks.OpenSections.Section.Page | select -last 1).ID

And from the ID, you generate a URL with GetHyperlinkToObject.


[ref]$h = ""
$o.GetHyperlinkToObject($p,"",$h)

Now we can open the URL onenote:///Z:\Reports.one#W31&section-id={12345678-1234-1234-123456789ABC}&page-id={12345678-1234-1234-123456789ABC}&end


start-process $h.value
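
Putting the pieces together, a minimal sketch of the whole sequence, based on the snippets above (Z:\Reports.one is the sample path):

# open the last page of a OneNote section (sketch, same calls as above)
$o = New-Object -ComObject OneNote.Application
[ref]$x = ""
$o.OpenHierarchy("Z:\Reports.one", "", $x, "cftNone")   # open the document
$o.GetHierarchy("", "hsPages", $x)                      # page hierarchy as XML
$p = (([xml]($x.value)).Notebooks.OpenSections.Section.Page | select -last 1).ID
[ref]$h = ""
$o.GetHyperlinkToObject($p, "", $h)                     # onenote:// hyperlink to that page
start-process $h.value                                  # open it in OneNote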

Don’t call it test

Mon, 2015-07-06 09:00

There are quite a few names to avoid in your scripts. Even if they are not reserved words, keep away!

I’ll start with test


cd $HOME/bin
vi test
  echo hello world
chmod +x test
./test
  hello world

The problem is that it may break your other scripts


$ ssh localhost test 1 = 2 && echo WHAT???
hello world
WHAT???

And it may break sooner or later, depending on your OS / version / default shell / default path / others.

There are quite a few filenames you should not use, like test, date, time, hostname, mail, view, touch, sort and make. The type command lists some of these as a reserved word, shell builtin, tracked alias or shell keyword. But again, it is not consistent across Unix flavors.


$ uname -sr; type date
SunOS 5.10
date is /usr/bin/date
$ uname -sr; type date
Linux 2.6
date is a tracked alias for /bin/date

Your sysadmin may also alias things for colors and safety in the common profile, for instance vi, ls or rm. If that annoys you, use \vi instead of vi.
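
Before trusting a name, you can also check what the shell will actually run. A minimal sketch, assuming a bash-like shell and that $HOME/bin (holding the test script from above) is in your PATH; the path shown is illustrative:

$ type -a test
test is a shell builtin
test is /home/lsc/bin/test
$ /usr/bin/test 1 = 2 && echo WHAT???
$

The second command prints nothing, because the real test correctly evaluates 1 = 2 as false.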

list database monitoring users

Wed, 2015-06-10 10:34

I am quite familiar with the SYSMAN tables, but this one required some googling beyond the Oracle documentation.

The list of targets in your Oracle Enterprise Manager is in SYSMAN.MGMT_TARGETS. Each database target is monitored by a database user, typically DBSNMP.

To retrieve this information, you somehow need to hijack your repository database; read this: Gökhan Atil

  1. You copy your encryption key to your repository database, on the OMS server:

    $ emctl config emkey -copy_to_repos
    Enter Enterprise Manager Root (SYSMAN) Password :

    Now anyone with SELECT ANY TABLE on your repository can see all passwords. You don't want to do this, but unfortunately you have to, because even the username is encrypted.

  2. You decrypt the credentials for database monitoring:

    SELECT *
    FROM (
      SELECT target_name,
        sysman.em_crypto.decrypt (
          c.cred_attr_value, 
          c.cred_salt) cred,
        cred_attr_name attr
      FROM SYSMAN.EM_TARGET_CREDS c
      JOIN SYSMAN.mgmt_targets t USING (target_guid)
      JOIN sysman.EM_NC_CRED_COLUMNS c USING (cred_guid)
      WHERE c.target_type = 'oracle_database'
      AND c.set_name = 'DBCredsMonitoring' 
    ) PIVOT ( 
      MAX (cred)
      FOR (attr) IN (
        'DBUserName' AS USERNAME, 
        'DBRole' AS "ROLE")
      )

    TARGET_NAME USERNAME ROLE  
    ----------- -------- ------
    DB01        dbsnmp   NORMAL
    DB02        dbsnmp   NORMAL
    DB03        sys      SYSDBA

  3. You remove the security leak:

    $ emctl config emkey -remove_from_repos
    Enter Enterprise Manager Root (SYSMAN) Password :
    

Now the em_crypto won’t work any more


select 
  sysman.em_crypto.decrypt('0','0')
from dual
  *
Error at line 2
ORA-28239: no key provided
ORA-06512: at "SYS.DBMS_CRYPTO_FFI", line 67
ORA-06512: at "SYS.DBMS_CRYPTO", line 44
ORA-06512: at "SYSMAN.EM_CRYPTO", line 250
ORA-06512: at line 1

This information could be used to change the password dynamically across all databases.


emcli login \
  -username=sysman \
  -password=sysmanpw
emcli update_db_password \
  -target_name=DB01 \
  -user_name=dbsnmp \
  -change_at_target=yes \
  -old_password=oldpw \
  -new_password=newpw \
  -retype_new_password=newpw
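
To roll such a change over several databases, you could loop over the target names with the same emcli call, for instance (a sketch; the target list and passwords are placeholders):

for db in DB01 DB02 DB03
do
  emcli update_db_password \
    -target_name=$db \
    -user_name=dbsnmp \
    -change_at_target=yes \
    -old_password=oldpw \
    -new_password=newpw \
    -retype_new_password=newpw
done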

Drop table if exists

Wed, 2015-06-03 03:16

The syntax that you are looking for is
docs.oracle.com/…/drop-table.html

 
DROP [TEMPORARY] TABLE [IF EXISTS]
    tbl_name [, tbl_name] ...
    [RESTRICT | CASCADE]

Wait, this does not work!


SQL> drop table if exists t;
drop table if exists t
              *
ERROR at line 1:
ORA-00933: SQL command not properly ended

Okay. It was the Oracle MySQL book 😉

In the Oracle database, I have created my own droptableifexists script.

I went for a SQL*Plus, no-PL/SQL approach. PL/SQL is also possible, but it generates different error messages (ORA-06512: at line 1) and different feedback (PL/SQL procedure successfully completed).

So I check the dictionary, put a drop command into the SQL*Plus buffer if the table exists, and then run that command.

droptableifexists.sql


set feed off ver off pages 0 newp none
def cmd="select 'OK: Table does not exist' from dual"
col cmd new_v cmd nopri
select 'drop table "'||table_name||'"' cmd 
from user_tables 
where table_name='&1';
1 select
del *
1 &cmd
set feedb 6 head off
/
set head on
del *
undef cmd
col cmd clear

Ok, let’s try


SQL> create table t(x number);

Table created.

SQL> @droptableifexists T

Table dropped.

SQL> @droptableifexists T
OK: Table does not exist

A PL/SQL approach could be


for f in (
  select 'drop table "'||table_name||'"' cmd 
  from user_tables where table_name='T')
loop 
  execute immediate f.cmd;
end loop;

Try it:


SQL> create table t(x number);

Table created.

SQL> exec for f in (select 'drop table "'||table_name||'"' cmd from user_tables where table_name='T')loop execute immediate f.cmd;end loop

PL/SQL procedure successfully completed.

SQL> exec for f in (select 'drop table "'||table_name||'"' cmd from user_tables where table_name='T')loop execute immediate f.cmd;end loop

PL/SQL procedure successfully completed.

A bit easier to read. The same has to be done for USER, VIEW and so on; a view flavour is sketched below.
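
For instance, a view flavour could look like this (a sketch following the same pattern; V is a placeholder name):

begin
  for f in (
    select 'drop view "'||view_name||'"' cmd
    from user_views where view_name = 'V')
  loop
    execute immediate f.cmd;
  end loop;
end;
/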

PS: there are also other versions around that catch ORA-942, but ORA-942 may become ORA-943 in the next release; try drop table t;; in 11g and 12c to see such things really happen!

Untrusted X11 forwarding

Mon, 2015-06-01 08:08

I wrote a while ago about my security concerns regarding


xhost +
xterm -display mypc:0

Way back then, I suggested an SSH tunnel instead. SSH is pretty easy to set up: just enable the X11Forwarding option.

In the OpenSSH 3.8 release notes (2004), there was a new default:

ssh(1) now uses untrusted cookies for X11-Forwarding

In the ssh_config man page, it is still documented as being the default:

ForwardX11Trusted The default is ‘no’

But it actually isn’t on most *nix derivatives, e.g. Red Hat’s /etc/ssh/ssh_config:

# If this option is set to yes then
# remote X11 clients will have full access
# to the original X11 display. As virtually
# no X11 client supports the untrusted
# mode correctly we set this to yes.
ForwardX11Trusted yes

Who is we?

Okay, let’s go back.

If you use the least safe method, xhost + and xterm -display pc:0, then you grant everybody the right to manipulate your X display.

If you use trusted ssh, which is the _undocumented_ default on Linux, then you grant this right only to anyone with access to your X authority, most probably located in the file $HOME/.Xauthority. So root and yourself, at least.

If you trust neither yourself nor root, you could restrict access to your resources, preventing a hacker from switching your mouse buttons or taking a screenshot. But this is probably going to prevent most of your applications from working. Also, it probably won’t work at all if you use PuTTY, Reflection or (virtually any?) other client tools.

If you want to force Trusted mode, use -Y or -o ForwardX11Trusted=yes.

If you want to force Untrusted mode, use -X and -o ForwardX11Trusted=no.

If you use only -X, it may transparently default to the more convenient but less secure -Y. Sometimes. At least on Linux OpenSSH. But with different Unix / SSH flavours, -X may end with an error message like connection to "localhost:10.0" refused by server. In that case, simply use -Y. Actually, always use -Y if you want Trusted.
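
If you prefer to pin the behaviour per host rather than remember the flags, a client-side stanza in ~/.ssh/config can do it (a sketch; lab-server is a placeholder host name):

Host lab-server
    ForwardX11 yes
    ForwardX11Trusted no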

Irrecoverable part III : a fix

Thu, 2015-05-21 09:05

After Part I (the problem) and Part II (two reports), here is Part III.

We have backed up archivelogs during a full backup and deleted them before the full completed.


RMAN> list backup of archivelog all;

List of Backup Sets
===================

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
15 4.00K  DISK 00:00:00   13:31:08
    BP Key: 15   Status: AVAILABLE
    Piece Name: /FULL/0fq7gc0s_1_1

  List of Archived Logs in backup set 15
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  15  355533  13:29:55 355777  13:31:08

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
25 4.00K  DISK 00:00:00   13:31:26
    BP Key: 25   Status: AVAILABLE
    Piece Name: /ARCH/0pq7gc1e_1_1

  List of Archived Logs in backup set 25
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  15  355533  13:29:55 355777  13:31:08

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
26 3.00K  DISK 00:00:00 13:31:26
  BP Key: 26   Status: AVAILABLE
  Piece Name: /ARCH/0qq7gc1e_1_1

  List of Archived Logs in backup set 26
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  16  355777  13:31:08 355827  13:31:22

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
28 2.50K  DISK 00:00:00 13:31:28
    BP Key: 28   Status: AVAILABLE
    Piece Name: /FULL/0sq7gc1g_1_1

  List of Archived Logs in backup set 28
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  17  355827  13:31:22 355863  13:31:28

Sequence 16 is missing in /FULL/ but is available in /ARCH/.

Ok, let’s copy the missing pieces from ARCH to FULL


$ cd /
$ cp -p ARCH/* FULL/

Now let’s try. I save my directory to an offline location (tape/USB/CD-ROM) and drop my database, including backups.


cd /
tar cvf /tmp/FULL.tar FULL

Let’s drop (it’s just a test)


RMAN> startup force mount dba 
RMAN> drop database including backups;
database dropped

Ok let’s get this USB stick back from the moon to my datacenter and see if I can restore …


cd /
tar xvf /tmp/FULL.tar 

Let’s do this !


RMAN> startup force nomount
RMAN> restore controlfile from '/FULL/c-2414975447-20150521-01';
RMAN> alter database mount;

Now I want RMAN to look for other files in FULL and also to mark anything missing as expired. The start with does the magic:

RMAN> catalog start with '/FULL/' noprompt;
RMAN> crosscheck backup;
RMAN> crosscheck copy;
RMAN> crosscheck archivelog all;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;

As @DBAStorage said on Twitter:
“if it was easy, we would not need DBAs!”

@laurentsch and have a job to backup the before backup redo, that includes the delete… if it was easy, we would not need DBAs! :) 2/2

— Peter Herdman-Grant (@DBAStorage) May 20, 2015

Irrecoverable full backup part II : reporting

Wed, 2015-05-20 08:34

After my post Can you restore from a full online backup?, I needed to come up with a report.

Assuming that each backup goes in a different directory, I just wrote two reports.

  1. Report gaps in v$backup_redolog (or rc_backup_redolog if you use the catalog)
    
    DIR     FIRST_CHANGE# NEXT_CHANGE#
    ------- ------------- ------------
    /bck01/        284891       285140
    /bck01/        285140       285178
    
    /bck02/        284891       285140
    === GAP ===
    /bck02/        285178       285245 
    
    /bck03/        285178       285245
    /bck03/        285245       286931
    /bck03/        286931       287803
    /bck03/        287803       288148
    

    This could be done with analytics, by checking, within each directory, where the previous next_change# does not match the current first_change#:

    
    SELECT dir, 
      LAG missing_from_change#, 
      first_change# missing_to_change#
    FROM (
      SELECT REGEXP_REPLACE (handle, '[^/\]+$') dir,
        first_change#,
        next_change#,
        LAG(next_change#) OVER (
          PARTITION BY REGEXP_REPLACE (handle, '[^/\]+$')
          ORDER BY first_change#
        ) LAG
      FROM v$backup_piece p
      JOIN v$backup_redolog l 
        USING (set_stamp, set_count))
    WHERE LAG != first_change#;
    
    DIR     MISSING_FROM_CHANGE# MISSING_TO_CHANGE#
    ------- -------------------- ------------------
    /bck02/               285140             285178
    
  2. Report directories where the archivelog backups (backup redolog) don’t include the changes from the earliest to the latest checkpoint (backup datafile)
    
    SELECT 
      REGEXP_REPLACE (handle, '[^/\]+$') dir,
      MIN (checkpoint_change#),
      MAX (checkpoint_change#),
      MIN (first_change#),
      MAX (next_change#)
    FROM v$backup_piece p
      LEFT JOIN v$backup_datafile f 
        USING (set_stamp, set_count)
      LEFT JOIN v$backup_redolog l 
        USING (set_stamp, set_count)
    WHERE handle IS NOT NULL
    HAVING 
      MIN (checkpoint_change#) < MIN (first_change#)
      OR 
      MAX (checkpoint_change#) > MAX (next_change#)
    GROUP BY REGEXP_REPLACE (handle, '[^/\]+$');
    
    DIR     MINCHECKPT MAXCHECKPT   MINFIRST    MAXNEXT
    ------- ---------- ---------- ---------- ----------
    /bck04/     954292     954299     959487    1145473
    

    The archives for the changes from 954292 to 959487 are missing.

If some archivelog backups are missing in one directory, it does not mean the database is irrecoverable; the archivelog backups could be in another directory. But it does mean that this single directory alone would no longer permit you to restore or duplicate.

Another approach, with RESTORE PREVIEW, was provided by Franck in my previous post: List all RMAN backups that are needed to recover.

Usual disclaimer: there are plenty of other irrecoverability causes, from hardware defects to backup “optimization”, that are beyond the scope of this post.

Can you restore from a full online backup ?

Mon, 2015-05-18 11:07

The question is not HOW TO DO IT but WHETHER YOU CAN DO IT !

A typical backup script would contain something like BACKUP DATABASE PLUS ARCHIVELOG:


backup database format
'/u99/backup/DB01/20150518/full_0_%d_s%s_p%p'
plus archivelog format
'/u99/backup/DB01/20150518/arc_%d_s%s_p%p';

Starting backup at 2015-05-18_18:27:55
current log archived
input archived log thread=1 sequence=469
...
piece handle=
/u99/backup/DB01/20150518/arc_DB01_s86_p1
Finished backup at 2015-05-18_18:27:58

Starting backup at 2015-05-18_18:27:58
input datafile file number=00002 name=
/u02/oradata/DB01/undots01.dbf
...
including current control file in backup set
including current SPFILE in backup set
piece handle=
/u99/backup/DB01/20150518/full_0_DB01_s88_p1
Finished backup at 2015-05-18_18:28:16

Starting backup at 2015-05-18_18:28:16
current log archived
input archived log thread=1 sequence=20
piece handle=
/u99/backup/DB01/20150518/arc_DB01_s89_p1
Finished backup at 2015-05-18_18:28:17

This sounds pretty safe, but what happens if you keep this full backup forever? Do you have everything you need in it to restore?

It depends. Chances are that you can restore. To annoy the paranoid, here is a counterexample.

  1. You start your full backup at 6pm,
    backing up sequences 21,22,23,24
    and then the datafiles…

  2. At 7pm a cron job issues backup archivelog all delete input;
    backing up and deleting sequences 21,22,23,24,25,26,27,28,29,30

  3. At 8pm your backup is about to finish,
    backing up sequence 31

Well, where is my sequence 27?

Let’s try


rm /tmp/arch_DB01*
startup force mount;
crosscheck backup;restore database;


Starting restore at 2015-05-18_18:47:45
channel ORA_DISK_1: restore complete, elapsed time: 00:02:05
Finished restore at 2015-05-18_18:49:51

Ok, the restore was fine. Now what?


RMAN> recover database;

Starting recover at 2015-05-18_18:50:35
using channel ORA_DISK_1

starting media recovery

RMAN-00571: =================================================
RMAN-00569: ========== ERROR MESSAGE STACK FOLLOWS ==========
RMAN-00571: =================================================
RMAN-03002: failure of recover command at 05/18/2015 18:50:36
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of archived log for thread 1 with sequence 30
RMAN-06025: no backup of archived log for thread 1 with sequence 29
RMAN-06025: no backup of archived log for thread 1 with sequence 27

RMAN> alter database open;

RMAN-00571: =================================================
RMAN-00569: ========== ERROR MESSAGE STACK FOLLOWS ==========
RMAN-00571: =================================================
RMAN-03002: failure of alter db command at 05/18/2015 18:51:29
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/u02/oradata/DB01/system01.dbf'

You have lost your database. Even if you did a full online backup plus archivelog, it did not contain all the archivelogs and RMAN did not tell you.

There is more than one way to reduce this annoyance:

1) don’t back up archivelogs during the full backup
2) don’t delete archivelog backups made during a full backup (see the sketch after this list)
3) report any archivelog backups that ran during a full, either by looking in the RMAN dictionary or in your own logfiles
4) do offline backups for long-term archives
5) don’t trust RMAN
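
For points 1) and 2), the nightly cron job could, for instance, keep archivelogs until they exist in more than one backup instead of using delete input. A hedged sketch (the counts are placeholders; check the syntax against your RMAN version):

RMAN> backup archivelog all not backed up 1 times;
RMAN> delete noprompt archivelog all backed up 2 times to device type disk;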

And as well:
Test your backups 😀

run sudo, ssh, password, su in simulated interactive mode

Fri, 2015-04-17 07:58

Some commands do not like non-interactive mode


$ passwd <<EOF
> oldpassword
> newpassword
> newpassword
> EOF
Changing password for user lsc.
Current password for lsc@example.com: passwd: Authentication token manipulation error
$ echo oraclepassword | su - oracle
standard in must be a tty
$ echo sudopassword | sudo su - oracle
[sudo] password for lsc:
sudo: sorry, you must have a tty to run sudo

But ok, if you really want to run those in a script, you have plenty of clean (no hack there) ways of doing this.

For instance, let’s use a screen session called BAR.


$ xterm -e "screen -S BAR" &
[1]     31732

Now we have an interactive terminal. It could be redirected to a framebuffer device if no X server is started.

Not really a human device, but an interactive terminal.

Now let’s send stuff


$ CR="$(echo '\r')"
$ screen -S BAR -X stuff "sudo su - oracle$CR"
$ screen -S BAR -X stuff "sudopassword$CR"
$ screen -S BAR -X stuff "id > /tmp/xxx$CR"
$ screen -S BAR -X stuff "exit$CR"
$ screen -S BAR -X stuff "exit$CR"
[1] +  Done                    xterm -e "screen -S BAR" &
$ cat /tmp/xxx
uid=100(oracle) gid=100(dba) groups=100(dba)
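
Note that, while the BAR session is still alive, you can also capture what it displays with screen’s hardcopy command (a sketch; the output file name is arbitrary):

$ screen -S BAR -X hardcopy /tmp/BAR.out
$ cat /tmp/BAR.out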

Usual disclaimer: it is a bad security practice to hardcode your passwords in scripts. Use this only if you really understand security. Read man openssl about how to use openssl to encrypt your password. Ask your security friends before trying this.