❤❤❤❤❤ I’m proud of my grandma Laure Amstutz from la Ferrière who’s got one more digit in her age than yesterday ❤❤❤❤❤
This is probably ambitious, so I’ll start with a disclaimer: there is no such thing as a perfectly safe password.
But OK, we know that system/manager isn’t safe. And apart from system/manager, there are hundreds of weak passwords that are commonly used and easy to guess.
On your database server, after a few failed tries the account is locked, and maybe the attacker has been detected by then.
So the worst passwords are the default passwords and trivial passwords like oracle.
To enforce good passwords, we have verify functions, like ora12c_strong_verify_function in 12c, which checks for mixed case, special characters, and so on. One may prefer to write one’s own and not disclose exactly what it checks.
That function, in ?/rdbms/admin, states “The maximum length of any DB User password is 128 bytes.”, but it is 30 characters in most cases.
If you have FAILED_LOGIN_ATTEMPTS set to 10, choosing the eleventh most common password does not make it safe. If the attacker gets hold of the user metadata, you are screwed in no time. In Oracle 4, the password was stored in clear text. In 7 to 10, it is double-DES encryption, unsalted, with a fixed and disclosed key. There, any dictionary attack takes milliseconds, and a 6-character password falls in under a second. It got better in 11g, where cracking an 8-character SHA-1-hashed password could take weeks to years, depending on its complexity. In 12c, generating a hash costs a lot of CPU cycles, so it is no longer possible to test millions of passwords per second, even with the strongest hardware.
But to get a good password, it is recommended, and often required, to use digits, letters, special characters, mixed case and no dictionary words.
I have made a small password generator for my readers using dbms_random.string, which generates a pseudorandom string. It would be better to use the cryptographically secure dbms_crypto.randombytes, but then you must still derive a password that you can type. You could also use Unicode if you like. And depending on where you are going to use it, it is sometimes safer to avoid characters like * or ‘ because, who knows, your password may produce an error and end up in a logfile.
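As a rough equivalent outside the database, here is a sketch using Python’s cryptographically secure secrets module; the SPECIAL character set is my own conservative choice, not anything Oracle mandates.

```python
import secrets
import string

# Required character classes; SPECIAL is a deliberately small set that
# avoids quoting trouble in logs and scripts (an assumption, tune it).
UPPER, LOWER, DIGITS = string.ascii_uppercase, string.ascii_lowercase, string.digits
SPECIAL = "#$_~"

def generate_password(length=10):
    """One character from each required class, the rest drawn from the
    full alphabet, shuffled so the class positions are not predictable."""
    alphabet = UPPER + LOWER + DIGITS + SPECIAL
    chars = [secrets.choice(UPPER), secrets.choice(LOWER),
             secrets.choice(DIGITS), secrets.choice(SPECIAL)]
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(generate_password())
```

Because every class is guaranteed, the result passes the usual mixed-case/digit/special checks by construction instead of by retry.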
Okay, I wrote a small function that generates a 10-character string and verifies it with the 12c strong verifier, looping until one is good enough.
The chance that a random password is manager is pretty low, but it is best to check that you got not only a safe random string, but also a strong one.
-- @?/rdbms/admin/catpvf
CREATE OR REPLACE FUNCTION pw (username VARCHAR2, old_password VARCHAR2 DEFAULT NULL)
   RETURN VARCHAR2
IS
   p   VARCHAR2 (30);
   c   BOOLEAN := FALSE;
   i   NUMBER := 0;
BEGIN
   WHILE NOT c AND i < 1000
   LOOP
      p := DBMS_RANDOM.string ('P', 10);
      i := i + 1;
      BEGIN
         c := sys.ora12c_strong_verify_function (username, p, old_password);
      EXCEPTION
         WHEN OTHERS THEN NULL;
      END;
   END LOOP;
   RETURN p;
END;
/

SELECT pw ('SCOTT', 'TIGER') FROM DUAL;

#gA~82NxBv
This could well be a good initial expired password for your users. Later, the user will find something easier to remember.
An easy way to generate such a values string from the shell is to use openssl.
Let’s take a random salt of ABCDEFGHIJ. The length of 10 is important.
The hexadecimal representation is -41-42-43-44-45-46-47-48-49-4A-
$ echo "SafePassw0rDABCDEFGHIJ\c" | openssl dgst -sha1
(stdin)= 47cc4102144d6e479ef3d776ccd9e0d0158842bb
With this hash, I can construct my values string.
SQL> create user testuser identified by values 'S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A';

User created.

SQL> grant create session to testuser;

Grant succeeded.

SQL> conn testuser/SafePassw0rD
Connected.
If you prefer PL/SQL over shell, use DBMS_CRYPTO.
SQL> exec dbms_output.put_line('S:'||dbms_crypto.hash(utl_raw.cast_to_raw('SafePassw0rDABCDEFGHIJ'),dbms_crypto.HASH_SH1)||utl_raw.cast_to_raw('ABCDEFGHIJ'))
S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A

PL/SQL procedure successfully completed.
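If you want to double-check the construction, the same values string can be reproduced in Python; this is just an illustration of the layout, SHA-1 of password||salt followed by the salt in hex, matching the openssl and DBMS_CRYPTO outputs above.

```python
import hashlib

password, salt = "SafePassw0rD", "ABCDEFGHIJ"   # the salt must be 10 bytes

# SHA-1 over password||salt, then append the salt itself in hex
digest = hashlib.sha1((password + salt).encode()).hexdigest().upper()
values_string = "S:" + digest + salt.encode().hex().upper()
print(values_string)
# S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A
```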
In 12c there is also a “T” string. According to the doc:
The cryptographic hash function used for generating the 12C verifier is based on a de-optimized algorithm involving PBKDF2 and SHA-512.
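This is not Oracle’s exact de-optimized algorithm, but the cost principle can be illustrated with a generic PBKDF2-HMAC-SHA-512 derivation; the 100,000-iteration count here is an arbitrary illustration, not Oracle’s parameter.

```python
import hashlib
import os

# A random per-password salt plus many iterations: each guess now costs
# 100_000 HMAC rounds instead of one hash, which is what kills offline
# dictionary attacks even on strong hardware.
salt = os.urandom(16)
verifier = hashlib.pbkdf2_hmac("sha512", b"SafePassw0rD", salt, 100_000)

print(len(verifier))   # 64-byte derived key (SHA-512 output size)
```

The derivation is deterministic for a given salt, so the server can recompute and compare it at login, while an attacker must pay the full iteration cost per candidate password.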
How do you delete all data? The simplistic approach would be to truncate all tables
SQL> select table_name from user_tables;

TABLE_NAME
----------
T1

SQL> truncate table t1;

Table truncated.
You cannot truncate a parent table if it is referenced by enabled foreign key constraints.
SQL> truncate table t2;
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
Ok, let’s disable the referential integrity constraint.
SQL> select table_name, constraint_name from user_constraints where constraint_type='R';

TAB CONSTRAINT
--- ----------
T3  SYS_C00107

SQL> alter table t3 disable constraint SYS_C00107;

Table altered.

SQL> truncate table t2;

Table truncated.

SQL> truncate table t3;

Table truncated.
You cannot truncate clustered tables either.
SQL> truncate table t4;
ORA-03292: Table to be truncated is part of a cluster
Clustered tables can be emptied with TRUNCATE CLUSTER.
SQL> select cluster_name from user_clusters;

CLUSTER_NAME
------------
C

SQL> truncate cluster c;

Cluster truncated.
The code above doesn’t work with partitioned clusters (new in 12.1.0.2) because TRUNCATE CLUSTER was not properly implemented for them at the time of writing.
Check Bug 20284579 : CAN NOT QUERY DYNAMIC CLUSTER PARTITIONS
For reference partitioning, it is not possible to disable the foreign key
SQL> alter table t6 disable constraint fk;
ORA-14650: operation not supported for reference-partitioned tables
In 12c, if the foreign key is defined with ON DELETE CASCADE, you can truncate cascade the parent.
SQL> select table_name, REF_PTN_CONSTRAINT_NAME from user_part_tables where partitioning_type='REFERENCE';

TAB REF
--- ---
T6  FK

SQL> select r_constraint_name, delete_rule from user_constraints where constraint_name='FK';

R_CON DELETE_RULE
----- -----------
PK    CASCADE

SQL> select table_name from user_constraints where constraint_name='PK';

TAB
---
T5

SQL> truncate table t5 cascade;

Table truncated.
But if one of the child or grandchild tables uses reference partitioning without ON DELETE CASCADE, then the parent or grandparent cannot be truncated. And TRUNCATE CASCADE for reference partitioning is not documented (yet).
But there is a very nice alternative to TRUNCATE, called DELETE.
SQL> select table_name, REF_PTN_CONSTRAINT_NAME from user_part_tables where partitioning_type='REFERENCE';

TAB REF
--- ---
T8  FK

SQL> select r_constraint_name, delete_rule from user_constraints where constraint_name='FK';

R_CON DELETE_RULE
----- -----------
PK    NO ACTION

SQL> select table_name from user_constraints where constraint_name='PK';

TAB
---
T7

SQL> truncate table t7 cascade;
ORA-14705: unique or primary keys referenced by enabled foreign keys in table "SCOTT"."T8"

SQL> truncate table t8;

Table truncated.

SQL> delete from t7;

2 rows deleted.
To get the tables in the right order, parent tables after children, you can use a hierarchical query and then order by rownum desc, a construct I’m using for the first time, I confess. Note that the leaf tables are truncatable.
select c_owner owner, child table_name
FROM (
  SELECT p_owner, parent, nvl(c_owner, a.owner) c_owner,
         nvl(child, a.table_name) child
  FROM (
    SELECT pt.owner p_owner, pt.table_name parent,
           pt2.owner c_owner, pt2.table_name child
    FROM all_part_tables pt
    JOIN all_constraints c
      ON pt.owner = c.owner
     AND pt.table_name = c.table_name
     AND c.constraint_type = 'P'
     AND c.status = 'ENABLED'
    JOIN all_constraints r
      ON r.r_owner = c.owner
     AND r.r_constraint_name = c.constraint_name
     AND r.constraint_type = 'R'
     AND r.status = 'ENABLED'
    JOIN all_part_tables pt2
      ON r.owner = pt2.owner
     AND r.constraint_name = pt2.ref_ptn_constraint_name
     AND pt2.partitioning_type = 'REFERENCE'
  ) t
  RIGHT JOIN all_tables a
    ON child = table_name
   AND a.owner = t.c_owner
)
where connect_by_isleaf = 0
CONNECT BY parent = PRIOR child AND p_owner = PRIOR c_owner
start with parent is null
order by rownum desc;

OWNER TAB
----- ---
SCOTT T10
SCOTT T9
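The same children-first ordering can be computed with a plain topological sort once you have the foreign-key pairs; here is a Python sketch using Kahn’s algorithm, with table names borrowed from the examples above.

```python
from collections import defaultdict, deque

def truncate_order(fk_edges):
    """fk_edges: (parent, child) pairs from the foreign-key constraints.
    Returns all tables ordered children first, so each table is emptied
    before any table it references (Kahn's topological sort)."""
    succ, indeg, tables = defaultdict(list), defaultdict(int), set()
    for parent, child in fk_edges:
        tables.update((parent, child))
        succ[child].append(parent)   # a parent may only come after its child
        indeg[parent] += 1
    queue = deque(sorted(t for t in tables if indeg[t] == 0))  # leaf tables
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for p in succ[t]:
            indeg[p] -= 1
            if indeg[p] == 0:
                queue.append(p)
    return order

print(truncate_order([("T9", "T10"), ("T10", "T11")]))  # ['T11', 'T10', 'T9']
```

Feeding this the rows of the dictionary query above gives a safe delete order in linear time, without the expensive hierarchical join.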
Note that the query above is very slow. If dictionary performance is an issue, maybe we could simply delete from all tables, catch the exceptions, and loop until all tables are empty.
SQL> delete from t9;
ORA-02292: integrity constraint (SCOTT.F10) violated - child record found

SQL> delete from t10;
ORA-02292: integrity constraint (SCOTT.F11) violated - child record found

SQL> delete from t11;

1 row deleted.

SQL> delete from t9;
ORA-02292: integrity constraint (SCOTT.F10) violated - child record found

SQL> delete from t10;

1 row deleted.

SQL> delete from t11;

0 rows deleted.

SQL> delete from t9;

1 row deleted.

SQL> delete from t10;

0 rows deleted.

SQL> delete from t11;

0 rows deleted.

SQL> delete from t9;

0 rows deleted.

SQL> delete from t10;

0 rows deleted.

SQL> delete from t11;

0 rows deleted.
If you have next to no reference-partitioned tables, this approach will be more efficient.
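Here is a toy Python model of that delete-and-retry loop, with the T9/T10/T11 chain from the transcript; the row counts and foreign-key map are simulated, not read from the dictionary.

```python
# Toy model of the transcript above: T11 references T10, T10 references T9.
fks = {"T10": "T9", "T11": "T10"}          # child -> parent
rows = {"T9": 1, "T10": 1, "T11": 1}       # remaining rows per table

def try_delete(table):
    """A DELETE succeeds only when no child table still holds rows
    (otherwise Oracle raises ORA-02292: child record found)."""
    if any(rows[child] for child, parent in fks.items() if parent == table):
        return False
    rows[table] = 0
    return True

passes = 0
while any(rows.values()):                  # loop until everything is empty
    passes += 1
    for t in rows:
        try_delete(t)
print(passes, rows)
```

Each pass empties at least the current leaves, so a chain of depth n converges in n passes, exactly the shape of the SQL transcript above.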
In the trace file, the time is recorded; this can be used to measure the time between two timestamps.
But how do you convert 31796862227375 to a human-readable format?
This is how I proceeded:
SQL> oradebug SETMYPID
Statement processed.
SQL> oradebug tracefile_name
/u01/log/oracle/diag/rdbms/db01/DB01/trace/DB01_ora_32440740.trc
SQL> alter session set nls_date_format='YYYY-MM-DD_HH24:MI:SS' sql_trace=true;

Session altered.

SQL> select '&_DATE','Start' from dual;

2015-09-23_13:17:50 Start

SQL> select '&_DATE','End' from dual;

2015-09-23_13:18:38 End

SQL> alter session set sql_trace=false;

Session altered.
I used the sqlplus variable _date and waited about a minute between the selects.
Now let’s look at the trace
PARSING IN CURSOR #4859519800 tim=31796814530524
select '2015-09-23_13:17:50','Start' from dual

PARSING IN CURSOR #4859511384 tim=31796862227375
select '2015-09-23_13:18:38','End' from dual
Note the tim= values.
Between 13:17:50 and 13:18:38 there are 48 seconds; the difference between 31796862227375 and 31796814530524 is 47696851, roughly 48 million microseconds.
So the tim= unit, on this specific version of Oracle, is microseconds.
Now if I subtract 31796862227375 microseconds from 2015-09-23_13:17:50, I get 2014-09-20_12:50:08 on this specific instance.
So to convert tim= to a human-readable date, I simply add tim microseconds to Sep 20, 2014, 12:50:08.
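In Python, that conversion could be sketched like this; the “epoch” is derived from the trace values above and is of course specific to that instance.

```python
from datetime import datetime, timedelta

# tim= values and wall-clock times taken from the trace above
start_tim, end_tim = 31796814530524, 31796862227375
start_clock = datetime(2015, 9, 23, 13, 17, 50)

# tim= is in microseconds on this platform, so the instance "epoch" is
# a wall-clock time minus the tim value recorded at that same moment.
epoch = start_clock - timedelta(microseconds=start_tim)

def tim_to_date(tim):
    return epoch + timedelta(microseconds=tim)

print(tim_to_date(end_tim))   # close to 2015-09-23 13:18:38
```

Deriving the epoch from one (clock, tim) pair and applying it to the other tim reproduces the second timestamp to within a second, which is plenty for reading traces.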
SQL> exec dbms_scheduler.create_job(job_name=>null,job_type=>'PLSQL_BLOCK',job_action=>'BEGIN NULL; END;')
ORA-27451: JOB_NAME cannot be NULL
ORA-06512: at "SYS.DBMS_ISCHED", line 146
ORA-06512: at "SYS.DBMS_SCHEDULER", line 288
ORA-06512: at line 1
This sounds like a proper error message. A bit less obvious is the drop_job message
SQL> exec dbms_scheduler.drop_job(job_name=>null)
ORA-20001: comma-separated list invalid near
ORA-06512: at "SYS.DBMS_UTILITY", line 236
ORA-06512: at "SYS.DBMS_UTILITY", line 272
ORA-06512: at "SYS.DBMS_SCHEDULER", line 743
ORA-06512: at line 1
comma-separated list invalid near what?
Ok, why would you create an empty job? Obviously you wouldn’t. But remember that job_name could be a very long expression that won’t fit in your VARCHAR2(30) variable.
SQL> begin
       dbms_scheduler.create_job(
         job_name   => ' "SCOTT" '|| ' . '|| ' "JOB10000000000000000000001" ',
         job_type   => 'PLSQL_BLOCK',
         job_action => 'BEGIN NULL; END;');
     end;
     /

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.drop_job('scott.job10000000000000000000001')

PL/SQL procedure successfully completed.
If you call drop_job in the exception clause without catching the exception of that exception, a null job name could lead to this ORA-20001.
For exception handling, we could improve this pseudocode:

BEGIN
  CREATE JOB
  RUN JOB
  DROP JOB
EXCEPTION
  WHEN OTHERS THEN
    DROP JOB
    output message
    RAISE
END

into:

BEGIN
  CREATE JOB
  RUN JOB
  DROP JOB
EXCEPTION
  WHEN OTHERS THEN
    LOOP
      BEGIN
        DROP JOB
        EXIT
      EXCEPTION
        WHEN IS_RUNNING THEN sleep
        WHEN OTHERS THEN output message; EXIT
      END
    END LOOP
    output message
    RAISE
END
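For illustration only, that improved pattern could look like this in Python; the JobStillRunning exception and the callback names are hypothetical stand-ins, not a DBMS_SCHEDULER API.

```python
import time

class JobStillRunning(Exception):
    """Stand-in for the 'job is running' error (hypothetical mapping)."""

def run_and_cleanup(create_job, run_job, drop_job, retries=3, delay=1):
    """Create, run and drop a job; on failure, retry the drop while the
    job is still running instead of letting the cleanup itself blow up."""
    create_job()
    try:
        run_job()
        drop_job()
    except Exception:
        for _ in range(retries):
            try:
                drop_job()
                break
            except JobStillRunning:
                time.sleep(delay)          # wait, then retry the drop
            except Exception as drop_exc:
                print("cleanup failed:", drop_exc)
                break
        raise                              # re-raise the original error
```

The key points are the same as in the pseudocode: the cleanup is wrapped in its own handler so it cannot mask the original error, and only the “still running” case is retried.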
Truncate is useful in some serial batch processing, but it breaks read-write consistency, generates strange errors and results for running selects, and it needs the DROP ANY TABLE privilege when run on a table that you do not own.
But also, DELETE is faster in the following test case.
In 12c, you can have over one million partitions in a table, but for the sake of the universe, I’ll try with 10,000.
SQL> create table scott.t(x)
     partition by range(x) interval(1)
     (partition values less than (0))
     as select rownum from dual connect by level<10001;

SQL> select count(*) from scott.t;

  COUNT(*)
----------
     10000
The 10,000-row table is created, and each row goes into its own partition.
SQL> delete scott.t;

10000 rows deleted.

Elapsed: 00:00:04.02

SQL> rollback;

Rollback complete.
Not tuned, not parallelized, nothing. It took 4 seconds for 10,000 rows. If you have one billion rows, it is doable in a few hours. But you had better do it in chunks then.
Anyway, let’s truncate
SQL> truncate table scott.t;

Table truncated.

Elapsed: 00:05:19.24
Five minutes!!! to truncate that tiny table.
If you have one million partitions plus underlying indexes and LOBs, it will probably fail with out-of-memory errors after hours, with a large impact on the dictionary, SYSAUX and undo.
It is the dictionary changes that are so slow here.
Most permissions issues are due to a missing role or privilege.
But in the following test case you need to revoke the right to get more privileges.
create table tt(x number);
create view v as select * from tt;
create role rw;
grant all on v to rw;
I’ve created a read-write role on a view. The owner of the role is the DBA, but the owner of the view is the application. At the next release, the role may prevent an application upgrade:
SQL> create or replace view v as select * from dual;
ORA-01720: grant option does not exist for 'SYS.DUAL'
Ok, if I drop the role, it works
SQL> drop role rw;

Role dropped.

SQL> create or replace view v as select * from dual;

View created.
It is not always a good thing to have privileges granted on a view when you are not the owner of that view.
If you have a OneNote document, you may want to jump automatically to the last page. This is possible with PowerShell.
First you create a ComObject. There are incredibly many COM objects that can be manipulated from PowerShell.
$o = New-Object -ComObject OneNote.Application
Now it gets a bit confusing. First you open your document:
[ref]$x = ""
$o.OpenHierarchy("Z:\Reports.one", "", $x, "cftNone")
Now you get the XML
$o.GetHierarchy("", "hsPages", $x)
With the XML, you select the last page. For instance:
$p = (([xml]($x.value)).Notebooks.OpenSections.Section.Page | select -last 1).ID
And from the ID, you generate a URL with GetHyperlinkToObject:
[ref]$h = ""
$o.GetHyperlinkToObject($p, "", $h)
Now we can open the URL.
There are quite a few names to avoid in your scripts. Even if they are not reserved words, keep away!
I’ll start with test.
cd $HOME/bin
vi test
  echo hello world
chmod +x test
./test
hello world
The problem is that it may break your other scripts
$ ssh localhost test 1 = 2 && echo WHAT???
hello world
WHAT???
And it may break sooner or later, depending on your OS / version / default shell / default path / others.
There are quite a few filenames you should not use, like make. The type command lists some of those as reserved words, shell builtins, tracked aliases or shell keywords. But again, it is not consistent across Unix flavors.
$ uname -sr; type date
SunOS 5.10
date is /usr/bin/date

$ uname -sr; type date
Linux 2.6
date is a tracked alias for /bin/date
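The ssh transcript earlier shows an environment where the PATH lookup won; on most modern shells the builtin wins instead, which is exactly the inconsistency. A small Python sketch reproduces the trap (script location and /bin/sh are assumptions about your platform):

```python
import os
import subprocess
import tempfile

# Put a script named "test" early on the PATH, like $HOME/bin above.
d = tempfile.mkdtemp()
script = os.path.join(d, "test")
with open(script, "w") as f:
    f.write("#!/bin/sh\necho hello world\n")
os.chmod(script, 0o755)

env = dict(os.environ, PATH=d + os.pathsep + os.environ.get("PATH", ""))

# The builtin wins here: "test 1 = 2" is false, so no output at all.
builtin = subprocess.run(["sh", "-c", "test 1 = 2 && echo WHAT???"],
                         env=env, capture_output=True, text=True)
# With an explicit path, the script runs instead of the builtin.
direct = subprocess.run([script], capture_output=True, text=True)
print(repr(builtin.stdout), repr(direct.stdout))
```

Whether your colleagues’ shells resolve test to the builtin or to your script depends on the shell, which is precisely why the name should be avoided.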
Your sysadmin may also alias things for colors and safety in the common profile, for instance rm. But if it annoys you, then use \vi instead of vi to bypass the alias.