Gary Myers
My website is here
This blog is OPINION, SUGGESTION and DEBATE. Please correct anything I write that is misleading.
Whatever happened to SydOracle
So I haven't posted in mumble years. What happened ?
Firstly there was this thing called Twitter. There seemed to be less blogging and more tweeting, and the twitter thing never suited me. I'd consume my RSS feed collection at times that were convenient to me (mostly on the train), and the tweets all seemed to come overnight or while I was actually working, and by the time I saw them, the sequencing and lack of nuance in the content made it all too hard.
Secondly, back in 2014 I switched from gigs, either contracting or consulting, into a permanent, full-time job. Gigging meant never being entirely sure what was going to come next, so I consumed and played with stuff pretty widely. While I've still been doing a range of stuff in my job, there has been greater direction and focus, and a bunch of the rabbit holes I've followed have been deep and of little interest to a wider community.
Also (and this is probably 2b, rather than 3), it is quite an atypical workplace, especially from an Oracle perspective. I've worked in large telcos, utilities, government, logistics, insurance, defense... And this isn't a large organization with thousands or tens of thousands of users. It isn't enterprise-y, and some of the challenges would be almost incomprehensible to people used to larger organizations. And unlike the consulting organizations I worked for, it isn't pushing for exposure and I would be limited in what details I could include to explain the context of some of the work.
And of course, there's been family and similar grown-up stuff. Kids, house, medical things (a detached retina and resulting eye surgeries - recommend avoiding if possible), those ugly Trump years. Oh and that pandemic.
Which brings me to now-ish. I've been doing the work-from-home thing since the early days of the pandemic with every sign of that continuing. The youngest of my children finished school almost a year ago. And the end of some of those ties to location has led to the end of the 'Sydney' part of Syd-Oracle. Moved a bit further down south, close enough that the train will still get me to Sydney in 90 minutes, but the car will get me to the beach in less than 15 and the lake in half that. Plus I'm no longer working out of a corner of the bedroom, but have a room that I can call my 'Office'.
I don't plan to start blogging again. The permie job continues. Maybe next year, I'll do a post to explain Long Service Leave. Yes, Twitter may be sinking slowly into the swamp, but there's still Trump.
Client support for WITH using PL/SQL
"If the top-level statement is a DELETE, MERGE, INSERT, or UPDATE statement, then it must have the WITH_PLSQL hint." [Note: Scott in Perth noted the problems back in 2014.]
Multisessioning with Python
One problem with the architecture of most SQL clients is they connect to a database, send off a SQL statement and do nothing until the database responds back with an answer. That's a great model when it takes no more than a second or two to get the response. It is cumbersome when the statement can take minutes to complete. Complex clients, like SQL Developer, allow the user to have multiple sessions open, even against a single schema if you use "unshared" worksheets. But they don't co-ordinate those sessions in any way.
Recently I needed to run a task in a number of schemas. We're all nicely packaged up and all I needed to do was execute a procedure in each of the schemas and we can do that from a master schema with appropriate grants. However the tasks would take several minutes for each schema, and we had dozens of schemas to process. Running them consecutively in a single stream would have taken many hours and we also didn't want to set them all off at once through the job scheduler due to the workload. Ideally we wanted a few running concurrently, and when one finished another would start. I haven't found an easy way to do that in the database scheduler.
Python, on the other hand, makes it so darn simple.
[Credit to Stackoverflow, of course]
proc connects to the database, executes the procedure (in this demo just setting the client info with a delay so you can see it), and returns.
strs is a collection of parameters.
pool tells it how many concurrent operations to run. And then it maps the strings to the pool, so A, B and C will start, then as they finish D, E, F and G will be processed as threads become available.
In my case the collection was a list of the schema names, and the statement was more like 'begin ' + arg + '.task; end;'.
#!/usr/bin/python
import cx_Oracle, time
from multiprocessing.dummy import Pool as ThreadPool

"""
Global variables
"""
db = 'host:port/service'
user = 'scott'
pwd = 'tiger'

def proc(arg):
    # Connect, tag the session via client_info, then pause so the
    # concurrent sessions are visible (eg in v$session)
    con = cx_Oracle.connect(user + '/' + pwd + '@' + db)
    cur = con.cursor()
    cur.execute('begin sys.dbms_application_info.set_client_info(:info); end;',
                {'info': arg})
    time.sleep(10)
    cur.close()
    con.close()
    return

strs = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G'
]

# Make the Pool of workers
pool = ThreadPool(3)
# Pass the elements of the list to the procedure using the pool.
# In this case no values are returned, so results is a dummy.
results = pool.map(proc, strs)
# Close the pool and wait for the work to finish
pool.close()
pool.join()
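The same pattern adapted to the schema-per-task case I described looks like this. It's a sketch: the database call is swapped for a stub that just builds the statement text (the schema names are invented), so the threading behaviour can be seen without a database.

```python
from multiprocessing.dummy import Pool as ThreadPool

def run_task(schema):
    # In the real version this would connect and execute the anonymous
    # block; here it just returns the statement text as a stand-in.
    return 'begin ' + schema + '.task; end;'

schemas = ['HR', 'SALES', 'STOCK', 'AUDIT']   # made-up schema names

pool = ThreadPool(3)          # at most three tasks in flight at once
results = pool.map(run_task, schemas)   # order of results matches input
pool.close()
pool.join()

print(results[0])  # -> begin HR.task; end;
```

Note that pool.map returns the results in the same order as the input list, even though the tasks finish in whatever order the threads allow.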
PS. In this case, I used cx_Oracle as the glue between Python and the database.
The pyOraGeek blog is a good starting point for that.
If/when I get around to blogging again, I'll discuss jaydebeapi / jpype as an alternative. In short, cx_Oracle goes through the OCI client (eg Instant Client) and jaydebeapi takes the JVM / JDBC route.
With PL/SQL and LONGs (and PRODUCT_USER_PROFILE)
with
  FUNCTION char2000(i_tab in varchar2, i_part in varchar2)
  RETURN VARCHAR2 IS
    v_char varchar2(2000);
  BEGIN
    select high_value into v_char
    from user_tab_partitions a
    where a.table_name = i_tab
    and a.partition_name = i_part;
    --
    if v_char like
       'TO_DATE(''%'', ''SYYYY-MM-DD HH24:MI:SS'', ''NLS_CALENDAR=GREGORIAN'')'
    then
      v_char := regexp_substr(v_char,q'{'[^']+'}');
    end if;
    --
    RETURN v_char;
  END;
select table_name, partition_name,
       char2000(table_name, partition_name) high_val,
       partition_position, tablespace_name,
       segment_created, num_rows, last_analyzed,
       global_stats, user_stats
from user_tab_partitions ut
where segment_created='YES'
order by table_name, high_val;
/
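The regexp in char2000 just pulls out the first single-quoted chunk of the HIGH_VALUE string, which is the date itself. For anyone wanting to try the pattern outside the database, the equivalent in Python (with a made-up sample value) is:

```python
import re

# A typical HIGH_VALUE string for a date-partitioned table
# (sample value invented for illustration)
high_value = ("TO_DATE(' 2014-06-01 00:00:00', "
              "'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')")

# Same idea as regexp_substr(v_char, q'{'[^']+'}'):
# grab the first single-quoted chunk, quotes included
match = re.search(r"'[^']+'", high_value)
print(match.group(0))  # -> ' 2014-06-01 00:00:00'
```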
You will want the latest version of the SQL*Plus client. For SQL, sqlplus treats the semi-colon as a "go off and execute this". PL/SQL has traditionally needed a period on an otherwise empty line to switch from the statement editor to the command prompt.
For example:
Having PL/SQL embedded in the SQL statement confuses the older clients, and we get a bout of premature execution.
Oracle things that piss me off (pt 2) - No Direction
As a developer, I understand where they are coming from. SQL Developer benefited from being able to run scripts built for the SQL*Plus command line tool. Then there's the temptation to add a few more useful titbits to the tool. And if it is built 'properly', then it would be relatively easy to decouple it from the GUI and have it as a stand-alone.
BUT.....
where's the big picture ?
I'm pretty sure (but happy to be corrected) that "SQL Developer" is part of the 12.1 database installation. It is certainly referenced in the guides. So I'd assume that the next 12.2 release will have "SQL Developer", the "sqlcl" command line tool and SQL Plus. I couldn't guess whether sqlplus will be offered as a last-gasp, "to be deprecated" option or whether the long-term plan is to supply two SQL command line tools.
Unix/Linux users are probably used to something similar, as they generally have the options of different shells, such as bash, ksh, csh etc. But to remedy any confusion, scripts are generally written with a shebang so it can automatically work out which of the available shells it should use.
What DBAs are most likely to end up with is a script for which they'll have to guess whether it is aimed at sqlplus or sqlcl (or, if they are lucky, a comment at the start of the code).
Having the clients "sort of" compatible makes it worse. It is harder to tell what it is aimed at, and what might go wrong if the incorrect client is used. Plus opting for compatibility perpetuates some of the dumb crud that has accumulated in sqlplus over the decades.
For example:
This is an SQL statement:
SET ROLE ALL;
This is a directive to the SQLPlus client
SET TIMING ON
You could tell the subtle difference between the SET as SQL statement and SET as sqlplus directive by the semi-colon at the end. Except that both sqlplus and sqlcl will happily accept a semicolon on the end of a 'local' SET command.
If you think it is hard keeping track of what commands are processed by the database, and what are processed by the client, we also have commands that do both.
16:01:49 SQL> select sysdate, to_char(sysdate), cast(sys_context('USERENV','NLS_DATE_FORMAT') as varchar2(20)) dt_fmt,
2 cast(sys_context('USERENV','NLS_CALENDAR') as varchar2(20)) cal
3 from dual;
SYSDATE TO_CHAR(SYSDATE) DT_FMT CAL
----------------------- ------------------ -------------------- --------------------
10/MAY/15 10/MAY/15 DD/MON/RR GREGORIAN
16:02:35 SQL> alter session set nls_date_format = 'DD/Mon/YYYY';
Session altered.
16:02:40 SQL> select sysdate, to_char(sysdate), cast(sys_context('USERENV','NLS_DATE_FORMAT') as varchar2(20)) dt_fmt,
2 cast(sys_context('USERENV','NLS_CALENDAR') as varchar2(20)) cal
3 from dual;
SYSDATE TO_CHAR(SYSDATE) DT_FMT CAL
------------------ -------------------- -------------------- --------------------
10/May/2015 10/May/2015 DD/Mon/YYYY GREGORIAN
Oracle things that piss me off (pt 1)
A world of confusion
Mostly I've had it fairly easy in my career. When I worked in the UK, I just had the one time zone to work with. The only time things got complicated was when I was working at one of the power generation companies, and we had to make provision for the 23-hour and 25-hour days that go with Daylight Savings.
And in Australia we only have a handful of timezones, and when I start and finish work, it is the same day for any part of Australia. I did work on one system where the database clock was set to UTC, but dates weren't important on that application.
Now it is different. I'm dealing with events that happen all over the world. Again the database clock is UTC, with the odd effect that TRUNC(SYSDATE) 'flips over' around lunchtime. Now when I want to look at 'recent' entries (eg a log table) I've got into the habit of asking WHERE LOG_DATE > SYSDATE - INTERVAL '9' HOUR
And we also have columns that are TIMESTAMP WITH TIMEZONE. So I'm getting into the habit of selecting COL_TS AT TIME ZONE DBTIMEZONE . I could use sessiontimezone, but then the time component of DATE columns would be inconsistent. This becomes just a little more confusing this time of year as various places slip in and out of Daylight Savings.
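The 'flips over around lunchtime' effect is easy to sketch in Python. This uses a fixed +10:00 offset rather than the real Sydney zone (which shifts with Daylight Savings) so the arithmetic is deterministic:

```python
from datetime import datetime, timezone, timedelta

sydney = timezone(timedelta(hours=10))  # fixed +10:00 offset for the sketch

# 9am in Sydney is still 'yesterday' on a UTC database clock...
local_morning = datetime(2023, 3, 10, 9, 0, tzinfo=sydney)
print(local_morning.astimezone(timezone.utc).date())  # -> 2023-03-09

# ...and the UTC date only catches up at 10am local time,
# which is why TRUNC(SYSDATE) appears to flip around lunchtime.
local_lunch = datetime(2023, 3, 10, 12, 0, tzinfo=sydney)
print(local_lunch.astimezone(timezone.utc).date())  # -> 2023-03-10
```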
Now things are getting even more complicated for me.
Again, during my career, I've been lucky enough to be pretty oblivious to character set issues. Most things have squeezed in to my databases without any significant trouble. Occasionally I've had to look for some accented characters in people's names, but that's been it.
In the past few months, I've been working with some European data where the issues have been more pronounced. Aside from a few issues in emails, I've been coping quite well (with a lot of help from Google Translate).
Now I get to work with some Japanese data. And things get complicated.
"The modern Japanese writing system is a combination of two character types: logographic kanji, which are adopted Chinese characters, and syllabic kana. Kana itself consists of a pair of syllabaries: hiragana, used for native or naturalised Japanese words and grammatical elements, and katakana, used for foreign words and names, loanwords, onomatopoeia, scientific names, and sometimes for emphasis. Almost all Japanese sentences contain a mixture of kanji and kana. Because of this mixture of scripts, in addition to a large inventory of kanji characters, the Japanese writing system is often considered to be the most complicated in use anywhere in the world.[1][2]"Japanese writing system
Firstly I hit katakana. With some tables, I can get syllables corresponding to the characters and work out something that I can eyeball and match up to some English data. As an extra complication, there are also half-width characters which are semantically equivalent but occupy different codepoints in Unicode. That has parallels to upper/lower case in English, but is a modern development that came about from trying to fit the previously squarish forms into print, typewriters and computer screens.
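The half-width/full-width equivalence can be demonstrated with Python's unicodedata module. Unicode treats the half-width forms as compatibility characters, so NFKC normalisation folds them onto the full-width codepoints:

```python
import unicodedata

half_width = '\uff76\uff80\uff76\uff85'   # KA-TA-KA-NA, half-width katakana
full_width = '\u30ab\u30bf\u30ab\u30ca'   # the same syllables, full-width

# Distinct codepoints, so a naive string comparison fails...
print(half_width == full_width)  # -> False

# ...but NFKC normalisation maps the half-width compatibility
# characters to their full-width equivalents
print(unicodedata.normalize('NFKC', half_width) == full_width)  # -> True
```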
Kanji is a different order of shock. Primary school children in Japan learn the first 1000 or so characters. Another thousand plus get taught in high school. The character set is significantly larger in total.
I will have to see if the next few months cause my head to explode. In the mean time, I can recommend reading this article about the politics involved in getting characters (glyphs ? letters ?) into Unicode. I Can Text You A Pile of Poo, But I Can’t Write My Name
Oh, and I'm still trying to find the most useful character/font set I can have on my PC and use practically in SQL Developer. My current choice shows the Japanese characters when I click in the field in the dataset, but only little rectangles when I'm not in the field. The only one I've found that does show up all the time is really UGLY.
NVARCHAR2, UTF-16 and Emails
Until we get our shiny new 12c database running on its shiny new box (and all the data shifted to it), we are living with a mix of databases. To begin with, the data we managed was mostly AU/NZ and European stuff, and the character set is set accordingly. By which I mean one of those Eurocentric things and not UTF-8. We also have a bunch of columns in NVARCHAR2 with AL16UTF16 as the alternative character set.
I'm pretty sure the new database will start with UTF-8. But in the mean time I was responsible for trying to get emails out of the current database with data in various European and non-European character sets. My paths through that forest went as follows...
- It should just work. Let me test it.....Oh bugger.
- Okay, maybe if I put "utf-8" in various bits of the message.
- And switch the code so it uses NVARCHAR2 rather than defaulting to VARCHAR2.
- Oh....UTF-16 isn't the same as UTF-8. I need to convert it somehow
- So I can't put UTF-8 values in either my Eurocentric VARCHAR2 or UTF-16 NVARCHAR2.
- And I have to get this through SMTP, where you can still see the exposed bones of 7-bit ASCII.
AHA ! HTML Entities. That means I can get away with using ASCIISTR to convert the UTF-16 strings into a sequence of Hex values for each two-byte character. Then I stick a &#x in front of each character, and I have an HTML representation of the string !
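Outside the database, the same trick looks like this in Python. ASCIISTR does the heavy lifting in the PL/SQL version; here ord() stands in for it:

```python
def to_html_entities(text):
    # Keep plain ASCII as-is; turn anything else into a numeric
    # character reference - much like ASCIISTR's \XXXX output with
    # '&#x' stuck in front and ';' behind.
    return ''.join(
        ch if ord(ch) < 128 else '&#x{:04X};'.format(ord(ch))
        for ch in text)

print(to_html_entities('Müller'))  # -> M&#x00FC;ller
```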
It stinks of an ugly solution.
I think there should be a way of sending utf-16 in the content, but I couldn't get to it.
It doesn't help that email HTML is less capable than browser HTML, and has to support a variety of older clients (plus presenting an HTML email body inside of the HTML of a webmail client is always going to be awkward).
SQL with Friends ?
Recently I'd been sent a URL as a chat message, with a picture from a holiday. It was quite a long URL, with a dubious few characters that may have been the number zero or the letter "O" etc. The chat doesn't allow copying, so rather than trying the variations manually, I took the geek road.
Starting with a Cheeky Monkey post, I learnt that the chat messages were probably in an SQLite file for the application in a relatively inaccessible 'data/data' location on my Android phone or tablet.
Stackoverflow told me that I can pull the information from there using the Android Debug Bridge's backup command (adb backup). You may need to install a bunch of stuff, such as an up-to-date Java JDK, to get that running. I'd done that before so it was pretty painless. You also need to enable USB debugging on your device.
Those backups are almost, but not quite, a TAR file. I grabbed a Java tool to convert my backup file into a regular TAR, and then unzipped them with 7-Zip.
It wasn't too hard to find the relevant db file that contained the chat messages. I've got a newer version of the game than the one Cheeky Monkey used, so I had to dig a bit more. My package was called "com.zynga.wwf2.free" rather than the older "com.zynga.words" (but I still had the data from the older version on my phone).
Once I found the right package, the db file was in the db directory as "wf_database.sqlite". sqlite3 was conveniently in the same toolkit as the android debugger.
Back to Stackoverflow for some quick sqlite info and I had a set of CREATE TABLE and INSERT statements.
I could have simply grepped for the URL, but being a database person I couldn't resist a final stage.
A few find/replaces were needed to switch the DDL to Oracle syntax (different data type names, and Oracle is constrained by its 30-character limit on column names). I then imported the users (players), games, moves and chat messages into my XE database and came up with a query to extract the chat messages and the player who posted each one.
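The final join was along those lines. Since I can't share the real file, this sketch uses an in-memory SQLite database, and the table and column names are invented stand-ins that merely echo the idea (the real wf_database.sqlite schema is more involved):

```python
import sqlite3

# Hypothetical, simplified schema for illustration only
con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE chat_messages (
        msg_id INTEGER PRIMARY KEY, game_id INTEGER,
        user_id INTEGER REFERENCES users, message TEXT);
    INSERT INTO users VALUES (1, 'sydoracle'), (2, 'opponent');
    INSERT INTO chat_messages VALUES
        (1, 10, 2, 'Here is that photo link'),
        (2, 10, 1, 'Thanks!');
""")

# Chat messages joined to the player who posted each one
rows = con.execute("""
    SELECT u.name, c.message
    FROM chat_messages c JOIN users u ON u.user_id = c.user_id
    ORDER BY c.msg_id""").fetchall()
for name, message in rows:
    print(name, ':', message)
con.close()
```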
I think the chat for a deleted game would be a lot harder to recover. While you don't need to root your device, you will need to enable debugging and authorise the backup and so you need regular access to the device. If you can't get past the lock screen, this won't help.
I mentioned that I still had the data files for the older version of the game. I mistakenly opened these first, and was surprised to find that the user data included email addresses for many entries. None were for my regular opponents, but some were for people I recall playing once or twice. I don't recall many of the users, who may have been people I played as a random pick, or may have been on a 'leaderboard'. The data for the newer version of the game only had the email address for my user.
My player name (sydoracle) is pretty easy to track back to the "real" me, and I use a unique email address when I sign up to most services. But others might have been more concerned to find the email addresses were being shared, even in a concealed manner.
Latest Oracle allows SELECT without SELECT...FOR UPDATE
You all know that when you grant SELECT on a table to a user, they can do a SELECT FOR UPDATE, locking records in the table and preventing other updates or deletes. [Some client tools may do that in the background. ]
Well finally Oracle have cottoned on to that too, and there's a lighter-weight "READ" privilege in 12.1.0.2 which won't allow SELECT FOR UPDATE.
This will make DBAs very happy. Actually it won't. The natural state of a DBA is grumpy, at least when in the vicinity of a developer or salesman.
PS. Why would SELECT FOR UPDATE ever be a good idea for a user with no UPDATE privilege ?
If I had to guess, I'd say it went back to a 'pre-read consistency' model when you might use a SELECT FOR UPDATE to try to select data that wasn't being updated.
Putting my DB / Apex install through the wringer
This was just a proof-of-concept, not something I intend to actually leave running.
EPG on Port 8080
I do other testing on the home network too, so I already had my router configured to forward port 80 to another environment. That meant the router's web admin had been shifted to port 8080, and it wouldn't let me use that. Yes, I should find an open source firmware, but OpenWRT says it is unsupported and will "brick the router", and I can't see anything for Tomato.
So I figured I'd just use any incoming router port and forward it to the PC's 8080. I chose 6000. This was not a good choice. Looks like Chrome comes with a list of ports which it thinks shouldn't be talking http. 6000 is one of them, since it is supposed to be used for X11 traffic so Chrome told me it was unsafe and refused to co-operate.
Since it is a black-list of ports to avoid, I just happened to be unlucky (or stupid) in picking a bad one. Once I selected another, I got past that issue.
My task list was:
Server
- Install Oracle XE 11gR2 (Windows 64-bit)
- Configure the EPG for Apex. I ran apex_epg_config.sql as I had switched straight from the pre-installed Apex 4.0 to 4.2.5 rather than upgrading a version I had actively used.
- Unlocked the ANONYMOUS database account
- Checked DBMS_XDB.GETHTTPPORT returned 8080
- Enabled external access by setting DBMS_XDB.SETLISTENERLOCALACCESS(false);
- I got a handy Dynamic DNS via NoIP because my home IP can potentially change (though it is very rare). [Yes, there was a whole mess about Microsoft temporarily hijacking some noip domains, but I'm not using this for anything important.] This was an option in my router setup.
- The machine that runs XE / Apex should be assigned a specific 192.168.1.nnn IP address by the router (based on its MAC address). This configuration is specific to the router hardware, so I won't go into my details here. But it is essential for the next step.
- Configure the port forwarding on the router to push incoming traffic on the router's port 8088 off to port 8080 for the IP address of the machine running XE / Apex. This is also router specific.
Apex Listener
This one is relatively easy to set up, especially since I stuck with "standalone" mode for this test.
A colleague had pointed me to this OBE walkthrough on Apex PDF reports via ORDS, so I took a spin through that and it all worked seamlessly.
My next step would be a regular web server/container for ORDS rather than standalone. I'm tempted to give Jetty a try as the web server and container for the listener rather than Tomcat etc, but the Jetty documentation seems pretty sketchy. I'm used to the thoroughness of the documentation for Apache (as well as Oracle).
Literally speaking
The construct date '1900-01-01' is an example of a literal, in the same way as '01-01' is a string literal and 1900 is a numeric literal. We can even use some more exotic numeric literals such as 1e3 and 3d.
Oracle is pretty generous with implicit conversions from strings to numbers and vice versa, so it doesn't object when we assign a numeric literal to a CHAR or VARCHAR2 variable, or a string to a NUMBER variable (as long as the content is appropriate). We are allowed to assign the string literal '1e3' to a number since the content is numeric, albeit in scientific notation.
So there are no problems with executing the following:
declare
v number := '1e3';
begin
dbms_output.put_line(v);
end;
/
However while 3d and 4.5f can be used as numeric literals, Oracle will object to converting the strings '3d' or '4.5f' into a number because the 'f' and 'd' relate to the data type (Binary Float and Binary Double) and not to the content.
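As an aside, Python draws a similar line between content and type: a string holding scientific notation converts happily, but a type-suffix character is not part of the value.

```python
# '1e3' is numeric content, just expressed in scientific notation
print(float('1e3'))  # -> 1000.0

# '3d' mixes a value with what is effectively type information,
# so the conversion is rejected (much as Oracle rejects it)
try:
    float('3d')
except ValueError as exc:
    print('rejected:', exc)
```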
Similarly, we're not allowed to use string expressions (or varchar2/char variables) within a date literal, or the related timestamp literal. It must be the correct sequence of numbers and separators enclosed by single quotes. It doesn't complain if you use the alternative quoting mechanism, such as date q'[1902-05-01]', but I'd recommend against that as being undocumented and superfluous.
Going further, we have interval literals such as interval '15' minute. In these constructs we are not allowed to omit the quotes around the numeric component. And we're not allowed to use scientific notation for the 'number' either (but again the alternative quoting mechanism is permitted).
I've built an affection for interval literals, which are well suited to flashback queries.
select versions_operation, a.*
from test versions between timestamp sysdate - interval '1' minute and sysdate a;
Confusingly the TIMESTAMP keyword in the query above is part of the flashback syntax, and you have to repeat the word if you are using a timestamp literal in a flashback query.
select versions_operation, a.*
from test versions between timestamp timestamp '2014-06-21 12:50:00'
and sysdate a
Apex theme fun
Digging deeper, the CSS for more than a dozen of the built-in themes utilise the ID selector "#notification-message" in the CSS. About half a dozen have only a class selector, and another three have both (with the prefix of t followed by the theme number). Finally three just have the ID selector with the theme prefix.
My gut feel is that they switched from the ID to the class selectors stopping in various places on the way. And some of those places aren't very pretty.
This was all checked on a fresh Apex 4.0 instance because I just installed the new Windows 64-bit version of Oracle Express Edition. I'll do an upgrade of that default to the latest 4.2 this weekend too.
I Love Logs
I think of myself as a developer, but my current role is in a small team running a small system. And by running, I mean that we are
- 'root' and 'Administrator' on our Linux and Windows servers
- 'oracle / sysdba' on the database side,
- the apex administrator account and the apex workspace administrators,
- the developers and testers,
- the people who set up (and revoke) application users and
- the people on the receiving end of the support email
All this metadata makes it a lot easier to find the cause of problems. It isn't voyeurism or spying. Honest.
Unique identifiers - but what do they identify
Through normalisation, the tendency is to start with a data set, and by a methodical process extract candidate keys and their dependent attributes. In many cases there isn't a genuine or usable candidate key and artificial / surrogate keys need to be generated. While your bank can generally work out who you are based on your name and address, either of those could change and so they assign you a more permanent customer or account number.
The difficulty comes when those identifiers take on a life of their own.
In the dark ages, the phone number would indicate a particular exchange and a copper wire leading out of that exchange hard wired to a receiver (or a set of receivers in the case of Party Lines). Now all the routing is electronic, telephones can be mobile and the routing for calls to a particular number can be changed in an instant. A phone number no longer identifies a device, but a service, and a new collection of other identifiers have risen up to support the implementation of that service. An IMEI can identify a mobile handset and the IMSI indicates a SIM card from a network provider, and we can change the SIM card / IMSI that corresponds to a phone number, or swap SIM cards between handsets. Outside the cellular world, VOIP can shunt 'phone number' calls around innumerable devices using IP addresses.
Time is another factor. While I may 'own' a given phone number at a particular time, I may give that up and someone else might take it over. That may get represented by adding dates, or date ranges to the key, or it can be looked at as a sequence. For example, Elizabeth Taylor's husband may indicate one of seven men depending on context. The "fourth husband" or "her husband on 1st Jan 1960" would be Eddie Fisher.
Those without a data modelling background that includes normalisation may flinch at the proliferation of entities and tables in a relational environment. As developers and architects look at newer technologies some of the discipline of the relational model will be passed over. Ephemeral transactions can cluster the attributes together in XML or JSON formats with no need for consistency of data definitions beyond the period of processing. Data warehousing quickly discarded relational formats in favour of 'facts' and 'dimensions'.
The burden of managing a continuous and connected set of data extending over a long period of time, during which the identifiers and attributes morph, is an ongoing challenge in database design.
Pre-digested authentication
The fun thing about doing the hard yards to code up the algorithm is that you get a deeper level of understanding about what's going on. Take these lines:
I'm an amateur in this field. I checked around and it does seem this is a recognized limitation of digest authentication. EG: This Q&A and this comparison of Digest and Basic.
PL/SQL, UTL_HTTP and Digest Authentication
However I had a recent requirement to use a web service that makes use of Digest Authentication. If you have a look at the UTL_HTTP SET_AUTHENTICATION subprogram, it only addresses Basic authentication (and, apparently, Amazon S3, which looks intriguing).
In Basic authentication, the username and password get sent across as part of the request. Going through SSL, that doesn't seem too bad, as it is encrypted over the transfer and the certificates should ensure you are talking to the legitimate destination. However if that destination has been compromised, you've handed over your username and password. In an ideal world, the server shouldn't need to know your password, which is why database should only have hashed versions of passwords.
Outside of SSL, you might as well just print the username and password on the back of a postcard.
In Digest authentication, you get a more complex interaction that keeps the password secret. You ask for a page, the server responds with an "Authentication Required" plus some bits of information including a nonce. You come up with a hashed value based on the server nonce, your own nonce and a hash of your username and password and send it back with the next request. The server has its own record of your username/password hash and can duplicate the calculations. If everyone is happy, the server can fulfill your request and nobody ever actually needs to know the password.
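The calculation at the heart of that exchange is small enough to sketch in Python with hashlib. This follows the RFC 2617 'auth' flavour, and the values below are the worked example from that RFC (not from our server):

```python
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri,
                    nonce, nc, cnonce, qop='auth'):
    # HA1 hashes the credentials, HA2 the request; the response chains
    # them together with the server and client nonces, so the password
    # itself never crosses the wire.
    ha1 = md5_hex(':'.join([user, realm, password]))
    ha2 = md5_hex(':'.join([method, uri]))
    return md5_hex(':'.join([ha1, nonce, nc, cnonce, qop, ha2]))

# Worked example from RFC 2617, section 3.5
print(digest_response('Mufasa', 'testrealm@host.com', 'Circle Of Life',
                      'GET', '/dir/index.html',
                      'dcd98b7102dd2f0e8b11d0f600bfb0c093',
                      '00000001', '0a4f113b'))
# -> 6629fae49393a05397450978507c4ef1
```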
Our server used SSL, and thanks to Tim's article on SSL and UTL_HTTP, it was a simple set up. I've done it before, but that was in the days when it seemed a lot harder to get certificates OUT of a browser to put them in your Oracle Wallet.
The Interwebs were a lot less forthcoming on a PL/SQL implementation of Digest authentication though. The closest I got was this discussion, which can be summed up as "This may be complex, but I do not see these offhand as being impossible to do in PL/SQL....No Digest configured web server nearby or I would definitely have had a bash at this"
A read through the Wikipedia article, and I came up with the code below:
Firstly, after the initial request, go through the header to get the 'WWW-Authenticate' item. Take the value associated with that header, and pass it to the "auth_digest" procedure.
l_max := UTL_HTTP.GET_HEADER_COUNT(l_http_response);
l_ind := 1;
l_name := '-';
while l_ind <= l_max AND l_name != 'WWW-Authenticate' LOOP
  UTL_HTTP.GET_HEADER(l_http_response, l_ind, l_name, l_value);
  IF l_name = 'WWW-Authenticate'
  AND l_http_response.status_code = UTL_HTTP.HTTP_UNAUTHORIZED THEN
    --
    -- Unauthorized. Using the Authorization response header, we can come up with the
    -- required values to allow a re-request with the authentication/authorisation details
    --
    dbms_application_info.set_action('auth:'||$$PLSQL_LINE);
    UTL_HTTP.END_RESPONSE(l_http_response);
    --
    dbms_application_info.set_action('auth_req:'||$$PLSQL_LINE);
    l_http_request := UTL_HTTP.BEGIN_REQUEST(l_server||l_method);
    auth_digest (io_http_request => l_http_request, i_auth_value => l_value,
                 i_username => nvl(i_username,'xxxx'), i_password => nvl(i_password,'xxxx'),
                 i_req_path => l_method, i_client_nonce => null);
    dbms_output.put_line($$PLSQL_LINE||':Get Response from authenticated request');
    dbms_application_info.set_action('auth_resp:'||$$PLSQL_LINE);
    l_http_response := UTL_HTTP.GET_RESPONSE(l_http_request);
    dump_resp (l_http_response);
    dump_hdr (l_http_response);
  END IF;
  l_ind := l_ind + 1;
END LOOP;
procedure extract_auth_items
  (i_text  in varchar2,
   o_realm out varchar2, o_qop out varchar2, o_nonce out varchar2, o_opaque out varchar2) is
begin
  o_realm  := substr(regexp_substr(i_text, 'realm="[^"]+' ),8);
  o_qop    := substr(regexp_substr(i_text, 'qop="[^"]+'   ),6);
  o_nonce  := substr(regexp_substr(i_text, 'nonce="[^"]+' ),8);
  o_opaque := substr(regexp_substr(i_text, 'opaque="[^"]+'),9);
end extract_auth_items;
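As a cross-check on those regexp_substr calls, the same challenge parsing can be sketched in Python. The dictionary-returning shape and the sample field names are just illustrative; the extraction logic (grab the quoted value after each keyword) is the same:

```python
import re

def extract_auth_items(challenge: str) -> dict:
    """Pull the quoted values out of a Digest WWW-Authenticate challenge,
    mirroring the regexp_substr calls in the PL/SQL version."""
    items = {}
    for key in ("realm", "qop", "nonce", "opaque"):
        # e.g. realm="testrealm@host.com" -> capture testrealm@host.com
        m = re.search(key + r'="([^"]+)"', challenge)
        items[key] = m.group(1) if m else None
    return items
```

Note that, like the PL/SQL, this quietly returns nothing for a field the server didn't send (opaque is optional in the spec).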
procedure auth_digest
  (io_http_request in out UTL_HTTP.REQ, i_auth_value in varchar2,
   i_username in varchar2, i_password in varchar2,
   i_req_path in varchar2, i_qop in varchar2 default 'auth',
   i_req_cnt in number default 1, i_client_nonce in varchar2 default null)
is
  l_realm        varchar2(400);
  l_qop          varchar2(30);
  l_server_nonce varchar2(400);
  l_opaque       varchar2(100);
  --
  l_response     varchar2(40);
  l_value        varchar2(1024);
  --
  l_client_nonce varchar2(30);
  --
begin
  --
  -- Apply the username / password for Digest authentication
  --
  extract_auth_items (i_auth_value,
                      l_realm, l_qop, l_server_nonce, l_opaque);
  --
  IF i_client_nonce is not null then
    l_client_nonce := i_client_nonce;
  ELSE
    l_client_nonce := lower(utl_raw.cast_to_raw(DBMS_OBFUSCATION_TOOLKIT.md5(
                        input_string => dbms_random.value)));
  END IF;
  --
  -- i_qop defaults to auth, the request type to GET and i_req_cnt to 1
  l_response := digest_auth_md5_calcs
                  (i_username => i_username, i_password => i_password, i_req_path => i_req_path,
                   i_realm => l_realm, i_server_nonce => l_server_nonce,
                   i_client_nonce => l_client_nonce);
  --
  l_value := 'Digest username="' ||i_username     ||'",'||
             ' realm="'          ||l_realm        ||'",'||
             ' nonce="'          ||l_server_nonce ||'",'||
             ' uri="'            ||i_req_path     ||'",'||
             ' response="'       ||l_response     ||'",'||
             ' qop='             ||i_qop          ||','  ||
             ' nc='              ||lpad(i_req_cnt,8,'0') ||',' ||
             ' cnonce="'         ||i_client_nonce ||'"';
  --
  IF l_opaque is not null then
    l_value := l_value||',opaque="'||l_opaque||'"';
  END IF;
  dbms_output.put_line(l_value);
  UTL_HTTP.SET_HEADER(io_http_request, 'Authorization', l_value);
  --
end auth_digest;
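The digest_auth_md5_calcs function referenced above isn't shown, but the calculation it performs is the standard RFC 2617 one: HA1 = MD5(username:realm:password), HA2 = MD5(method:uri), and response = MD5(HA1:nonce:nc:cnonce:qop:HA2) when qop is "auth". A minimal Python sketch of that calculation, handy as a sanity check against the PL/SQL (it assumes a GET request, which is the default the comment above mentions):

```python
import hashlib

def md5_hex(s: str) -> str:
    """Hex-encoded MD5, as Digest auth expects."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def digest_response(username, password, realm, method, uri,
                    server_nonce, nc, client_nonce, qop="auth"):
    # HA1 = MD5(username:realm:password)
    ha1 = md5_hex(f"{username}:{realm}:{password}")
    # HA2 = MD5(method:uri) -- for qop=auth
    ha2 = md5_hex(f"{method}:{uri}")
    # response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
    return md5_hex(f"{ha1}:{server_nonce}:{nc}:{client_nonce}:{qop}:{ha2}")
```

Fed the worked example from RFC 2617 (user "Mufasa", realm "testrealm@host.com", a GET of /dir/index.html), this produces the response value the RFC quotes, so it's a useful yardstick if your server keeps rejecting the PL/SQL version's Authorization header.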
Chrome's HOST-RULE flag
Oracle 11g XE installed on Windows 8.1
In third place is a reference to foreign keys referencing unique constraints rather than a primary key.
In second place, and heading towards its second birthday, was my desire for SQL Developer to have a "keep awake" function that stops sessions being killed. It still doesn't have one, but it will now restore dropped connections.
In first place, was an almost throwaway article from when I migrated my netbook from XP to Windows 8. I installed Oracle 11g XE on it, and apparently a lot of people are (or were) interested in that. I guess people get Windows 8 on a machine and wonder if it is worth trying an XE install. The XE install on Windows is trivially simple:
- Download
- Unzip
- Run the setup program
- Choose a directory
- Enjoy a cup of coffee while it churns through. You have time for a biscuit too.
That article is nearing its first anniversary. In honour of that occasion, I upgraded to Windows 8.1. Oracle XE broke. Specifically, I couldn't find the services that ran the listener or the database or anything Oracle related. A repair install of XE didn't fix things. Disclaimer: I don't use Oracle a lot on that machine, and it's possible that something other than the 8.1 upgrade broke it.
I uninstalled it (that would have been the 'repaired' install), blew away all the database files as I had nothing I wanted to keep, and re-installed (instructions above - my biscuit was a Tim Tam, but people in the UK may substitute a p-p-p-penguin). It seems to work fine now.
So Oracle 11g XE works on Windows 8.1, as far as I can see. Bear in mind that XE doesn't have any real support anyway, so the difference between a 'supported' and 'unsupported' configuration is purely imaginary. I don't do anything like RMAN backups and restores, let alone between OS upgrades. I can't see why they wouldn't work, but I'm not a DBA.
'Medalling' in Humour
Partly, I'll blame Yuri for this. He recommended Toastmasters as a way of getting some extra public speaking skills. Luckily for me, there's a group that meets one lunchtime a week at work, which makes attendance relatively easy. Much easier than trying to fit in an evening on top of my kids' busy social and sporting calendars. I started going just before Christmas and became a paid-up member in January this year.
Next, I'll blame Neil Sequeira who prompted me to enter the club contest a month ago on the basis that attendees regularly laughed at my speeches....in a nice way. I won that, and then it dawned on me that I'd be going into the Area level contest. Our club was hosting that event, so I had a slight 'home ground' advantage, but I was still in front of a bunch of people I didn't know, most of whom have been honing their speaking skills for YEARS.
I won that, which meant going up to the Division level contest last night. That was in a church hall, unfamiliar territory for me. We were speaking from an elevated stage, and with a headset microphone. Getting into the big leagues.
I was a bit ruffled because my trip there was delayed by my phone unaccountably deciding it couldn't find any GPS signal, and refusing to tell me where I was or how to get where I was supposed to be. My destination was the other side of Middle Harbour, so my regular tactic of pointing the car vaguely in the right direction and hoping for the best was foiled by its inability to fly across the water. Resorting to my trusty and dusty Gregory's Street Directory, I made the 40 minute journey in a mere 80 minutes.
My speech starts with the other Gary Myers, multi-time champion of Summernats (which I've mentioned before in my blog) and land speed record holder. Oh, and candidate for one of the NSW Senate spots for Federal parliament. He didn't win, but a candidate for the same party did get a spot for Victoria. I suspect this came as somewhat of a surprise to him, as the Senate voting system is complex and highly unpredictable. An unemployed ex-sawmill worker who didn't own a suit and has kangaroo poo chucking as a hobby will be an interesting addition to the chamber.
I was more than happy with my third place finish in the contest. The winner, who also took the Table Topics contest top prize, was excellent. And second place went to Dave the Happy Singer, who took the mick out of homoeopathy and similar rubbish, so I won't criticise that. I get a small trophy and a certificate. And an update to my LinkedIn profile.
And, for posterity, my certificates: