It is an English word that sounds very funny in Spanish, almost unreal.
Well, I just wanted to christen the blog with a first entry to introduce myself.
I work as an Oracle and SQL Server DBA (a "mixed profile", they call it -
twice the worries for the same pay, if you ask me).
Here I will write "how to" articles as they come up in my day-to-day work.
If you can read Spanish, check out the blog I have been maintaining for a while longer: http://blog.davidlozanolucas.com/.
Here is a script to start/stop Oracle databases running on Unix:
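A minimal sketch of such a script, assuming the standard dbstart/dbshut utilities shipped under $ORACLE_HOME/bin and an /etc/oratab that flags which instances to auto-start:

```shell
#!/bin/sh
# Minimal start/stop wrapper for Oracle databases on Unix (a sketch).
# Assumes ORACLE_HOME is set in the oracle user's environment and that
# /etc/oratab lists the instances; dbstart/dbshut read oratab to decide
# which databases to touch. (Passing $ORACLE_HOME as an argument is the
# 10gR2+ convention; older releases take no argument.)

case "$1" in
    start)
        su - oracle -c "\$ORACLE_HOME/bin/lsnrctl start"
        su - oracle -c "\$ORACLE_HOME/bin/dbstart \$ORACLE_HOME"
        ;;
    stop)
        su - oracle -c "\$ORACLE_HOME/bin/dbshut \$ORACLE_HOME"
        su - oracle -c "\$ORACLE_HOME/bin/lsnrctl stop"
        ;;
    *)
        echo "Usage: $0 {start|stop}" >&2
        exit 1
        ;;
esac
```

Hooked into the init system (an rc script or /etc/init.d entry), this lets the databases come up and down with the box.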
I saw one of those really interesting pieces of code recently. A guy wanted to run his transaction and have it fail once it was all done. Normally one would put a ROLLBACK at the end of the transaction in order to undo its work, but this guy did not want to do that. He wanted to keep his transaction code unchanged, COMMIT at the end and everything. He had several reasons for this, among them that he did not have access to all the code he was working with, and thus could not put the ROLLBACK where it was needed. Indeed, he suspected (as we eventually found to be true) that somewhere in the code stream a COMMIT was being done without his permission, splitting his transaction in ways he did not intend.

So he wanted a way to FOREORDAIN (determine ahead of time) that his transaction would fail, even if it ran to conclusion without error. For this he came up with what I think is a clever hack. It seems to me this might have other uses too, if I can figure out what they might be. So here is the cool solution.
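The post's actual code is not reproduced above, so what follows is my own sketch of one way to get this effect - not necessarily the hack described. A deferrable constraint is not checked until COMMIT time, so violating one at the start of the transaction makes the final COMMIT itself raise an error and roll everything back (table and constraint names here are made up):

```sql
-- One-time setup: a table with a deferred check constraint.
CREATE TABLE poison_pill (n NUMBER);

ALTER TABLE poison_pill
  ADD CONSTRAINT poison_pill_chk CHECK (n = 1)
  INITIALLY DEFERRED DEFERRABLE;

-- At the start of the doomed transaction: violate the check.
-- Nothing happens yet, because the constraint is deferred.
INSERT INTO poison_pill VALUES (0);

-- ... run the unchanged transaction code here ...

COMMIT;  -- fails with ORA-02091 (transaction rolled back)
         -- accompanied by ORA-02290 (check constraint violated)
```

A side benefit: if some code in the stream issues its own COMMIT, that commit fails at that point instead, which is exactly how a rogue mid-stream commit gives itself away.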
import done in US7ASCII character set and AL16UTF16 NCHAR character set / Segmentation Fault (coredump)
Submitted by rajabaskar on Wed, 2009-09-30 19:25
Last week I migrated some schemas from an 11g database (184.108.40.206) to a 9i database (220.127.116.11).
I exported the 11g schemas using the Oracle 9i binary, and the export completed successfully.
While importing the 11g schemas into the 9i database, I ran into the issue below.
Error: import done in US7ASCII character set and AL16UTF16 NCHAR character set
Segmentation Fault (coredump)
Operating system: Sun Solaris 10 / 64 bit processor
As a workaround, I exported the 11g schemas using the 10g binary and imported them into a 10g database.
Recently a friend asked me about this, and I see it come up a lot as a question in the OraFAQ forums, so here are the basics of working with delimited strings. I will show the common methods for creating them and for unpacking them. It's not like I invented this stuff, so I will also post some links for additional reading.
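As a taste of those basics, here is one common pair of techniques. This sketch assumes 11g, since LISTAGG and REGEXP_COUNT do not exist in older releases (there the classic SUBSTR/INSTR loop or SYS_CONNECT_BY_PATH do the same jobs); emp/ename are the usual demo-schema names.

```sql
-- Building a delimited string from rows (11gR2+):
SELECT LISTAGG(ename, ',') WITHIN GROUP (ORDER BY ename) AS emp_list
FROM   emp;

-- Unpacking a delimited string back into rows:
SELECT REGEXP_SUBSTR('SMITH,ALLEN,WARD', '[^,]+', 1, LEVEL) AS ename
FROM   dual
CONNECT BY LEVEL <= REGEXP_COUNT('SMITH,ALLEN,WARD', ',') + 1;
```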
Star schemas are proliferating in data warehouses these days. Many practitioners I have met in this space are a bit new to the concept and as such keep falling back to old habits, but this only hurts them. So I'll try to give my simplistic view of how it works, in the hope of bringing some clarity to the practice of star modeling and of overcoming the previous training that makes us resist its concepts.
You might face a situation where you need to interchange the values of two columns in an Oracle database table. This article will explore ways to achieve this.
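The simplest approach is worth showing up front (table and column names here are hypothetical). In SQL, the right-hand side of every assignment in an UPDATE sees the values as they were at the start of the statement, so a parallel assignment swaps the two columns without any temporary column:

```sql
-- Swap the values of two columns in every row of t.
-- col_a on the right still refers to the pre-update value,
-- so no intermediate storage is needed.
UPDATE t
SET    col_a = col_b,
       col_b = col_a;
```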
Last week, I worked on a schema refresh of an 11g database from one box to another using Data Pump.
Schema size: around 90 GB.
Normally we follow the steps below (I have sufficient space in the file system, so this is the approach I am using):
1. Export the schema using the expdp/exp utility.
2. Compress the dump file.
3. Transfer the dump file to the other box via FTP.
4. Import it.
The time taken by step 2 depends on the dump size.
I have sufficient space in the file system, so I am using method 1.
1. Export the schema using the expdp/exp utility (during export they used
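Put together, the four steps look roughly like this (schema, directory, and host names are examples, not the ones from the actual refresh):

```shell
# 1. Export the schema (expdp prompts for the password)
expdp system schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log

# 2. Compress the dump file
gzip scott.dmp

# 3. Transfer it to the other box (the post used ftp; scp shown here)
scp scott.dmp.gz target-box:/u01/dumps/

# 4. On the target box: uncompress, then import
gunzip /u01/dumps/scott.dmp.gz
impdp system schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_imp.log
```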
During a quiet evening of my last on-call bout, I was alerted by our monitors that the UNDO tablespace was running out of free space. I thought of adding a new data file and being done with it, but when I checked the current allocation for this tablespace it was already at 40G - I couldn't believe what I was seeing. The undo_retention was set to 7200 and the max query length in v$undostat was not that high. One column that did catch my eye was TUNED_UNDORETENTION; its value was very high.
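The column can be watched directly; these are standard v$undostat columns from 10g onwards:

```sql
-- Compare the auto-tuned retention against the longest-running query
-- in each ten-minute interval.
SELECT begin_time,
       end_time,
       maxquerylen,           -- longest query (seconds) in the interval
       tuned_undoretention    -- retention (seconds) Oracle actually applied
FROM   v$undostat
ORDER  BY begin_time;
```

When tuned_undoretention dwarfs both undo_retention and maxquerylen, the automatic tuning, not the workload, is what is holding the undo.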
Database tables are structured in columns and rows. However, some data lends itself to switching row data as column data for expository purposes. The pivot operation in SQL allows the developer to arrange row data as column fields. For example, if there are two customers who have both visited a store exactly four times, and you want to compare the amount of money spent by each customer on each visit, you can implement the pivot operation.
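A sketch of that example, assuming 11g's PIVOT clause and a hypothetical visits(customer, visit_no, amount) table with one row per store visit:

```sql
-- Turn the four visits into four columns, one row per customer,
-- so the two customers' spending can be compared side by side.
SELECT *
FROM   (SELECT customer, visit_no, amount FROM visits)
PIVOT  (SUM(amount)
        FOR visit_no IN (1 AS visit_1, 2 AS visit_2, 3 AS visit_3, 4 AS visit_4));
```

On releases before 11g the same result is usually built by hand with CASE (or DECODE) expressions under a GROUP BY.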
Checks to be performed at the machine level (note the example is Red Hat Linux)
The run queue should ideally be no higher than the number of CPUs on the machine,
and at the maximum it should never be more than twice the number of CPUs.
It is shown in the 'r' column of the vmstat output below.
vmstat 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff    cache   si   so   bi   bo   in    cs us sy id wa
 4  1 488700 245704 178276 12513572    0    1   10   17   48  1365 40 12 43  5