Oracle FAQ | Your Portal to the Oracle Knowledge Grid
If you are not planning on using NCHAR/NVARCHAR2 columns anyway (and
why would you if they used the same character set as CHAR/VARCHAR2
columns), I can't see any benefit to having the database and national
character sets match. Probably no downside, though.
In a Unicode database, though, your database character set has to be a strict binary superset of ASCII, so you'll have to use UTF8 (or AL32UTF8). This is a variable-width character set, so English characters generally require 1 byte to encode, European characters generally require 2 bytes, and Asian characters generally require 3 bytes (I'm ignoring the Unicode 3.1 supplementary-character oddballs). The national character set, on the other hand, is generally AL16UTF16, a UTF-16 encoding in which every character (in the Basic Multilingual Plane, at least) requires 2 bytes of storage. This means that Asian text will require 50% more space in a VARCHAR2 column (3 bytes per character) than it would in an NVARCHAR2 column (2 bytes per character), which can be a hefty penalty. Searching a fixed-width string tends to be more efficient than searching a variable-width string as well, which may recommend the occasional NVARCHAR2 column.
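To make the storage arithmetic above concrete, here is a quick Python sketch (Python's codecs stand in for Oracle's AL32UTF8 and AL16UTF16 character sets; the sample characters are my own illustrative picks, not from the thread):

```python
# Byte counts for representative characters in UTF-8 (database character
# set, e.g. AL32UTF8 / VARCHAR2) versus UTF-16 (national character set,
# AL16UTF16 / NVARCHAR2).
samples = {
    "ASCII (English)": "A",
    "European (accented)": "é",
    "Asian (CJK)": "漢",
}

for label, ch in samples.items():
    utf8_len = len(ch.encode("utf-8"))       # bytes in a VARCHAR2 column
    utf16_len = len(ch.encode("utf-16-be"))  # bytes in an NVARCHAR2 column
    print(f"{label}: UTF-8 = {utf8_len} byte(s), UTF-16 = {utf16_len} byte(s)")
```

Running this shows the CJK character costing 3 bytes in UTF-8 against 2 bytes in UTF-16, i.e. the 50% premium mentioned above, while ASCII goes the other way (1 byte versus 2).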
Justin Cave <jcave_at_ddbcinc.com>
Distributed Database Consulting, Inc.
http://www.ddbcinc.com
-----Original Message-----
From: oracle-l-bounce_at_freelists.org
[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Tim Gorman
Sent: Thursday, November 10, 2005 6:45 PM
To: oracle-l_at_freelists.org
Cc: mary.crystal_at_echostar.com
Subject: Characterset question?
Is there any need or advantage (or danger or disadvantage) to making
NLS_CHARACTERSET the same as NLS_NCHAR_CHARACTERSET when dealing with
multi-byte or Unicode character sets?
--
http://www.freelists.org/webpage/oracle-l
Received on Thu Nov 10 2005 - 18:05:03 CST