RE: : RE: bytes vs chars
From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Tue, 15 Mar 2016 22:06:40 +0000
Message-ID: <CE70217733273F49A8A162EE074F64D9282C814C_at_EXMBX01.thus.corp>
Shortly after I created my example I discovered I'd done it before (in a remarkably similar fashion) and written it up in a blog post:

https://jonathanlewis.wordpress.com/2014/12/23/just-in-case/

It resulted in quite a lot of follow-up comments as people kicked the idea around and came up with other considerations, ideas, and side effects.

Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
_at_jloracle

________________________________________
From: Mark W. Farnham [mwf_at_rsiz.com]
Sent: 15 March 2016 22:03
To: Jonathan Lewis; 'Oracle-l'
Subject: RE: : RE: bytes vs chars

Thanks for this, both because it demonstrates things very nicely and because it reminds us that it does not matter if one wrong argument against something is debunked, so long as other reasons against that something are valid. (I hope that is not too hard to parse.)

mwf

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce@freelists.org] On Behalf Of Jonathan Lewis
Sent: Tuesday, March 15, 2016 2:40 PM
To: Oracle-l
Subject: RE: : RE: bytes vs chars

No, Neil's comment is based on the fact that if the definition of the table ALLOWS a row to be larger than a single block, then the run-time engine plays safe, assumes it will have to process a row that IS larger than a single block, and therefore uses single-row processing where it would otherwise use array-processing methods.

Try this:

====
drop table t1 purge;

create table t1 (
        v1 varchar2(4000),
        v2 varchar2(4000)
);

insert into t1 values('x','x');
commit;

execute dbms_stats.gather_table_stats(user,'t1');

execute snap_redo.start_snap

insert into t1
select  object_type, object_name
from    all_objects
where   rownum <= 1000
;
commit;

execute snap_redo.end_snap
====

(The snap_redo package is something that lets me take snapshots of my session stats restricted to redo - you may have to use Tanel Poder's Snapper, or some other method, to find the redo generated).

Repeat the experiment with varchar2(40).
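[Editor's note: if snap_redo isn't to hand, a minimal hand-rolled alternative - a sketch of mine, not Jonathan's package - is to sample the session's redo counters from v$mystat before and after the insert and subtract:]

====
-- Hypothetical stand-in for snap_redo: read this session's redo
-- statistics directly (requires SELECT on v$mystat and v$statname).
-- Run once before the insert and once after; the deltas are the
-- redo entries and redo size generated by the test.
select  sn.name, ms.value
from    v$mystat   ms
join    v$statname sn on sn.statistic# = ms.statistic#
where   sn.name in ('redo entries', 'redo size')
;
====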
My figures for redo entries and redo size:

varchar2(4000)
--------------
redo entries        924
redo size       268,412

varchar2(40)
------------
redo entries          1
redo size        36,304

Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
_at_jloracle

________________________________________
From: oracle-l-bounce_at_freelists.org [oracle-l-bounce@freelists.org] on behalf of Dave Morgan [oracle_at_1001111.com]
Sent: 15 March 2016 18:13
To: Neil Chandler; Oracle-l
Subject: Re: : RE: bytes vs chars

On 03/15/2016 10:57 AM, Neil Chandler wrote:
> Your case seems to be "People aren't very good at data modelling so lets just over-allocate everything and not worry about it."
>
> I don't agree with that.
>
> Your assertion "This means there is no negative operational impact due
> to overly large VARCHAR2 columns." is false. There are very real performance impacts in relation to being unable to do set-based operations efficiently.

Your argument seems to be based on the claim that multiple blocks are used because of the table definition. This is not the case. Space for VARCHAR2s is not allocated until the data is inserted.

SQL> create table testsize (
  2  col1 varchar2(4000),
  3  col2 varchar2(4000),
  4  col3 varchar2(4000)
  5  );

Table created.

SQL> select blocks from user_extents where segment_name='TESTSIZE';

    BLOCKS
----------
         8

SQL> create table testsize1 (
  2  col1 varchar2(40),
  3  col2 varchar2(40),
  4  col3 varchar2(40)
  5  );

Table created.

SQL> select blocks from user_extents where segment_name='TESTSIZE1';

    BLOCKS
----------
         8

SQL> insert into testsize values('a', 'b','c');

1 row created.

SQL> insert into testsize1 values('a', 'b','c');

1 row created.

SQL> commit;

Commit complete.

SQL> select blocks from user_extents where segment_name='TESTSIZE';

    BLOCKS
----------
         8

SQL> select blocks from user_extents where segment_name='TESTSIZE1';

    BLOCKS
----------
         8

SQL>

If the data is large enough that one row requires multiple blocks, then it will use multiple blocks.
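[Editor's note: to the same point - that the declared limit doesn't change what is stored - a quick check (my addition, not part of Dave's session) is to compare the stored byte counts for the rows in the two tables:]

====
-- VSIZE returns the number of bytes actually stored for each column
-- value; it is identical for both tables despite the 4000 vs 40 limits
-- (one byte per single-character value here).
select vsize(col1), vsize(col2), vsize(col3) from testsize;
select vsize(col1), vsize(col2), vsize(col3) from testsize1;
====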
But until that time there is NO negative impact. And if the data is that large then you have no choice but to swallow it. My wants and desires have nothing to do with it.

Dave

--
Dave Morgan
Senior Consultant, 1001111 Alberta Limited
dave.morgan_at_1001111.com
403 399 2442

--
http://www.freelists.org/webpage/oracle-l

Received on Tue Mar 15 2016 - 23:06:40 CET