SQL Macro using UTL_FILE (Oracle 19.0.0.0.0 Windows 64bit)
SQL Macro using UTL_FILE [message #690053] Thu, 26 September 2024 09:37
Duane
I have this SQL Macro that creates CSV output but I would like to have the Macro also write those results to a file on the OS system.

The problem I have is I need the results to be transformed before those results are written to the file.  

Any suggestions?



Desired Output written to the file:  


"ID","NAME"
1,"AMSTERDAM"
2,"UTRECHT"
3,"THE HAGUE"
4,"ROTTERDAM"



Test Case:


CREATE TABLE CITIES
(
  ID    NUMBER(5),
  NAME  VARCHAR2(30 CHAR)
);

SET DEFINE OFF;
Insert into CITIES
   (ID, NAME)
 Values
   (1, 'AMSTERDAM');
Insert into CITIES
   (ID, NAME)
 Values
   (2, 'UTRECHT');
Insert into CITIES
   (ID, NAME)
 Values
   (3, 'THE HAGUE');
Insert into CITIES
   (ID, NAME)
 Values
   (4, 'ROTTERDAM');
COMMIT;


SQL Macro


CREATE OR REPLACE function CSVDataFile (InputData     in dbms_tf.table_t,
                                        DirectoryName in varchar2 default 'DIR1',
                                        FileName      in varchar2 default null,
                                        CreateFile    in varchar2 default 'Y',
                                        Delimiter     in varchar2 default ',',
                                        Quote         in varchar2 default '"',
                                        Escape        in varchar2 default '\') return clob sql_macro as
     
  pragma          autonomous_transaction;
  
  ChunkSize       pls_integer  default 3000;
  DelimiterRecord varchar2(24) default '||Delimiter||';
  ColumnName      varchar2(200);
  ColumnRecord    varchar2(300);
  HeaderRecord    clob;
  DataRecord      clob;
  SQLStatement    clob;
  
  CSVFile         utl_file.file_type;
  
  
  begin
    for i in 1..InputData.column.count 
      loop
        ColumnName := InputData.column(i).description.name;
        
        case
          when InputData.column(i).description.type in (dbms_tf.type_varchar2, 
                                                        dbms_tf.type_char, 
                                                        dbms_tf.type_clob)
            then
              if Quote is not null
                then
                  ColumnRecord := 'replace('||ColumnName||','''||Quote||''','''||Escape||Quote||''')';
                  ColumnRecord := ''''||Quote||'''||'||ColumnRecord||'||'''||Quote||'''';
              
                else
                  ColumnRecord := ColumnName;
              end if;
              
          when InputData.column(i).description.type = dbms_tf.type_number
            then
              ColumnRecord := 'to_char('||ColumnName||')';
              
          when InputData.column(i).description.type = dbms_tf.type_date 
            then
              ColumnRecord := 'to_char('||ColumnName||',''YYYY-MM-DD'')';
              
          when InputData.column(i).description.type = dbms_tf.type_timestamp
            then
              ColumnRecord := 'to_char('||ColumnName||',''YYYY-MM-DD"T"HH24:MI:SS.FF6'')';
              
            else
              null;
        end case;
   
        HeaderRecord := HeaderRecord||'||Delimiter||'||''''||ColumnName||'''';
        DataRecord := DataRecord||DelimiterRecord||ColumnRecord;
   
      end loop;
    
    HeaderRecord := substr(HeaderRecord, length('||Delimiter||') + 1);
    DataRecord := substr(DataRecord, length(DelimiterRecord) + 1);
    
    SQLStatement := q'[
                       select @COLUMN_HEADER@ as csv_row from dual
                       union all
                       select @DATA_RECORD@ as csv_row from CSVData.InputData
                      ]';
   
    SQLStatement := replace(replace(SQLStatement, '@DATA_RECORD@', DataRecord), '@COLUMN_HEADER@', HeaderRecord);
    
    if CreateFile = 'Y'
     then
       CSVFile := utl_file.fopen(nvl(DirectoryName, 'DIR1'), nvl(FileName, 'csv_file_'||to_char(sysdate, 'yyyymmddhh24miss')||'.txt'), 'wb', max_linesize => 32767);

       for i in 1..ceil(length(HeaderRecord)/ChunkSize)
         loop
           utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(substr(HeaderRecord, (i - 1) * ChunkSize + 1, ChunkSize)));
           utl_file.fflush(CSVFile);
         end loop;
         
       ChunkSize := 3000;
       
       for i in 1..ceil(length(DataRecord)/ChunkSize)
         loop
           utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(substr(DataRecord, (i - 1) * ChunkSize + 1, ChunkSize)));
           utl_file.fflush(CSVFile);
         end loop;

       utl_file.fclose(CSVFile);
    end if;

    return SQLStatement;
  
  end;


SQL Statement


with Cities1 as (select *
                 from cities)
                 
  select * from dnettles10.CSVDataFile(InputData => Cities1);


File Contents


'"ID"'||Delimiter||'"NAME"'to_char("ID")||Delimiter||'"'||replace("NAME",'"','\"')||'"'

Re: SQL Macro using UTL_FILE [message #690054 is a reply to message #690053] Fri, 27 September 2024 16:40
Solomon Yakobson
Very bad idea, and not really possible without exploiting loopholes that may be closed in newer releases. A SQL macro doesn't execute a SQL statement; it only produces the text of the SQL statement to be executed. So obviously it is that text:

'"ID"'||Delimiter||'"NAME"'to_char("ID")||Delimiter||'"'||replace("NAME",'"','\"')||'"'
that is written to the file. So the SQL macro would need to produce the SQL text and then execute it itself. The problem is that the macro builds the SQL text with placeholders. If you add:

dbms_output.put_line(SQLStatement);
You'll see:

select '"ID"'||Delimiter||'"NAME"' as csv_row from dual
union all
select
to_char("ID")||Delimiter||'"'||replace("NAME",'"','\"')||'"' as csv_row from
CSVData.InputData
Only at execution time is Delimiter replaced with the parameter value and CSVData.InputData with the table name. And here is a very nasty thing about SQL macros - string parameter values are not available outside the RETURN statement. If we reference the Delimiter parameter inside the macro body, we get NULL, not a comma. Fortunately (and this can change in future releases) UDT values are not masked, so we can change Delimiter's type to sys.OdciVarchar2List and add a TblName parameter of the same sys.OdciVarchar2List type. Now we can construct the SQL without placeholders.
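To see the masking for yourself, here is a minimal sketch (a hypothetical function named demo_masking, not part of the solution; it assumes a release that supports table SQL macros and SERVEROUTPUT on) - the scalar parameter prints as NULL while the collection element keeps its value:

create or replace function demo_masking(p_scalar in varchar2              default ',',
                                         p_list   in sys.OdciVarchar2List default sys.OdciVarchar2List(','))
  return varchar2 sql_macro as
begin
  -- scalar string parameters are masked inside the macro body: this prints an empty value
  dbms_output.put_line('p_scalar  = "' || p_scalar  || '"');
  -- collection (UDT) element values are visible: this prints the comma
  dbms_output.put_line('p_list(1) = "' || p_list(1) || '"');
  return q'[select 'dummy' as c from dual]';
end;
/
select * from demo_masking();

The rewritten function, using that loophole, is: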

CREATE OR REPLACE function CSVDataFile (
                                        TblName       in sys.OdciVarchar2List,
                                        InputData     in dbms_tf.table_t,
                                        DirectoryName in varchar2 default 'TEMP',
                                        FileName      in varchar2 default null,
                                        CreateFile    in varchar2 default 'Y',
                                        Delimiter     in sys.OdciVarchar2List default sys.OdciVarchar2List(','),
                                        Quote         in varchar2 default '"',
                                        Escape        in varchar2 default '\'
                                      ) return clob sql_macro as
     
  pragma          autonomous_transaction;
  
  ChunkSize       pls_integer  default 3000;
  DelimiterRecord varchar2(24) default '||''' || Delimiter(1) || '''||';
  ColumnName      varchar2(200);
  ColumnRecord    varchar2(300);
  HeaderRecord    clob;
  DataRecord      clob;
  SQLStatement    clob;
  v_cur           sys_refcursor;
  v_rec           clob;
  
  CSVFile         utl_file.file_type;
  
  
  begin
    for i in 1..InputData.column.count 
      loop
        ColumnName := InputData.column(i).description.name;
        
        case
          when InputData.column(i).description.type in (dbms_tf.type_varchar2, 
                                                        dbms_tf.type_char, 
                                                        dbms_tf.type_clob)
            then
              if Quote is not null
                then
                  ColumnRecord := 'replace('||ColumnName||','''||Quote||''','''||Escape||Quote||''')';
                  ColumnRecord := ''''||Quote||'''||'||ColumnRecord||'||'''||Quote||'''';
              
                else
                  ColumnRecord := ColumnName;
              end if;
              
          when InputData.column(i).description.type = dbms_tf.type_number
            then
              ColumnRecord := 'to_char('||ColumnName||')';
              
          when InputData.column(i).description.type = dbms_tf.type_date 
            then
              ColumnRecord := 'to_char('||ColumnName||',''YYYY-MM-DD'')';
              
          when InputData.column(i).description.type = dbms_tf.type_timestamp
            then
              ColumnRecord := 'to_char('||ColumnName||',''YYYY-MM-DD"T"HH24:MI:SS.FF6'')';
              
            else
              null;
        end case;
   
        HeaderRecord := HeaderRecord||DelimiterRecord||''''||ColumnName||'''';
        DataRecord := DataRecord||DelimiterRecord||ColumnRecord;
   
      end loop;
    
    HeaderRecord := substr(HeaderRecord,length(DelimiterRecord) + 1);
    DataRecord := substr(DataRecord,length(DelimiterRecord) + 1);
    
    SQLStatement := '
                      select @COLUMN_HEADER@ as csv_row from dual
                     union all
                      select @DATA_RECORD@ as csv_row from ' || TblName(1);
   
    SQLStatement := replace(replace(SQLStatement, '@DATA_RECORD@', DataRecord), '@COLUMN_HEADER@', HeaderRecord);

    if CreateFile = 'Y'
     then
       CSVFile := utl_file.fopen(nvl(DirectoryName, 'TEMP'), nvl(FileName, 'csv_file_'||to_char(sysdate, 'yyyymmddhh24miss')||'.txt'), 'wb', max_linesize => 32767);
       open v_cur
         for SQLStatement;
       loop
         fetch v_cur
           into v_rec;
         exit when v_cur%notfound;
         for i in 1..ceil(length(v_rec)/ChunkSize)
         loop
           utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(substr(v_rec, (i - 1) * ChunkSize + 1, ChunkSize)));
           utl_file.fflush(CSVFile);
         end loop;
         utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(chr(10)));
         utl_file.fflush(CSVFile);

       end loop;
       utl_file.fflush(CSVFile);
       utl_file.fclose(CSVFile);
    end if;
    return SQLStatement;
  
  end;
/

Function created.

SQL> set serveroutput on
SQL> host dir c:\temp\csv_file*.txt
 Volume in drive C is OSDisk
 Volume Serial Number is 08EC-009C

 Directory of c:\temp

File Not Found

SQL> select * from CSVDataFile(TblName => sys.OdciVarchar2List('CITIES'),InputData => CITIES)
  2  /

CSV_ROW
--------------------------------------------------------------------------------
"ID","NAME"
1,"AMSTERDAM"
2,"UTRECHT"
3,"THE HAGUE"
4,"ROTTERDAM"

SQL> host dir c:\temp\csv_file*.txt
 Volume in drive C is OSDisk
 Volume Serial Number is 08EC-009C

 Directory of c:\temp

09/27/2024  05:32 PM                66 csv_file_20240927173233.txt
               1 File(s)             66 bytes
               0 Dir(s)  24,827,981,824 bytes free

SQL> host type c:\temp\csv_file*.txt

c:\temp\csv_file_20240927173233.txt


"ID","NAME"
1,"AMSTERDAM"
2,"UTRECHT"
3,"THE HAGUE"
4,"ROTTERDAM"

SQL>
But now comes the tricky part. Each time we issue a SQL statement, Oracle checks whether it is already in the shared pool, where it has already been hard parsed. Therefore, if we issue the above select a second time, Oracle will not call the SQL macro and no file will be created:

SQL> host del c:\temp\csv_file*.txt

SQL> host dir c:\temp\csv_file*.txt
 Volume in drive C is OSDisk
 Volume Serial Number is 08EC-009C

 Directory of c:\temp

File Not Found

SQL> select * from CSVDataFile(TblName => sys.OdciVarchar2List('CITIES'),InputData => CITIES)
  2  /

CSV_ROW
--------------------------------------------------------------------------------
"ID","NAME"
1,"AMSTERDAM"
2,"UTRECHT"
3,"THE HAGUE"
4,"ROTTERDAM"

SQL> host dir c:\temp\csv_file*.txt
 Volume in drive C is OSDisk
 Volume Serial Number is 08EC-009C

 Directory of c:\temp

File Not Found

SQL>
That is why I said it is a very bad idea. To force Oracle to execute the macro each time, we would have to issue SQL that differs by at least one character on every call. For example, I will add a space before 'CITIES':

SQL> select * from CSVDataFile(TblName => sys.OdciVarchar2List( 'CITIES'),InputData => CITIES)
  2  /

CSV_ROW
--------------------------------------------------------------------------------
"ID","NAME"
1,"AMSTERDAM"
2,"UTRECHT"
3,"THE HAGUE"
4,"ROTTERDAM"

SQL> host dir c:\temp\csv_file*.txt
 Volume in drive C is OSDisk
 Volume Serial Number is 08EC-009C

 Directory of c:\temp

09/27/2024  05:38 PM                66 csv_file_20240927173856.txt
               1 File(s)             66 bytes
               0 Dir(s)  24,835,670,016 bytes free

SQL>
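An alternative to changing the SQL text might be to purge the cached cursor from the shared pool before each run, so the next execution is hard parsed and the macro fires again. A sketch only - it assumes EXECUTE on DBMS_SHARED_POOL and select access to V$SQL, and is hardly more practical than editing the statement:

declare
  v_name varchar2(100);
begin
  -- find the cursor for the macro-calling statement; address||','||hash_value is the
  -- name format DBMS_SHARED_POOL.PURGE expects for cursors (flag 'C')
  select address || ',' || hash_value
    into v_name
    from v$sql
   where sql_text like 'select * from CSVDataFile(TblName => sys.OdciVarchar2List(''CITIES'')%'
     and rownum = 1;
  dbms_shared_pool.purge(v_name, 'C');
end;
/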
SY.


Re: SQL Macro using UTL_FILE [message #690055 is a reply to message #690054] Fri, 27 September 2024 17:24
Duane
Solomon Yakobson wrote on Fri, 27 September 2024 21:40


Only at execution time is Delimiter replaced with the parameter value and CSVData.InputData with the table name. And here is a very nasty thing about SQL macros - string parameter values are not available outside the RETURN statement. If we reference the Delimiter parameter inside the macro body, we get NULL, not a comma. Fortunately (and this can change in future releases) UDT values are not masked, so we can change Delimiter's type to sys.OdciVarchar2List and add a TblName parameter of the same sys.OdciVarchar2List type. Now we can construct the SQL without placeholders.

I did print out the SQLStatement text but didn't know how to transform the values.  You resolved that part.


Solomon Yakobson wrote on Fri, 27 September 2024 21:40


But now comes the tricky part. Each time we issue a SQL statement, Oracle checks whether it is already in the shared pool, where it has already been hard parsed. Therefore, if we issue the above select a second time, Oracle will not call the SQL macro and no file will be created:

I saw that while testing.  I had noticed that a file would be created, but running the statement again didn't create another file.  I would have to change something in the function and recompile it for a new file to be created.  That explains it now.

Anyway, I still think it's a cool idea, but it doesn't sound practical if you have to change something in the query statement each time.  That won't work.

Thanks again.
Re: SQL Macro using UTL_FILE [message #690056 is a reply to message #690055] Fri, 27 September 2024 19:30
Duane
Just out of curiosity, is there no way to make it work with "CSVData.InputData"?  Will it only work with a defined table if the results are to be written to a file?

select @DATA_RECORD@ as csv_row from CSVData.InputData;
Re: SQL Macro using UTL_FILE [message #690057 is a reply to message #690056] Sat, 28 September 2024 07:04
Solomon Yakobson
No way, at least none that I'm aware of. If you add:

dbms_output.put_line('CSVData.InputData = "' || CSVData.InputData || '"');
the macro wouldn't even compile:

SQL> show err
Errors for FUNCTION CSVDATAFILE:

LINE/COL ERROR
-------- -----------------------------------------------------------------
98/1     PL/SQL: Statement ignored
98/49    PLS-00201: identifier 'CSVDATA.INPUTDATA' must be declared
SQL>
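Only the DBMS_TF.TABLE_T descriptor passed in InputData is visible inside the macro body - column names and types, not the data and not the query alias. For contrast, a hypothetical debug line like this does compile, because it only touches the descriptor:

dbms_output.put_line('InputData has ' || InputData.column.count || ' column(s); first is ' || InputData.column(1).description.name);

"CSVData.InputData" only has meaning in the returned SQL text, where the parser substitutes the caller's query or table.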
SY.
Re: SQL Macro using UTL_FILE [message #690058 is a reply to message #690057] Sat, 28 September 2024 10:44
Duane
Same here.  I tried different versions and it wouldn't work.

My other thought is passing in a RefCursor (with Cities1 as (select * from cities)) and keeping the same code you have with the RefCursor (OPEN...FETCH...LOOP).  Maybe I'll try that on Monday, unless you think it wouldn't work at all.
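For what it's worth, that idea as a plain procedure might look something like this sketch (hypothetical name write_csv_cursor; it assumes the caller's query already concatenates each row into a single VARCHAR2 column, the way the macro's generated statement does, and that directory DIR1 exists):

create or replace procedure write_csv_cursor(p_cur       in out sys_refcursor,  -- returns one VARCHAR2 column per row
                                              p_dir_name  in varchar2 default 'DIR1',
                                              p_file_name in varchar2 default null) as
  v_file utl_file.file_type;
  v_line varchar2(32767);
begin
  v_file := utl_file.fopen(p_dir_name,
                           nvl(p_file_name, 'csv_file_' || to_char(sysdate, 'yyyymmddhh24miss') || '.txt'),
                           'w', max_linesize => 32767);
  loop
    fetch p_cur into v_line;
    exit when p_cur%notfound;
    utl_file.put_line(v_file, v_line);
  end loop;
  utl_file.fclose(v_file);
  close p_cur;
exception
  when others then
    if utl_file.is_open(v_file) then
      utl_file.fclose(v_file);
    end if;
    raise;
end;
/

-- possible usage, building the CSV rows in the query itself:
declare
  v_cur sys_refcursor;
begin
  open v_cur for
    select '"ID","NAME"' from dual
    union all
    select to_char(id) || ',"' || replace(name, '"', '\"') || '"' from cities;
  write_csv_cursor(v_cur, p_file_name => 'cities_refcursor.txt');
end;
/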
Re: SQL Macro using UTL_FILE [message #690061 is a reply to message #690058] Mon, 30 September 2024 15:39
Duane
Solomon,

I have the function creating a file each time but the SQL is returning an error.  Do you see anything I'm doing wrong?  

It has to do with the "DateTime" parameter: removing that parameter allows the results to be displayed and the file is created once, while adding the "DateTime" parameter produces a file each time but returns an error in the SQL results.



CREATE OR REPLACE function CSVDataFile (InputData     in dbms_tf.table_t,
                                        DateTime      in varchar2 default to_char(sysdate, 'yyyymmddhh24miss'),
                                        DirectoryName in varchar2 default 'DIR1',
                                        FileName      in varchar2 default null,
                                        CreateFile    in varchar2 default 'Y',
                                        Delimiter     in varchar2 default ',',
                                        Quote         in varchar2 default '"',
                                        Escape        in varchar2 default '\') return clob sql_macro as
     
  pragma          autonomous_transaction;
  
  ChunkSize       pls_integer  default 3000;
  DelimiterRecord varchar2(24) default '||Delimiter||';
  ColumnName      varchar2(200);
  ColumnRecord    varchar2(300);
  HeaderRecord    clob;
  HeaderRrd       clob;
  DataRecord      clob;
  DataRrd         clob;
  SQLStatement    clob;
  
  CSVFile         utl_file.file_type;
  
  
  begin
    for i in 1..InputData.column.count 
      loop
        ColumnName := InputData.column(i).description.name;
        
        case
          when InputData.column(i).description.type in (dbms_tf.type_varchar2, 
                                                        dbms_tf.type_char, 
                                                        dbms_tf.type_clob)
            then
              if Quote is not null
                then
                  ColumnRecord := 'replace('||ColumnName||','''||Quote||''','''||Escape||Quote||''')';
                  ColumnRecord := ''''||Quote||'''||'||ColumnRecord||'||'''||Quote||'''';
              
                else
                  ColumnRecord := ColumnName;
              end if;
              
          when InputData.column(i).description.type = dbms_tf.type_number
            then
              ColumnRecord := 'to_char('||ColumnName||')';
              
          when InputData.column(i).description.type = dbms_tf.type_date 
            then
              ColumnRecord := 'to_char('||ColumnName||',''YYYY-MM-DD'')';
              
          when InputData.column(i).description.type = dbms_tf.type_timestamp
            then
              ColumnRecord := 'to_char('||ColumnName||',''YYYY-MM-DD"T"HH24:MI:SS.FF6'')';
              
            else
              null;
        end case;
   
        HeaderRecord := HeaderRecord||'||Delimiter||'||''''||ColumnName||'''';
        HeaderRrd := HeaderRrd||Delimiter||''||ColumnName||'';
        DataRecord := DataRecord||DelimiterRecord||ColumnRecord;
        DataRrd := DataRrd||Delimiter||ColumnRecord;
      end loop;
    
    HeaderRecord := substr(HeaderRecord, length('||Delimiter||') + 1);
    DataRecord := substr(DataRecord, length(DelimiterRecord) + 1);
    
    SQLStatement := q'[
                       select @COLUMN_HEADER@ as csv_row from dual
                       union all
                       select @DATA_RECORD@ as csv_row from CSVData.InputData
                      ]';
   
    SQLStatement := replace(replace(SQLStatement, '@DATA_RECORD@', DataRecord), '@COLUMN_HEADER@', HeaderRecord);
    
    if CreateFile = 'Y'
     then
       CSVFile := utl_file.fopen(nvl(DirectoryName, 'JOBSUB'), nvl(FileName, 'csv_file_'||to_char(sysdate, 'yyyymmddhh24miss')||'.txt'), 'wb', max_linesize => 32767);
       
       HeaderRrd := trim(leading ',' from HeaderRrd);
       DataRrd := trim(leading ',' from DataRrd);
       
       for i in 1..ceil(length(HeaderRrd)/ChunkSize)
         loop
           utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(substr(HeaderRrd, (i - 1) * ChunkSize + 1, ChunkSize)));
           utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(chr(10)));
           utl_file.fflush(CSVFile);
         end loop;
         
       for i in 1..ceil(length(DataRrd)/ChunkSize)
         loop
           utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(substr(DataRrd, (i - 1) * ChunkSize + 1, ChunkSize)));
           utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(chr(10)));
           utl_file.fflush(CSVFile);
         end loop;
         
       utl_file.fflush(CSVFile);
       utl_file.fclose(CSVFile);
    end if;
    
    return SQLStatement;
    
  end;


SQL Statement


with Cities1 as (select *
                 from cities)
                 
  select * from CSVDataFile(InputData => Cities1);


No "DateTime" parameter


"ID","NAME"
1,"AMSTERDAM"
2,"UTRECHT"
3,"THE HAGUE"
4,"ROTTERDAM"


"DateTime" parameter included and Error from SQL Statement


[Error] Execution (1: 1): ORA-64626: invalid SQL text returned from SQL macro: 
ORA-00907: missing right parenthesis

Re: SQL Macro using UTL_FILE [message #690062 is a reply to message #690061] Tue, 01 October 2024 04:38
cookiemonster
In the function you've posted, DateTime isn't used anywhere.
Re: SQL Macro using UTL_FILE [message #690063 is a reply to message #690062] Tue, 01 October 2024 07:23
Solomon Yakobson
Works fine for me:

  1  CREATE OR REPLACE function CSVDataFile (InputData     in dbms_tf.table_t,
  2                                          DateTime      in varchar2 default to_char(sysdate, 'yyyymmddhh24miss'),
  3                                          DirectoryName in varchar2 default 'TEMP',
  4                                          FileName      in varchar2 default null,
  5                                          CreateFile    in varchar2 default 'Y',
  6                                          Delimiter     in varchar2 default ',',
  7                                          Quote         in varchar2 default '"',
  8                                          Escape        in varchar2 default '\') return clob sql_macro as
  9    pragma          autonomous_transaction;
 10    ChunkSize       pls_integer  default 3000;
 11    DelimiterRecord varchar2(24) default '||Delimiter||';
 12    ColumnName      varchar2(200);
 13    ColumnRecord    varchar2(300);
 14    HeaderRecord    clob;
 15    HeaderRrd       clob;
 16    DataRecord      clob;
 17    DataRrd         clob;
 18    SQLStatement    clob;
 19    CSVFile         utl_file.file_type;
 20    begin
 21      for i in 1..InputData.column.count
 22        loop
 23          ColumnName := InputData.column(i).description.name;
 24          case
 25            when InputData.column(i).description.type in (dbms_tf.type_varchar2,
 26                                                          dbms_tf.type_char,
 27                                                          dbms_tf.type_clob)
 28              then
 29                if Quote is not null
 30                  then
 31                    ColumnRecord := 'replace('||ColumnName||','''||Quote||''','''||Escape||Quote||''')';
 32                    ColumnRecord := ''''||Quote||'''||'||ColumnRecord||'||'''||Quote||'''';
 33                  else
 34                    ColumnRecord := ColumnName;
 35                end if;
 36            when InputData.column(i).description.type = dbms_tf.type_number
 37              then
 38                ColumnRecord := 'to_char('||ColumnName||')';
 39            when InputData.column(i).description.type = dbms_tf.type_date
 40              then
 41                ColumnRecord := 'to_char('||ColumnName||',''YYYY-MM-DD'')';
 42            when InputData.column(i).description.type = dbms_tf.type_timestamp
 43              then
 44                ColumnRecord := 'to_char('||ColumnName||',''YYYY-MM-DD"T"HH24:MI:SS.FF6'')';
 45              else
 46                null;
 47          end case;
 48          HeaderRecord := HeaderRecord||'||Delimiter||'||''''||ColumnName||'''';
 49          HeaderRrd := HeaderRrd||Delimiter||''||ColumnName||'';
 50          DataRecord := DataRecord||DelimiterRecord||ColumnRecord;
 51          DataRrd := DataRrd||Delimiter||ColumnRecord;
 52        end loop;
 53      HeaderRecord := substr(HeaderRecord, length('||Delimiter||') + 1);
 54      DataRecord := substr(DataRecord, length(DelimiterRecord) + 1);
 55      SQLStatement := q'[
 56                         select @COLUMN_HEADER@ as csv_row from dual
 57                         union all
 58                         select @DATA_RECORD@ as csv_row from CSVData.InputData
 59                        ]';
 60      SQLStatement := replace(replace(SQLStatement, '@DATA_RECORD@', DataRecord), '@COLUMN_HEADER@', HeaderRecord);
 61      if CreateFile = 'Y'
 62       then
 63         CSVFile := utl_file.fopen(nvl(DirectoryName, 'TEMP'), nvl(FileName, 'csv_file_'||to_char(sysdate, 'yyyymmddhh24miss')||'.txt'), 'wb', max_linesize => 32767);
 64         HeaderRrd := trim(leading ',' from HeaderRrd);
 65         DataRrd := trim(leading ',' from DataRrd);
 66         for i in 1..ceil(length(HeaderRrd)/ChunkSize)
 67           loop
 68             utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(substr(HeaderRrd, (i - 1) * ChunkSize + 1, ChunkSize)));
 69             utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(chr(10)));
 70             utl_file.fflush(CSVFile);
 71           end loop;
 72         for i in 1..ceil(length(DataRrd)/ChunkSize)
 73           loop
 74             utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(substr(DataRrd, (i - 1) * ChunkSize + 1, ChunkSize)));
 75             utl_file.put_raw(CSVFile, utl_raw.cast_to_raw(chr(10)));
 76             utl_file.fflush(CSVFile);
 77           end loop;
 78         utl_file.fflush(CSVFile);
 79         utl_file.fclose(CSVFile);
 80      end if;
 81      return SQLStatement;
 82*   end;
 83  /

Function created.

SQL> select * from CSVDataFile(InputData => Cities);

CSV_ROW
--------------------------------------------------------------------------------
"ID","NAME"
1,"AMSTERDAM"
2,"UTRECHT"
3,"THE HAGUE"
4,"ROTTERDAM"

SQL> with Cities1 as (select *
  2                   from cities)
  3  select * from CSVDataFile(InputData => Cities1);

CSV_ROW
--------------------------------------------------------------------------------
"ID","NAME"
1,"AMSTERDAM"
2,"UTRECHT"
3,"THE HAGUE"
4,"ROTTERDAM"

SQL>
SY.
Re: SQL Macro using UTL_FILE [message #690064 is a reply to message #690062] Tue, 01 October 2024 17:01
Duane
cookiemonster wrote on Tue, 01 October 2024 09:38
In the function you've posted, DateTime isn't used anywhere.
That's true.  I was trying to make the SQL unique each time.
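A rough, untested sketch of that idea: build the call dynamically so the timestamp is embedded as a literal in the statement text, making every run a distinct cursor that must be hard parsed (and therefore invokes the macro, which writes the file):

declare
  type t_rows is table of varchar2(4000);
  v_rows t_rows;
begin
  -- the timestamp becomes part of the statement text itself, so each run is a new cursor
  execute immediate
    'select csv_row from CSVDataFile(InputData => CITIES, DateTime => ''' ||
    to_char(systimestamp, 'yyyymmddhh24missff6') || ''')'
    bulk collect into v_rows;

  dbms_output.put_line(v_rows.count || ' rows returned; file written by the macro.');
end;
/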
Re: SQL Macro using UTL_FILE [message #690065 is a reply to message #690063] Tue, 01 October 2024 17:10
Duane
Solomon Yakobson wrote on Tue, 01 October 2024 12:23
Works fine for me:


SY.
Well, darn.  I was playing around with different iterations so I must have done something where it worked and then I changed something and it didn't work.

I'm starting to wonder if Polymorphic Table Functions are the answer.  I sure would like to get something that works.

Seems simple enough.  Return the data from a result set as comma-delimited data and write that data to a file on the OS system.

I'll tinker around some more.  Anyone care to come up with a solution?
Re: SQL Macro using UTL_FILE [message #690075 is a reply to message #690065] Fri, 04 October 2024 11:59
Solomon Yakobson
Duane wrote on Tue, 01 October 2024 18:10

I'm starting to wonder if Polymorphic Table Functions is the answer.
Sounded like fun. Here it is:

create or replace
  package polymorphic_csv_pkg
    is
      function csv_data_and_file(
                                 p_tbl          in table,
                                 p_dir_name     in varchar2 := 'TEMP',
                                 p_file_name    in varchar2 := null,
                                 p_create_file  in varchar2 := 'Y',
                                 p_delimiter    in varchar2 := ',',
                                 p_quote        in varchar2 := '"',
                                 p_escape       in varchar2 := '\',
                                 p_show_csv_col in varchar2 := 'Y'
                                )
        return table pipelined
        row polymorphic
        using polymorphic_csv_pkg;
      function describe(
                        p_tbl          in out dbms_tf.table_t,
                        p_dir          in     varchar2 := 'TEMP',
                        p_file_name    in     varchar2 := null,
                        p_create_file  in     varchar2 := 'Y',
                        p_delimiter    in     varchar2 := ',',
                        p_quote        in     varchar2 := '"',
                        p_escape       in     varchar2 := '\',
                        p_show_csv_col in     varchar2 := 'Y'
                       ) 
        return dbms_tf.describe_t;
      procedure fetch_rows(
                           p_dir_name     in varchar2 := 'TEMP',
                           p_file_name    in varchar2 := null,
                           p_create_file  in varchar2 := 'Y',
                           p_delimiter    in varchar2 := ',',
                           p_quote        in varchar2 := '"',
                           p_escape       in varchar2 := '\',
                           p_show_csv_col in varchar2 := 'Y'
                          );
end polymorphic_csv_pkg;
/
create or replace
  package body polymorphic_csv_pkg
    is
      function describe(
                        p_tbl          in out dbms_tf.table_t,
                        p_dir          in     varchar2 := 'TEMP',
                        p_file_name    in     varchar2 := null,
                        p_create_file  in     varchar2 := 'Y',
                        p_delimiter    in     varchar2 := ',',
                        p_quote        in     varchar2 := '"',
                        p_escape       in     varchar2 := '\',
                        p_show_csv_col in     varchar2 := 'Y'
                       )
        return dbms_tf.describe_t
        is
        begin
            for i in 1 .. p_tbl.column.count loop
              continue when not dbms_tf.supported_type(p_tbl.column(i).description.type);
              p_tbl.column(i).for_read := true;
            end loop;
            if p_show_csv_col = 'Y'
              then
                return dbms_tf.describe_t(
                                          new_columns => dbms_tf.columns_new_t(
                                                                               1 => dbms_tf.column_metadata_t(
                                                                                                              name => 'csv_row',
                                                                                                              type => dbms_tf.type_varchar2
                                                                                                             )
                                                                              )
                                         );
              else
                return null;
            end if;
      end;
      procedure fetch_rows(
                           p_dir_name     in varchar2 := 'TEMP',
                           p_file_name    in varchar2 := null,
                           p_create_file  in varchar2 := 'Y',
                           p_delimiter    in varchar2 := ',',
                           p_quote        in varchar2 := '"',
                           p_escape       in varchar2 := '\',
                           p_show_csv_col in varchar2 := 'Y'
                          )
        is
            v_row_set    dbms_tf.row_set_t;
            v_new_col    dbms_tf.tab_varchar2_t;
            v_row_count  pls_integer;
            v_col_count  pls_integer;
            v_csv_row    clob;
            v_csv_file   utl_file.file_type;
            v_chunk_size pls_integer := 3000;
        begin
            dbms_tf.get_row_set(v_row_set,v_row_count,v_col_count);
            for v_row_num in 1..v_row_count loop
              v_csv_row := null;
              for v_col_num in 1..v_col_count loop
                case
                  when v_row_set(v_col_num).description.type = dbms_tf.type_char
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_char(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_char(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_varchar2
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_varchar2(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_varchar2(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_clob
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_clob(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_clob(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_binary_double
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_double(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_binary_float
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_float(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_number
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_number(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_date
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_date(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS');
                  when v_row_set(v_col_num).description.type = dbms_tf.type_timestamp
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_timestamp(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS.FF6');
                end case;
              end loop;
              v_new_col(v_row_num) := substr(v_csv_row,length(p_delimiter) + 1);
            end loop;
            if p_show_csv_col = 'Y'
              then
                dbms_tf.put_col(1,v_new_col);
            end if;
            if p_create_file = 'Y'
              then
                v_csv_file := utl_file.fopen(
                                             nvl(p_dir_name,'TEMP'),
                                             nvl(p_file_name,'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt'),
                                             'wb',
                                             max_linesize => 32767
                                            );
                for v_row_num in 1..v_row_count loop
                  for v_i in 1..ceil(length(v_new_col(v_row_num)) / v_chunk_size) loop
                    utl_file.put_raw(
                                     v_csv_file,
                                     utl_raw.cast_to_raw(substr(v_new_col(v_row_num),(v_i - 1) * v_chunk_size + 1,v_chunk_size)));
                    utl_file.fflush(v_csv_file);
                  end loop;
                  utl_file.put_raw(v_csv_file,utl_raw.cast_to_raw(chr(10)));
                  utl_file.fflush(v_csv_file);
                end loop;
                utl_file.fflush(v_csv_file);
                utl_file.fclose(v_csv_file);
            end if;
      end;
end polymorphic_csv_pkg;
/
Now:

SQL> select *
  2  from polymorphic_csv_pkg.csv_data_and_file(cities)
  3  /

        ID NAME                           CSV_ROW
---------- ------------------------------ --------------------------------------------------
         1 AMSTERDAM                      1,"AMSTERDAM"
         2 UTRECHT                        2,"UTRECHT"
         3 THE HAGUE                      3,"THE HAGUE"
         4 ROTTERDAM                      4,"ROTTERDAM"

SQL> host del c:\temp\cities.txt

SQL> select *
  2  from polymorphic_csv_pkg.csv_data_and_file(cities,p_quote => null,p_file_name => 'cities.txt')
  3  /

        ID NAME                           CSV_ROW
---------- ------------------------------ --------------------------------------------------
         1 AMSTERDAM                      1,AMSTERDAM
         2 UTRECHT                        2,UTRECHT
         3 THE HAGUE                      3,THE HAGUE
         4 ROTTERDAM                      4,ROTTERDAM

SQL> host type c:\temp\cities.txt
1,AMSTERDAM
2,UTRECHT
3,THE HAGUE
4,ROTTERDAM

SQL> host del c:\temp\cities.txt

SQL> select *
  2  from polymorphic_csv_pkg.csv_data_and_file(cities,p_quote => null,p_file_name => 'cities.txt',p_show_csv_col => 'N')
  3  /

        ID NAME
---------- ------------------------------
         1 AMSTERDAM
         2 UTRECHT
         3 THE HAGUE
         4 ROTTERDAM

SQL> host type c:\temp\cities.txt
1,AMSTERDAM
2,UTRECHT
3,THE HAGUE
4,ROTTERDAM

SQL> host del c:\temp\emp.txt

SQL> select *
  2  from polymorphic_csv_pkg.csv_data_and_file(emp,p_quote => null,p_file_name => 'emp.txt',p_show_csv_col => 'N')
  3  /

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450          0         10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL> host type c:\temp\emp.txt
7369,SMITH,CLERK,7902,1980-12-17T00:00:00,800,,20
7499,ALLEN,SALESMAN,7698,1981-02-20T00:00:00,1600,300,30
7521,WARD,SALESMAN,7698,1981-02-22T00:00:00,1250,500,30
7566,JONES,MANAGER,7839,1981-04-02T00:00:00,2975,,20
7654,MARTIN,SALESMAN,7698,1981-09-28T00:00:00,1250,1400,30
7698,BLAKE,MANAGER,7839,1981-05-01T00:00:00,2850,,30
7782,CLARK,MANAGER,7839,1981-06-09T00:00:00,2450,0,10
7788,SCOTT,ANALYST,7566,1987-04-19T00:00:00,3000,,20
7839,KING,PRESIDENT,,1981-11-17T00:00:00,5000,,10
7844,TURNER,SALESMAN,7698,1981-09-08T00:00:00,1500,0,30
7876,ADAMS,CLERK,7788,1987-05-23T00:00:00,1100,,20
7900,JAMES,CLERK,7698,1981-12-03T00:00:00,950,,30
7902,FORD,ANALYST,7566,1981-12-03T00:00:00,3000,,20
7934,MILLER,CLERK,7782,1982-01-23T00:00:00,1300,,10

SQL>
SY.


Re: SQL Macro using UTL_FILE [message #690076 is a reply to message #690075] Fri, 04 October 2024 19:36
Duane
That is awesome!!!  Thank you so much.  I've been going through the Polymorphic Table Functions examples I have found online and the examples Oracle has included in its documentation.

You solved the puzzle.  Thanks again.
Re: SQL Macro using UTL_FILE [message #690091 is a reply to message #690075] Mon, 07 October 2024 12:00
Duane
Solomon Yakobson wrote on Fri, 04 October 2024 16:59

Sounded like fun. Here it is:

Solomon,

I have something very odd going on with your solution.  Just wondering if you could see if the same situation exists in your environment.

I'm using TOAD, so keep that in mind.


I compiled your package into my schema and created a query that fetched the first 100,000 rows.  The select statement runs and produces the data.


Problem:

Not all of the records are written to the file.  Maybe a third of the data is written to the file.  Now the odd part.  Within TOAD, if you click to go to the bottom of the result set, the OS system is updated with a bunch of files ("csv_file_").  I'm using the default file name so the files are "csv_file_" with a Date/Time stamp.  It appears that the rest of the 100,000 rows are being written to the OS system.

I don't understand why all the rows (100,000) are not being written to the OS system.

If you run a query with 100,000 rows, are all the rows written to the OS system file?  I would be curious to know if you have the same problem or not.
Re: SQL Macro using UTL_FILE [message #690092 is a reply to message #690091] Mon, 07 October 2024 14:01
Duane
So it appears that the ROW_SET is capped at 1,024 rows.  Any reason for that?  Any workaround?


procedure fetch_rows ......

  env dbms_tf.env_t := dbms_tf.get_env();
        
  begin
    dbms_tf.get_row_set(v_row_set,v_row_count,v_col_count);

    dbms_output.put_line('Row Count: '||v_row_count);
    dbms_output.put_line('Column Count: '||v_col_count);
    dbms_output.put_line('ENV Row Count: '||env.row_count);


Row Count: 1024
Column Count: 38
ENV Row Count: 1024

Re: SQL Macro using UTL_FILE [message #690093 is a reply to message #690091] Mon, 07 October 2024 16:15
Solomon Yakobson
Interesting. It appears dbms_tf fetches rows in batches. If we add

dbms_output.put_line('Fetched ' || v_row_count || ' rows.');
to the fetch_rows procedure and issue:

SQL> with t as (select table_name from polymorphic_csv_pkg.csv_data_and_file(dba_tables)) select count(*) from t;

  COUNT(*)
----------
      2506

Fetched 1024 rows.
Fetched 1024 rows.
Fetched 458 rows.
SQL>
As you can see, it fetches 1024 rows a pop. As a result we ended up with 3 files, because we used the default file name, which is SYSDATE based. It would be worse if we provided a file name explicitly - each call to fetch_rows would overwrite the file created by the previous call. So at first I thought we needed to change the package to open the file for write the first time fetch_rows is called and for append on subsequent calls. But the issue is there is no way to determine that this is the first call to fetch_rows - at least I'm not aware of one. The best I can come up with is enforcing a file name when calling the package. This solves all the issues if the file doesn't exist, but it will append to the file if it does exist. Here is the modified code:

create or replace
  package polymorphic_csv_pkg
    is
      function csv_data_and_file(
                                 p_tbl          in table,
                                 p_dir_name     in varchar2 := 'TEMP',
                                 p_file_name    in varchar2 := null,
                                 p_create_file  in varchar2 := 'Y',
                                 p_delimiter    in varchar2 := ',',
                                 p_quote        in varchar2 := '"',
                                 p_escape       in varchar2 := '\',
                                 p_show_csv_col in varchar2 := 'Y'
                                )
        return table pipelined
        row polymorphic
        using polymorphic_csv_pkg;
      function describe(
                        p_tbl          in out dbms_tf.table_t,
                        p_dir_name     in     varchar2 := 'TEMP',
                        p_file_name    in     varchar2 := null,
                        p_create_file  in     varchar2 := 'Y',
                        p_delimiter    in     varchar2 := ',',
                        p_quote        in     varchar2 := '"',
                        p_escape       in     varchar2 := '\',
                        p_show_csv_col in     varchar2 := 'Y'
                       ) 
        return dbms_tf.describe_t;
      procedure fetch_rows(
                           p_dir_name     in varchar2 := 'TEMP',
                           p_file_name    in varchar2 := null,
                           p_create_file  in varchar2 := 'Y',
                           p_delimiter    in varchar2 := ',',
                           p_quote        in varchar2 := '"',
                           p_escape       in varchar2 := '\',
                           p_show_csv_col in varchar2 := 'Y'
                          );
end polymorphic_csv_pkg;
/
create or replace
  package body polymorphic_csv_pkg
    is
      function describe(
                        p_tbl          in out dbms_tf.table_t,
                        p_dir_name     in     varchar2 := 'TEMP',
                        p_file_name    in     varchar2 := null,
                        p_create_file  in     varchar2 := 'Y',
                        p_delimiter    in     varchar2 := ',',
                        p_quote        in     varchar2 := '"',
                        p_escape       in     varchar2 := '\',
                        p_show_csv_col in     varchar2 := 'Y'
                       )
        return dbms_tf.describe_t
        is
        begin
            for i in 1 .. p_tbl.column.count loop
              continue when not dbms_tf.supported_type(p_tbl.column(i).description.type);
              p_tbl.column(i).for_read := true;
            end loop;
            if p_show_csv_col = 'Y'
              then
                return dbms_tf.describe_t(
                                          new_columns => dbms_tf.columns_new_t(
                                                                               1 => dbms_tf.column_metadata_t(
                                                                                                              name => 'csv_row',
                                                                                                              type => dbms_tf.type_varchar2
                                                                                                             )
                                                                              )
                                         );
              else
                return null;
            end if;
      end;
      procedure fetch_rows(
                           p_dir_name     in varchar2 := 'TEMP',
                           p_file_name    in varchar2 := null,
                           p_create_file  in varchar2 := 'Y',
                           p_delimiter    in varchar2 := ',',
                           p_quote        in varchar2 := '"',
                           p_escape       in varchar2 := '\',
                           p_show_csv_col in varchar2 := 'Y'
                          )
        is
            v_row_set    dbms_tf.row_set_t;
            v_new_col    dbms_tf.tab_varchar2_t;
            v_row_count  pls_integer;
            v_col_count  pls_integer;
            v_csv_row    clob;
            v_csv_file   utl_file.file_type;
            v_chunk_size pls_integer := 3000;
        begin
            dbms_tf.get_row_set(v_row_set,v_row_count,v_col_count);
            for v_row_num in 1..v_row_count loop
              v_csv_row := null;
              for v_col_num in 1..v_col_count loop
                case
                  when v_row_set(v_col_num).description.type = dbms_tf.type_char
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_char(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_char(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_varchar2
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_varchar2(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_varchar2(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_clob
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_clob(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_clob(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_binary_double
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_double(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_binary_float
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_float(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_number
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_number(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_date
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_date(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS');
                  when v_row_set(v_col_num).description.type = dbms_tf.type_timestamp
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_timestamp(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS.FF6');
                end case;
              end loop;
              v_new_col(v_row_num) := substr(v_csv_row,length(p_delimiter) + 1);
            end loop;
            if p_show_csv_col = 'Y'
              then
                dbms_tf.put_col(1,v_new_col);
            end if;
            if p_create_file = 'Y'
              then
                if p_file_name is null
                  then raise_application_error(
                                               -20500,
                                               q'[Can't create file with null file name.]'
                                              );
                end if;
                v_csv_file := utl_file.fopen(
                                             nvl(p_dir_name,'TEMP'),
                                             p_file_name,
                                             'ab',
                                             max_linesize => 32767
                                            );
                for v_row_num in 1..v_row_count loop
                  for v_i in 1..ceil(length(v_new_col(v_row_num)) / v_chunk_size) loop
                    utl_file.put_raw(
                                     v_csv_file,
                                     utl_raw.cast_to_raw(substr(v_new_col(v_row_num),(v_i - 1) * v_chunk_size + 1,v_chunk_size)));
                    utl_file.fflush(v_csv_file);
                  end loop;
                  utl_file.put_raw(v_csv_file,utl_raw.cast_to_raw(chr(10)));
                  utl_file.fflush(v_csv_file);
                end loop;
                utl_file.fflush(v_csv_file);
                utl_file.fclose(v_csv_file);
            end if;
      end;
end polymorphic_csv_pkg;
/
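For completeness, one possible way to detect the first fetch_rows call might be the DBMS_TF cross-invocation store ("xstore"), which keeps key/value state for the duration of a single query execution. A sketch only, untested, of what could replace the utl_file.fopen call inside fetch_rows:

            -- hypothetical fragment: remember in the xstore whether an earlier batch
            -- of this execution has already written to the file
            declare
                v_started boolean := false;
            begin
                dbms_tf.xstore_get('csv_file_started', v_started);  -- stays FALSE on the first batch
                if v_started
                  then
                    v_csv_file := utl_file.fopen(nvl(p_dir_name,'TEMP'), p_file_name, 'ab', max_linesize => 32767);
                  else
                    v_csv_file := utl_file.fopen(nvl(p_dir_name,'TEMP'), p_file_name, 'wb', max_linesize => 32767);
                    dbms_tf.xstore_set('csv_file_started', true);
                end if;
            end;

This would still need an explicit file name (or a name fixed once, outside fetch_rows), since the SYSDATE-based default can change between batches.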
SY.

Re: SQL Macro using UTL_FILE [message #690094 is a reply to message #690093] Mon, 07 October 2024 16:48
Duane
I came back to report what I had found and a possible solution.  I see that you beat me to it.

Within the "utl_file.fopen" statement I changed the "wb" to be "ab" and added "csv_file_default.txt" as a default file name if the file name is null.  I see you are doing the same thing but with a file name check.  I think either solution would work.

This is actually some pretty cool stuff.  I also modified the code to not display any of the columns except for the "csv_row" column.  That's all that is really needed.

Thanks again for your time.
Re: SQL Macro using UTL_FILE [message #690095 is a reply to message #690094] Mon, 07 October 2024 17:05
Solomon Yakobson
It would not work 100%. As I mentioned, it will append data to an existing file. So, for example, you call csv_data_and_file(table1) and it creates csv_file_default.txt with table1 data. Then you call csv_data_and_file(table2) and it appends table2 data to the already existing csv_file_default.txt that holds table1 data.

SY.


Re: SQL Macro using UTL_FILE [message #690096 is a reply to message #690095] Mon, 07 October 2024 17:27
Duane
You're right.  When I first did this I fetched 2 million rows and it appeared to write all that data to one explicitly named file.  I did another test with a smaller result set, and it appears to be doing what you described, appending the fetched data over and over into the file.  Bummer.
Re: SQL Macro using UTL_FILE [message #690097 is a reply to message #690096] Mon, 07 October 2024 20:04
Solomon Yakobson
We can resolve it by using a UDT (user-defined type) to define the CSV file. It has a user-defined constructor that deletes the file if it exists, and the default constructor keeps the file (it simply doesn't attempt to delete it):

create or replace
  type file_obj_type
    as object(
              dir_name              varchar2(4000),
              file_name             varchar2(4000),
              keep_file_exists_flag varchar2(1),
              constructor function file_obj_type(
                                                 self                    in out nocopy file_obj_type,
                                                 p_dir_name              in            varchar2,
                                                 p_file_name             in            varchar2
                                                )
                return self as result
             )
    final;
/
create or replace
  type body file_obj_type
    as
      constructor function file_obj_type(
                                         self                    in out nocopy file_obj_type,
                                         p_dir_name              in            varchar2,
                                         p_file_name             in            varchar2
                                        )
        return self as result
        is
            e_nonexistent_file_or_path exception;
            pragma exception_init(e_nonexistent_file_or_path,-29283);
        begin
            if p_dir_name is null
              then
                raise_application_error(
                                        -20500,
                                        'Directory name must not be null.'
                                       );
              else
                self.dir_name := p_dir_name;
            end if;
            if p_file_name is null
              then
                raise_application_error(
                                        -20501,
                                        'File name must not be null.'
                                       );
              else
                self.file_name := p_file_name;
            end if;
            self.keep_file_exists_flag := 'n';
            utl_file.fremove(dir_name,file_name);
            return;
          exception
            when e_nonexistent_file_or_path
              then
                return;
    end;
end;
/
create or replace
  package polymorphic_csv_pkg
    is
      function csv_data_and_file(
                                 p_tbl          in table,
                                 p_file         in file_obj_type := file_obj_type(
                                                                                  'TEMP',
                                                                                  'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt'
                                                                                 ),
                                 p_create_file  in varchar2 := 'Y',
                                 p_delimiter    in varchar2 := ',',
                                 p_quote        in varchar2 := '"',
                                 p_escape       in varchar2 := '\',
                                 p_show_csv_col in varchar2 := 'Y'
                                )
        return table pipelined
        row polymorphic
        using polymorphic_csv_pkg;
      function describe(
                        p_tbl          in out dbms_tf.table_t,
                        p_file         in file_obj_type := file_obj_type(
                                                                         'TEMP',
                                                                         'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt'
                                                                        ),
                        p_create_file  in     varchar2 := 'Y',
                        p_delimiter    in     varchar2 := ',',
                        p_quote        in     varchar2 := '"',
                        p_escape       in     varchar2 := '\',
                        p_show_csv_col in     varchar2 := 'Y'
                       ) 
        return dbms_tf.describe_t;
      procedure fetch_rows(
                           p_file         in file_obj_type := file_obj_type(
                                                                            'TEMP',
                                                                            'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt'
                                                                           ),
                           p_create_file  in varchar2 := 'Y',
                           p_delimiter    in varchar2 := ',',
                           p_quote        in varchar2 := '"',
                           p_escape       in varchar2 := '\',
                           p_show_csv_col in varchar2 := 'Y'
                          );
end polymorphic_csv_pkg;
/
create or replace
  package body polymorphic_csv_pkg
    is
      function describe(
                        p_tbl          in out dbms_tf.table_t,
                        p_file         in file_obj_type := file_obj_type(
                                                                         'TEMP',
                                                                         'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt'
                                                                        ),
                        p_create_file  in     varchar2 := 'Y',
                        p_delimiter    in     varchar2 := ',',
                        p_quote        in     varchar2 := '"',
                        p_escape       in     varchar2 := '\',
                        p_show_csv_col in     varchar2 := 'Y'
                       )
        return dbms_tf.describe_t
        is
        begin
            for i in 1 .. p_tbl.column.count loop
              continue when not dbms_tf.supported_type(p_tbl.column(i).description.type);
              p_tbl.column(i).for_read := true;
            end loop;
            if p_show_csv_col = 'Y'
              then
                return dbms_tf.describe_t(
                                          new_columns => dbms_tf.columns_new_t(
                                                                               1 => dbms_tf.column_metadata_t(
                                                                                                              name => 'csv_row',
                                                                                                              type => dbms_tf.type_varchar2
                                                                                                             )
                                                                              )
                                         );
              else
                return null;
            end if;
      end;
      procedure fetch_rows(
                           p_file         in file_obj_type := file_obj_type(
                                                                            'TEMP',
                                                                            'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt'
                                                                           ),
                           p_create_file  in varchar2 := 'Y',
                           p_delimiter    in varchar2 := ',',
                           p_quote        in varchar2 := '"',
                           p_escape       in varchar2 := '\',
                           p_show_csv_col in varchar2 := 'Y'
                          )
        is
            v_row_set    dbms_tf.row_set_t;
            v_new_col    dbms_tf.tab_varchar2_t;
            v_row_count  pls_integer;
            v_col_count  pls_integer;
            v_csv_row    clob;
            v_csv_file   utl_file.file_type;
            v_chunk_size pls_integer := 3000;
        begin
            dbms_tf.get_row_set(v_row_set,v_row_count,v_col_count);
            for v_row_num in 1..v_row_count loop
              v_csv_row := null;
              for v_col_num in 1..v_col_count loop
                case
                  when v_row_set(v_col_num).description.type = dbms_tf.type_char
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_char(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_char(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_varchar2
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_varchar2(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_varchar2(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_clob
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_clob(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_clob(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_binary_double
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_double(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_binary_float
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_float(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_number
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_number(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_date
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_date(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS');
                  when v_row_set(v_col_num).description.type = dbms_tf.type_timestamp
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_timestamp(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS.FF6');
                end case;
              end loop;
              v_new_col(v_row_num) := substr(v_csv_row,length(p_delimiter) + 1);
            end loop;
            if p_show_csv_col = 'Y'
              then
                dbms_tf.put_col(1,v_new_col);
            end if;
            if p_create_file = 'Y'
              then
                v_csv_file := utl_file.fopen(
                                             p_file.dir_name,
                                             p_file.file_name,
                                             'ab',
                                             max_linesize => 32767
                                            );
                for v_row_num in 1..v_row_count loop
                  for v_i in 1..ceil(length(v_new_col(v_row_num)) / v_chunk_size) loop
                    utl_file.put_raw(
                                     v_csv_file,
                                     utl_raw.cast_to_raw(substr(v_new_col(v_row_num),(v_i - 1) * v_chunk_size + 1,v_chunk_size)));
                    utl_file.fflush(v_csv_file);
                  end loop;
                  utl_file.put_raw(v_csv_file,utl_raw.cast_to_raw(chr(10)));
                  utl_file.fflush(v_csv_file);
                end loop;
                utl_file.fflush(v_csv_file);
                utl_file.fclose(v_csv_file);
            end if;
      end;
end polymorphic_csv_pkg;
/
Now:

SQL> host del c:\temp\csv_file*.txt

SQL> select *
  2  from polymorphic_csv_pkg.csv_data_and_file(cities)
  3  /

        ID NAME                           CSV_ROW
---------- ------------------------------ ------------------------------
         1 AMSTERDAM                      1,"AMSTERDAM"
         2 UTRECHT                        2,"UTRECHT"
         3 THE HAGUE                      3,"THE HAGUE"
         4 ROTTERDAM                      4,"ROTTERDAM"

SQL> host dir c:\temp\csv_file*.txt
 Volume in drive C is OSDisk
 Volume Serial Number is 08EC-009C

 Directory of c:\temp

10/07/2024  08:55 PM                54 csv_file_20241007205558.txt
               1 File(s)             54 bytes
               0 Dir(s)  21,911,334,912 bytes free

SQL> exec dbms_session.sleep(1)

PL/SQL procedure successfully completed.

SQL> select *
  2  from polymorphic_csv_pkg.csv_data_and_file(cities)
  3  /

        ID NAME                           CSV_ROW
---------- ------------------------------ ------------------------------
         1 AMSTERDAM                      1,"AMSTERDAM"
         2 UTRECHT                        2,"UTRECHT"
         3 THE HAGUE                      3,"THE HAGUE"
         4 ROTTERDAM                      4,"ROTTERDAM"

SQL> host dir c:\temp\csv_file*.txt
 Volume in drive C is OSDisk
 Volume Serial Number is 08EC-009C

 Directory of c:\temp

10/07/2024  08:55 PM                54 csv_file_20241007205558.txt
10/07/2024  08:55 PM                54 csv_file_20241007205559.txt
               2 File(s)            108 bytes
               0 Dir(s)  21,911,257,088 bytes free

SQL> host del c:\temp\cities.txt
Could Not Find c:\temp\cities.txt

SQL> select *
  2  from polymorphic_csv_pkg.csv_data_and_file(cities,p_file => file_obj_type('TEMP','cities.txt'))
  3  /

        ID NAME                           CSV_ROW
---------- ------------------------------ ------------------------------
         1 AMSTERDAM                      1,"AMSTERDAM"
         2 UTRECHT                        2,"UTRECHT"
         3 THE HAGUE                      3,"THE HAGUE"
         4 ROTTERDAM                      4,"ROTTERDAM"

SQL> host type c:\temp\cities.txt
1,"AMSTERDAM"
2,"UTRECHT"
3,"THE HAGUE"
4,"ROTTERDAM"

SQL> select *
  2  from polymorphic_csv_pkg.csv_data_and_file(dept,p_file => file_obj_type('TEMP','cities.txt'))
  3  /

    DEPTNO DNAME          LOC           CSV_ROW
---------- -------------- ------------- ------------------------------
        10 ACCOUNTING     NEW YORK      10,"ACCOUNTING","NEW YORK"
        20 RESEARCH       DALLAS        20,"RESEARCH","DALLAS"
        30 SALES          CHICAGO       30,"SALES","CHICAGO"
        40 OPERATIONS     BOSTON        40,"OPERATIONS","BOSTON"

SQL> host type c:\temp\cities.txt
10,"ACCOUNTING","NEW YORK"
20,"RESEARCH","DALLAS"
30,"SALES","CHICAGO"
40,"OPERATIONS","BOSTON"

SQL> select *
  2  from polymorphic_csv_pkg.csv_data_and_file(cities,p_file => file_obj_type('TEMP','cities.txt','y'))
  3  /

        ID NAME                           CSV_ROW
---------- ------------------------------ ------------------------------
         1 AMSTERDAM                      1,"AMSTERDAM"
         2 UTRECHT                        2,"UTRECHT"
         3 THE HAGUE                      3,"THE HAGUE"
         4 ROTTERDAM                      4,"ROTTERDAM"

SQL> host type c:\temp\cities.txt
10,"ACCOUNTING","NEW YORK"
20,"RESEARCH","DALLAS"
30,"SALES","CHICAGO"
40,"OPERATIONS","BOSTON"
1,"AMSTERDAM"
2,"UTRECHT"
3,"THE HAGUE"
4,"ROTTERDAM"

SQL>
As you can see, you have full control and can either overwrite the file or append to it.

SY.
Re: SQL Macro using UTL_FILE [message #690098 is a reply to message #690053] Mon, 07 October 2024 20:38 Go to previous messageGo to next message
Duane
Messages: 581
Registered: December 2002
Senior Member
That's slick.  Thank you.
Re: SQL Macro using UTL_FILE [message #690099 is a reply to message #690098] Tue, 08 October 2024 06:33 Go to previous messageGo to next message
Solomon Yakobson
Messages: 3303
Registered: January 2010
Location: Connecticut, USA
Senior Member
Actually, it is better to overload the system-defined constructor:

create or replace
  type file_obj_type
    as object(
              dir_name              varchar2(4000),
              file_name             varchar2(4000),
              keep_file_exists_flag varchar2(1),
              -- user-defined constructor
              constructor function file_obj_type(
                                                 self                  in out nocopy file_obj_type,
                                                 dir_name              in            varchar2,
                                                 file_name             in            varchar2
                                                )
                return self as result,
              -- user-defined constructor overloading system-defined (attribute-value) constructor
              constructor function file_obj_type(
                                                 self                  in out nocopy file_obj_type,
                                                 dir_name              in            varchar2,
                                                 file_name             in            varchar2,
                                                 keep_file_exists_flag in            varchar2
                                                )
                return self as result
             )
    final;
/
create or replace
  type body file_obj_type
    as
      -- user-defined constructor
      constructor function file_obj_type(
                                         self                  in out nocopy file_obj_type,
                                         dir_name              in            varchar2,
                                         file_name             in            varchar2
                                        )
        return self as result
        is
            e_nonexistent_file_or_path exception;
            pragma exception_init(e_nonexistent_file_or_path,-29283);
        begin
            if dir_name is null
              then
                raise_application_error(
                                        -20500,
                                        'Directory name must not be null.'
                                       );
              else
                self.dir_name := dir_name;
            end if;
            if file_name is null
              then
                raise_application_error(
                                        -20501,
                                        'File name must not be null.'
                                       );
              else
                self.file_name := file_name;
            end if;
            self.keep_file_exists_flag := 'n';
            utl_file.fremove(dir_name,file_name);
            return;
          exception
            when e_nonexistent_file_or_path
              then
                return;
    end;
      -- user-defined constructor overloading system-defined (attribute-value) constructor
      constructor function file_obj_type(
                                         self                  in out nocopy file_obj_type,
                                         dir_name              in            varchar2,
                                         file_name             in            varchar2,
                                         keep_file_exists_flag in            varchar2
                                        )
        return self as result
        is
            e_nonexistent_file_or_path exception;
            pragma exception_init(e_nonexistent_file_or_path,-29283);
        begin
            if dir_name is null
              then
                raise_application_error(
                                        -20500,
                                        'Directory name must not be null.'
                                       );
              else
                self.dir_name := dir_name;
            end if;
            if file_name is null
              then
                raise_application_error(
                                        -20501,
                                        'File name must not be null.'
                                       );
              else
                self.file_name := file_name;
            end if;
            self.keep_file_exists_flag := keep_file_exists_flag;
            if nvl(lower(keep_file_exists_flag),'n') = 'n'
              then
                utl_file.fremove(dir_name,file_name);
            end if;
            return;
          exception
            when e_nonexistent_file_or_path
              then
                return;
    end;
end;
/

Now we can either specify explicitly that the file should be deleted if it already exists:

file_obj_type('TEMP','csv_file_20241007205559.txt','n')

or keep it:

file_obj_type('TEMP','csv_file_20241007205559.txt','y')
file_obj_type('TEMP','csv_file_20241007205559.txt','anything-but-n-or-N')

Or we can request the delete-if-it-already-exists behavior implicitly, which saves a little bit of typing - I assume deleting the file if it already exists will be the case most of the time:


file_obj_type('TEMP','csv_file_20241007205559.txt')

SY.

Re: SQL Macro using UTL_FILE [message #690100 is a reply to message #690099] Tue, 08 October 2024 08:27 Go to previous messageGo to next message
Duane
Messages: 581
Registered: December 2002
Senior Member
That would be correct: delete the file.  If the statement is run multiple times then the file should contain data from only the most recent run.  I would think deleting the file and starting over each time would be preferred.
Re: SQL Macro using UTL_FILE [message #690101 is a reply to message #690100] Tue, 08 October 2024 09:35 Go to previous messageGo to next message
Duane
Messages: 581
Registered: December 2002
Senior Member
It would appear that the last solution doesn't resolve the 1,024 row limit.  I ran this a few different times with different values and the file only contains 1,024 rows.

Any way to work around this 1,024 row limit and write all the records to the file?
Re: SQL Macro using UTL_FILE [message #690102 is a reply to message #690101] Tue, 08 October 2024 17:15 Go to previous messageGo to next message
Solomon Yakobson
Messages: 3303
Registered: January 2010
Location: Connecticut, USA
Senior Member
Oops, you're right. It appears fetch_rows calls the file_obj_type constructor each time, so each call deletes the file created by the previous call. I thought the type constructor would be called only once.

So it seems "before type" solution where we have to delete file if it exists and only then issue select to avoid appending data to existing file is best I can come up with...

SY.
Re: SQL Macro using UTL_FILE [message #690103 is a reply to message #690102] Tue, 08 October 2024 19:15 Go to previous messageGo to next message
Duane
Messages: 581
Registered: December 2002
Senior Member
Just thinking out loud.  What about some global variable that is set the first time fetch_rows is called, which creates the file?  Of course, I'm not sure how you would close the file when all the records have been read.  It would seem you would want to close the file on the last iteration, not open/close the file each time fetch_rows is called.
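A minimal sketch of that idea, using package state to remember that the file is already open (hypothetical names; how to close it at the end is still the open question):

create or replace package csv_state_idea is              -- hypothetical illustration only
  procedure fetch_rows;
end;
/
create or replace package body csv_state_idea is
  g_csv_file    utl_file.file_type;
  g_file_opened boolean := false;                         -- package state persists across calls
  procedure fetch_rows is
  begin
    if not g_file_opened then
      -- first invocation: create the file once
      g_csv_file    := utl_file.fopen('TEMP','cities.txt','wb',max_linesize => 32767);
      g_file_opened := true;
    end if;
    -- ...write the current rowset here and leave the file open for the next call...
  end;
end;
/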
Re: SQL Macro using UTL_FILE [message #690105 is a reply to message #690102] Wed, 09 October 2024 05:55 Go to previous messageGo to next message
Solomon Yakobson
Messages: 3303
Registered: January 2010
Location: Connecticut, USA
Senior Member
OK. RTFM is what I have to tell myself. From DBMS_TF docs:

The PTF client interface can have up to four subprograms as follows:

Quote:
DESCRIBE function (Required)

OPEN procedure (Optional)

FETCH_ROWS procedure (Optional)

CLOSE procedure (Optional)

OPEN procedure is generally invoked before calling the FETCH_ROWS procedure.
The FETCH_ROWS procedure produces an output rowset that it sends to the database. The number of invocations of this function and the size of each rowset are data dependent and determined during query execution.
The CLOSE procedure is called at the end of the PTF execution.

Now that I did RTFM, it is all logical and simple:


create or replace
  package polymorphic_csv_pkg
    is
      function csv_data_and_file(
                                 p_tbl          in table,
                                 p_dir_name     in varchar2 := 'TEMP',
                                 p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                                 p_create_file  in varchar2 := 'Y',
                                 p_delimiter    in varchar2 := ',',
                                 p_quote        in varchar2 := '"',
                                 p_escape       in varchar2 := '\',
                                 p_show_csv_col in varchar2 := 'Y',
                                 p_append_flag  in varchar2 := 'N'
                                )
        return table pipelined
        row polymorphic
        using polymorphic_csv_pkg;
      function describe(
                        p_tbl          in out dbms_tf.table_t,
                        p_dir_name     in     varchar2 := 'TEMP',
                        p_file_name    in     varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                        p_create_file  in     varchar2 := 'Y',
                        p_delimiter    in     varchar2 := ',',
                        p_quote        in     varchar2 := '"',
                        p_escape       in     varchar2 := '\',
                        p_show_csv_col in     varchar2 := 'Y',
                        p_append_flag  in     varchar2 := 'N'
                       ) 
        return dbms_tf.describe_t;
      procedure open(
                     p_create_file  in varchar2 := 'Y',
                     p_dir_name     in varchar2 := 'TEMP',
                     p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                     p_append_flag  in varchar2 := 'N'
                    );
      procedure fetch_rows(
                           p_dir_name     in varchar2 := 'TEMP',
                           p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                           p_create_file  in varchar2 := 'Y',
                           p_delimiter    in varchar2 := ',',
                           p_quote        in varchar2 := '"',
                           p_escape       in varchar2 := '\',
                           p_show_csv_col in varchar2 := 'Y',
                           p_append_flag  in varchar2 := 'N'
                          );
      procedure close;
end polymorphic_csv_pkg;
/
create or replace
  package body polymorphic_csv_pkg
    is
      g_csv_file   utl_file.file_type;
      function describe(
                        p_tbl          in out dbms_tf.table_t,
                        p_dir_name     in     varchar2 := 'TEMP',
                        p_file_name    in     varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                        p_create_file  in     varchar2 := 'Y',
                        p_delimiter    in     varchar2 := ',',
                        p_quote        in     varchar2 := '"',
                        p_escape       in     varchar2 := '\',
                        p_show_csv_col in     varchar2 := 'Y',
                        p_append_flag  in     varchar2 := 'N'
                       )
        return dbms_tf.describe_t
        is
        begin
            for i in 1 .. p_tbl.column.count loop
              continue when not dbms_tf.supported_type(p_tbl.column(i).description.type);
              p_tbl.column(i).for_read := true;
            end loop;
            if p_show_csv_col = 'Y'
              then
                return dbms_tf.describe_t(
                                          new_columns => dbms_tf.columns_new_t(
                                                                               1 => dbms_tf.column_metadata_t(
                                                                                                              name => 'csv_row',
                                                                                                              type => dbms_tf.type_varchar2
                                                                                                             )
                                                                              )
                                         );
              else
                return null;
            end if;
      end;
      procedure open(
                     p_create_file  in varchar2 := 'Y',
                     p_dir_name     in varchar2 := 'TEMP',
                     p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                     p_append_flag  in varchar2 := 'N'
                    )
        is
        begin
            if p_create_file = 'Y'
              then
                if p_dir_name is null
                  then
                    raise_application_error(
                                            -20500,
                                            'Directory name must not be null.'
                                           );
                end if;
                if p_file_name is null
                  then
                    raise_application_error(
                                            -20501,
                                            'File name must not be null.'
                                           );
                end if;
                g_csv_file := utl_file.fopen(
                                             p_dir_name,
                                             p_file_name,
                                             case lower(p_append_flag) when 'y' then 'ab' else 'wb' end,
                                             max_linesize => 32767
                                            );
            end if;
      end;
      procedure fetch_rows(
                           p_dir_name     in varchar2 := 'TEMP',
                           p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                           p_create_file  in varchar2 := 'Y',
                           p_delimiter    in varchar2 := ',',
                           p_quote        in varchar2 := '"',
                           p_escape       in varchar2 := '\',
                           p_show_csv_col in varchar2 := 'Y',
                           p_append_flag  in varchar2 := 'N'
                          )
        is
            v_row_set    dbms_tf.row_set_t;
            v_new_col    dbms_tf.tab_varchar2_t;
            v_row_count  pls_integer;
            v_col_count  pls_integer;
            v_csv_row    clob;
            v_chunk_size pls_integer := 3000;
        begin
            dbms_tf.get_row_set(v_row_set,v_row_count,v_col_count);
            for v_row_num in 1..v_row_count loop
              v_csv_row := null;
              for v_col_num in 1..v_col_count loop
                case
                  when v_row_set(v_col_num).description.type = dbms_tf.type_char
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_char(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_char(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_varchar2
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_varchar2(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_varchar2(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_clob
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_clob(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_clob(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_binary_double
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_double(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_binary_float
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_float(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_number
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_number(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_date
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_date(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS');
                  when v_row_set(v_col_num).description.type = dbms_tf.type_timestamp
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_timestamp(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS.FF6');
                end case;
              end loop;
              v_new_col(v_row_num) := substr(v_csv_row,length(p_delimiter) + 1);
            end loop;
            if p_show_csv_col = 'Y'
              then
                dbms_tf.put_col(1,v_new_col);
            end if;
            if p_create_file = 'Y'
              then
                for v_row_num in 1..v_row_count loop
                  for v_i in 1..ceil(length(v_new_col(v_row_num)) / v_chunk_size) loop
                    utl_file.put_raw(
                                     g_csv_file,
                                     utl_raw.cast_to_raw(substr(v_new_col(v_row_num),(v_i - 1) * v_chunk_size + 1,v_chunk_size)));
                    utl_file.fflush(g_csv_file);
                  end loop;
                  utl_file.put_raw(g_csv_file,utl_raw.cast_to_raw(chr(10)));
                  utl_file.fflush(g_csv_file);
                end loop;
                utl_file.fflush(g_csv_file);
            end if;
      end;
      procedure close
        is
        begin
                utl_file.fclose(g_csv_file);
      end;
end polymorphic_csv_pkg;
/

SY.
Re: SQL Macro using UTL_FILE [message #690108 is a reply to message #690105] Wed, 09 October 2024 09:09 Go to previous messageGo to next message
Duane
Messages: 581
Registered: December 2002
Senior Member
That's awesome!!!  I figured you were smart enough to come up with a solution.

By the way, I don't see that "p_append_flag" is ever assigned anything except "N".  The file still contains 1,024 rows.  I'm going to tinker around with it to see if I can get this solution to write all the rows to the file.
Re: SQL Macro using UTL_FILE [message #690110 is a reply to message #690108] Wed, 09 October 2024 09:26 Go to previous messageGo to next message
Solomon Yakobson
Messages: 3303
Registered: January 2010
Location: Connecticut, USA
Senior Member
Works fine for me:

SQL> create or replace
  2    package polymorphic_csv_pkg
  3      is
  4        function csv_data_and_file(
  5                                   p_tbl          in table,
  6                                   p_dir_name     in varchar2 := 'TEMP',
  7                                   p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
  8                                   p_create_file  in varchar2 := 'Y',
  9                                   p_delimiter    in varchar2 := ',',
 10                                   p_quote        in varchar2 := '"',
 11                                   p_escape       in varchar2 := '\',
 12                                   p_show_csv_col in varchar2 := 'Y',
 13                                   p_append_flag  in varchar2 := 'N'
 14                                  )
 15          return table pipelined
 16          row polymorphic
 17          using polymorphic_csv_pkg;
 18        function describe(
 19                          p_tbl          in out dbms_tf.table_t,
 20                          p_dir_name     in     varchar2 := 'TEMP',
 21                          p_file_name    in     varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
 22                          p_create_file  in     varchar2 := 'Y',
 23                          p_delimiter    in     varchar2 := ',',
 24                          p_quote        in     varchar2 := '"',
 25                          p_escape       in     varchar2 := '\',
 26                          p_show_csv_col in     varchar2 := 'Y',
 27                          p_append_flag  in     varchar2 := 'N'
 28                         )
 29          return dbms_tf.describe_t;
 30        procedure open(
 31                       p_create_file  in varchar2 := 'Y',
 32                       p_dir_name     in varchar2 := 'TEMP',
 33                       p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
 34                       p_append_flag  in varchar2 := 'N'
 35                      );
 36        procedure fetch_rows(
 37                             p_dir_name     in varchar2 := 'TEMP',
 38                             p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
 39                             p_create_file  in varchar2 := 'Y',
 40                             p_delimiter    in varchar2 := ',',
 41                             p_quote        in varchar2 := '"',
 42                             p_escape       in varchar2 := '\',
 43                             p_show_csv_col in varchar2 := 'Y',
 44                             p_append_flag  in varchar2 := 'N'
 45                            );
 46        procedure close;
 47  end polymorphic_csv_pkg;
 48  /

Package created.

SQL> create or replace
  2    package body polymorphic_csv_pkg
  3      is
  4        g_csv_file   utl_file.file_type;
  5        function describe(
  6                          p_tbl          in out dbms_tf.table_t,
  7                          p_dir_name     in     varchar2 := 'TEMP',
  8                          p_file_name    in     varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
  9                          p_create_file  in     varchar2 := 'Y',
 10                          p_delimiter    in     varchar2 := ',',
 11                          p_quote        in     varchar2 := '"',
 12                          p_escape       in     varchar2 := '\',
 13                          p_show_csv_col in     varchar2 := 'Y',
 14                          p_append_flag  in     varchar2 := 'N'
 15                         )
 16          return dbms_tf.describe_t
 17          is
 18          begin
 19              for i in 1 .. p_tbl.column.count loop
 20                continue when not dbms_tf.supported_type(p_tbl.column(i).description.type);
 21                p_tbl.column(i).for_read := true;
 22              end loop;
 23              if p_show_csv_col = 'Y'
 24                then
 25                  return dbms_tf.describe_t(
 26                                            new_columns => dbms_tf.columns_new_t(
 27                                                                                 1 => dbms_tf.column_metadata_t(
 28                                                                                                                name => 'csv_row',
 29                                                                                                                type => dbms_tf.type_varchar2
 30                                                                                                               )
 31                                                                                )
 32                                           );
 33                else
 34                  return null;
 35              end if;
 36        end;
 37        procedure open(
 38                       p_create_file  in varchar2 := 'Y',
 39                       p_dir_name     in varchar2 := 'TEMP',
 40                       p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
 41                       p_append_flag  in varchar2 := 'N'
 42                      )
 43          is
 44          begin
 45              if p_create_file = 'Y'
 46                then
 47                  if p_dir_name is null
 48                    then
 49                      raise_application_error(
 50                                              -20500,
 51                                              'Directory name must not be null.'
 52                                             );
 53                  end if;
 54                  if p_file_name is null
 55                    then
 56                      raise_application_error(
 57                                              -20501,
 58                                              'File name must not be null.'
 59                                             );
 60                  end if;
 61                  g_csv_file := utl_file.fopen(
 62                                               p_dir_name,
 63                                               p_file_name,
 64                                               case lower(p_append_flag) when 'y' then 'ab' else 'wb' end,
 65                                               max_linesize => 32767
 66                                              );
 67              end if;
 68        end;
 69        procedure fetch_rows(
 70                             p_dir_name     in varchar2 := 'TEMP',
 71                             p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
 72                             p_create_file  in varchar2 := 'Y',
 73                             p_delimiter    in varchar2 := ',',
 74                             p_quote        in varchar2 := '"',
 75                             p_escape       in varchar2 := '\',
 76                             p_show_csv_col in varchar2 := 'Y',
 77                             p_append_flag  in varchar2 := 'N'
 78                            )
 79          is
 80              v_row_set    dbms_tf.row_set_t;
 81              v_new_col    dbms_tf.tab_varchar2_t;
 82              v_row_count  pls_integer;
 83              v_col_count  pls_integer;
 84              v_csv_row    clob;
 85              v_chunk_size pls_integer := 3000;
 86          begin
 87              dbms_tf.get_row_set(v_row_set,v_row_count,v_col_count);
 88              for v_row_num in 1..v_row_count loop
 89                v_csv_row := null;
 90                for v_col_num in 1..v_col_count loop
 91                  case
 92                    when v_row_set(v_col_num).description.type = dbms_tf.type_char
 93                      then
 94                        if p_quote is not null
 95                          then
 96                            v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_char(v_row_num),p_quote,p_escape || p_quote) || p_quote;
 97                          else
 98                            v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_char(v_row_num);
 99                        end if;
100                    when v_row_set(v_col_num).description.type = dbms_tf.type_varchar2
101                      then
102                        if p_quote is not null
103                          then
104                            v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_varchar2(v_row_num),p_quote,p_escape || p_quote) || p_quote;
105                          else
106                            v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_varchar2(v_row_num);
107                        end if;
108                    when v_row_set(v_col_num).description.type = dbms_tf.type_clob
109                      then
110                        if p_quote is not null
111                          then
112                            v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_clob(v_row_num),p_quote,p_escape || p_quote) || p_quote;
113                          else
114                            v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_clob(v_row_num);
115                        end if;
116                    when v_row_set(v_col_num).description.type = dbms_tf.type_binary_double
117                      then
118                        v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_double(v_row_num));
119                    when v_row_set(v_col_num).description.type = dbms_tf.type_binary_float
120                      then
121                        v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_float(v_row_num));
122                    when v_row_set(v_col_num).description.type = dbms_tf.type_number
123                      then
124                        v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_number(v_row_num));
125                    when v_row_set(v_col_num).description.type = dbms_tf.type_date
126                      then
127                        v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_date(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS');
128                    when v_row_set(v_col_num).description.type = dbms_tf.type_timestamp
129                      then
130                        v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_timestamp(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS.FF6');
131                  end case;
132                end loop;
133                v_new_col(v_row_num) := substr(v_csv_row,length(p_delimiter) + 1);
134              end loop;
135              if p_show_csv_col = 'Y'
136                then
137                  dbms_tf.put_col(1,v_new_col);
138              end if;
139              if p_create_file = 'Y'
140                then
141                  for v_row_num in 1..v_row_count loop
142                    for v_i in 1..ceil(length(v_new_col(v_row_num)) / v_chunk_size) loop
143                      utl_file.put_raw(
144                                       g_csv_file,
145                                       utl_raw.cast_to_raw(substr(v_new_col(v_row_num),(v_i - 1) * v_chunk_size + 1,v_chunk_size)));
146                      utl_file.fflush(g_csv_file);
147                    end loop;
148                    utl_file.put_raw(g_csv_file,utl_raw.cast_to_raw(chr(10)));
149                    utl_file.fflush(g_csv_file);
150                  end loop;
151                  utl_file.fflush(g_csv_file);
152              end if;
153        end;
154        procedure close
155          is
156          begin
157                  utl_file.fclose(g_csv_file);
158        end;
159  end polymorphic_csv_pkg;
160  /

Package body created.

SQL> with t as (
  2             select  *
  3               from  polymorphic_csv_pkg.csv_data_and_file(dba_objects,p_file_name => 'dba_objects.txt')
  4            )
  5  select  count(*)
  6    from  t
  7  /

  COUNT(*)
----------
     73665

SQL> host type c:\temp\dba_objects.txt | find /v /c ""
73665

SQL> host dir c:\temp\dba_objects.txt
 Volume in drive C is OSDisk
 Volume Serial Number is 08EC-009C

 Directory of c:\temp

10/09/2024  10:19 AM        14,433,983 dba_objects.txt
               1 File(s)     14,433,983 bytes
               0 Dir(s)  39,604,686,848 bytes free

SQL> -- and now I am appending
SQL> with t as (
  2             select  *
  3               from  polymorphic_csv_pkg.csv_data_and_file(dba_objects,p_file_name => 'dba_objects.txt',p_append_flag => 'Y')
  4            )
  5  select  count(*)
  6    from  t
  7  /

  COUNT(*)
----------
     73665

SQL> host type c:\temp\dba_objects.txt | find /v /c ""
147330

SQL> host dir c:\temp\dba_objects.txt
 Volume in drive C is OSDisk
 Volume Serial Number is 08EC-009C

 Directory of c:\temp

10/09/2024  10:23 AM        28,867,966 dba_objects.txt
               1 File(s)     28,867,966 bytes
               0 Dir(s)  39,584,460,800 bytes free

SQL>
SY.
Re: SQL Macro using UTL_FILE [message #690111 is a reply to message #690110] Wed, 09 October 2024 10:09 Go to previous messageGo to next message
Solomon Yakobson
Messages: 3303
Registered: January 2010
Location: Connecticut, USA
Senior Member
Ah, I missed that we need to close the file only if p_create_file = 'Y':

create or replace
  package polymorphic_csv_pkg
    is
      function csv_data_and_file(
                                 p_tbl          in table,
                                 p_dir_name     in varchar2 := 'TEMP',
                                 p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                                 p_create_file  in varchar2 := 'Y',
                                 p_delimiter    in varchar2 := ',',
                                 p_quote        in varchar2 := '"',
                                 p_escape       in varchar2 := '\',
                                 p_show_csv_col in varchar2 := 'Y',
                                 p_append_flag  in varchar2 := 'N'
                                )
        return table pipelined
        row polymorphic
        using polymorphic_csv_pkg;
      function describe(
                        p_tbl          in out dbms_tf.table_t,
                        p_dir_name     in     varchar2 := 'TEMP',
                        p_file_name    in     varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                        p_create_file  in     varchar2 := 'Y',
                        p_delimiter    in     varchar2 := ',',
                        p_quote        in     varchar2 := '"',
                        p_escape       in     varchar2 := '\',
                        p_show_csv_col in     varchar2 := 'Y',
                        p_append_flag  in     varchar2 := 'N'
                       ) 
        return dbms_tf.describe_t;
      procedure open(
                     p_create_file  in varchar2 := 'Y',
                     p_dir_name     in varchar2 := 'TEMP',
                     p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                     p_append_flag  in varchar2 := 'N'
                    );
      procedure fetch_rows(
                           p_dir_name     in varchar2 := 'TEMP',
                           p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                           p_create_file  in varchar2 := 'Y',
                           p_delimiter    in varchar2 := ',',
                           p_quote        in varchar2 := '"',
                           p_escape       in varchar2 := '\',
                           p_show_csv_col in varchar2 := 'Y',
                           p_append_flag  in varchar2 := 'N'
                          );
      procedure close(
                      p_create_file  in varchar2 := 'Y'
                     );
end polymorphic_csv_pkg;
/
create or replace
  package body polymorphic_csv_pkg
    is
      g_csv_file   utl_file.file_type;
      function describe(
                        p_tbl          in out dbms_tf.table_t,
                        p_dir_name     in     varchar2 := 'TEMP',
                        p_file_name    in     varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                        p_create_file  in     varchar2 := 'Y',
                        p_delimiter    in     varchar2 := ',',
                        p_quote        in     varchar2 := '"',
                        p_escape       in     varchar2 := '\',
                        p_show_csv_col in     varchar2 := 'Y',
                        p_append_flag  in     varchar2 := 'N'
                       )
        return dbms_tf.describe_t
        is
        begin
            for i in 1 .. p_tbl.column.count loop
              continue when not dbms_tf.supported_type(p_tbl.column(i).description.type);
              p_tbl.column(i).for_read := true;
            end loop;
            if p_show_csv_col = 'Y'
              then
                return dbms_tf.describe_t(
                                          new_columns => dbms_tf.columns_new_t(
                                                                               1 => dbms_tf.column_metadata_t(
                                                                                                              name => 'csv_row',
                                                                                                              type => dbms_tf.type_varchar2
                                                                                                             )
                                                                              )
                                         );
              else
                return null;
            end if;
      end;
      procedure open(
                     p_create_file  in varchar2 := 'Y',
                     p_dir_name     in varchar2 := 'TEMP',
                     p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                     p_append_flag  in varchar2 := 'N'
                    )
        is
        begin
            if p_create_file = 'Y'
              then
                if p_dir_name is null
                  then
                    raise_application_error(
                                            -20500,
                                            'Directory name must not be null.'
                                           );
                end if;
                if p_file_name is null
                  then
                    raise_application_error(
                                            -20501,
                                            'File name must not be null.'
                                           );
                end if;
                g_csv_file := utl_file.fopen(
                                             p_dir_name,
                                             p_file_name,
                                             case lower(p_append_flag) when 'y' then 'ab' else 'wb' end,
                                             max_linesize => 32767
                                            );
            end if;
      end;
      procedure fetch_rows(
                           p_dir_name     in varchar2 := 'TEMP',
                           p_file_name    in varchar2 := 'csv_file_' || to_char(sysdate,'yyyymmddhh24miss') || '.txt',
                           p_create_file  in varchar2 := 'Y',
                           p_delimiter    in varchar2 := ',',
                           p_quote        in varchar2 := '"',
                           p_escape       in varchar2 := '\',
                           p_show_csv_col in varchar2 := 'Y',
                           p_append_flag  in varchar2 := 'N'
                          )
        is
            v_row_set    dbms_tf.row_set_t;
            v_new_col    dbms_tf.tab_varchar2_t;
            v_row_count  pls_integer;
            v_col_count  pls_integer;
            v_csv_row    clob;
            v_chunk_size pls_integer := 3000;
        begin
            dbms_tf.get_row_set(v_row_set,v_row_count,v_col_count);
            for v_row_num in 1..v_row_count loop
              v_csv_row := null;
              for v_col_num in 1..v_col_count loop
                case
                  when v_row_set(v_col_num).description.type = dbms_tf.type_char
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_char(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_char(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_varchar2
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_varchar2(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_varchar2(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_clob
                    then
                      if p_quote is not null
                        then
                          v_csv_row := v_csv_row || p_delimiter || p_quote || replace(v_row_set(v_col_num).tab_clob(v_row_num),p_quote,p_escape || p_quote) || p_quote;
                        else
                          v_csv_row := v_csv_row || p_delimiter || v_row_set(v_col_num).tab_clob(v_row_num);
                      end if;
                  when v_row_set(v_col_num).description.type = dbms_tf.type_binary_double
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_double(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_binary_float
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_binary_float(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_number
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_number(v_row_num));
                  when v_row_set(v_col_num).description.type = dbms_tf.type_date
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_date(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS');
                  when v_row_set(v_col_num).description.type = dbms_tf.type_timestamp
                    then
                      v_csv_row := v_csv_row || p_delimiter || to_char(v_row_set(v_col_num).tab_timestamp(v_row_num),'YYYY-MM-DD"T"HH24:MI:SS.FF6');
                end case;
              end loop;
              v_new_col(v_row_num) := substr(v_csv_row,length(p_delimiter) + 1);
            end loop;
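            -- publish the assembled strings as the csv_row column declared in DESCRIBE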
            if p_show_csv_col = 'Y'
              then
                dbms_tf.put_col(1,v_new_col);
            end if;
            if p_create_file = 'Y'
              then
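                -- write each CSV line in 3000-character chunks; put_raw avoids
                -- character-set conversion on the way out, and fflush pushes each
                -- chunk to disk so the file can be seen growing while the query runs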
                for v_row_num in 1..v_row_count loop
                  for v_i in 1..ceil(length(v_new_col(v_row_num)) / v_chunk_size) loop
                    utl_file.put_raw(
                                     g_csv_file,
                                     utl_raw.cast_to_raw(substr(v_new_col(v_row_num),(v_i - 1) * v_chunk_size + 1,v_chunk_size)));
                    utl_file.fflush(g_csv_file);
                  end loop;
                  utl_file.put_raw(g_csv_file,utl_raw.cast_to_raw(chr(10)));
                  utl_file.fflush(g_csv_file);
                end loop;
                utl_file.fflush(g_csv_file);
            end if;
      end;
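      -- CLOSE runs once at the end of execution and releases the file handle.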
      procedure close(
                      p_create_file  in varchar2 := 'Y'
                     )
        is
        begin
            dbms_output.put_line('Closing');
            if p_create_file = 'Y'
              then
                utl_file.fclose(g_csv_file);
            end if;
      end;
end polymorphic_csv_pkg;
/

SY.
Re: SQL Macro using UTL_FILE [message #690112 is a reply to message #690111] Wed, 09 October 2024 10:52
Duane
Messages: 581
Registered: December 2002
Senior Member
Ah, I see the issue.  Since I'm using Toad, I just press "F9" (Execute/Compile Statement) to run the query, and that indeed writes only 1,024 rows to the file.  If I press "F5" (Execute Script) instead, all of the rows are written to the file.  Very interesting.

I'll see how this works within an actual procedure.
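
Roughly what I have in mind is something like this (just a sketch - the wrapper name csv_data_file_ptf and the parameter names below are placeholders for however the polymorphic table function is actually declared):

create or replace procedure write_cities_csv
as
begin
  -- a cursor FOR loop forces every row to be fetched on the server side,
  -- so the whole result set is written to the file regardless of the client tool
  for r in (select *
              from csv_data_file_ptf(
                     cities,
                     p_dir_name  => 'DIR1',
                     p_file_name => 'cities.csv'
                   ))
  loop
    null;
  end loop;
end;
/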

Thanks again.
Re: SQL Macro using UTL_FILE [message #690114 is a reply to message #690112] Wed, 09 October 2024 16:37
Solomon Yakobson
Messages: 3303
Registered: January 2010
Location: Connecticut, USA
Senior Member
Toad's F9 fetches just the first batch of rows (500 rows in my Toad), and after that you control the fetching with the blue arrows. For example, when I check the file size after hitting F9 I see 475 KB. When I then click the "last row" arrow (the >| blue triangle with the pipe sign), Toad fetches the rest of the rows and I can see the file size growing. When you use F5, Toad runs it as a script and all rows are fetched in one shot.

SY.

[Updated on: Wed, 09 October 2024 16:38]

Re: SQL Macro using UTL_FILE [message #690115 is a reply to message #690114] Wed, 09 October 2024 20:34
Duane
Messages: 581
Registered: December 2002
Senior Member
You are correct.  500 rows for Toad.

Thanks again.