DBA Blogs
I'm using a UDT (a collection type) that holds a set of rows I query as a table, and I want to use that UDT in a MERGE statement. I reference the UDT in the USING clause as follows: "... USING (SELECT * FROM TABLE(UDT)) ...". How, then, can I use that UDT inside the INSERT clause, putting only a single variable in the VALUES clause? For example:
MERGE INTO TABLE1 T1
USING (SELECT * FROM TABLE(UDT)) T2
ON (T1.X = T2.Y)
WHEN NOT MATCHED THEN
INSERT VALUES UDT; -- <-------- here I want to use just the UDT and nothing more. How can I write something like this?
Table1 (T1) and the UDT (T2) have the same structure.
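For illustration, here is a minimal sketch of the pattern in question (the object and collection types are hypothetical; note that, as far as I can tell, the VALUES clause has to list the source columns rather than accept the collection variable itself):
<code>
CREATE TYPE t_row AS OBJECT (x NUMBER, val VARCHAR2(30));
/
CREATE TYPE t_tab IS TABLE OF t_row;
/
-- inside PL/SQL, with l_data declared as t_tab:
MERGE INTO table1 t1
USING (SELECT * FROM TABLE(l_data)) t2
ON (t1.x = t2.x)
WHEN NOT MATCHED THEN
  INSERT (x, val) VALUES (t2.x, t2.val);
</code>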
Hi Tom,
first of all, thanks for all your support. I wish you, your team, and your families a blessed Christmas time.
I'm looking for the best practice for a cursor that fetches just one (or a few) row(s) in my own sort order and locks them, without locking the whole result set or all rows matching the condition.
<code>
select *
from MyTable
where < MyCondition >
order By < MyOrderby >
fetch first 1 row only
for update skip locked;
</code>
Unfortunately the combination of fetch-first-only and FOR UPDATE causes an ORA-02014 :(
- without fetch-first-only it locks the whole result set
- a ROWNUM condition instead of fetch-first-only does not respect my ORDER BY
- an inner subselect with fetch-first-only does not skip locked rows, and finally fetches zero rows if the inner SQL returns only already-locked rows
<code>
select *
from MyTable
where Rowid in (select Rowid
from MyTable
where < MyCondition >
order By < MyOrderby >
fetch first 1 row only)
for update skip locked;
</code>
What's your best practice or advice?
Thanks, Norman
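One pattern sometimes suggested for this (a sketch only, with hypothetical condition and ordering columns): iterate over the candidates in the desired order and try to lock each row individually, so SKIP LOCKED applies row by row and the first lockable row wins:
<code>
DECLARE
  l_row MyTable%ROWTYPE;
BEGIN
  FOR r IN (SELECT rowid AS rid
              FROM MyTable
             WHERE status = 'READY'         -- your condition
             ORDER BY priority, created_at) -- your ordering
  LOOP
    BEGIN
      SELECT *
        INTO l_row
        FROM MyTable
       WHERE rowid = r.rid
         FOR UPDATE SKIP LOCKED;
      EXIT;  -- got a lock on one row: stop scanning
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        NULL;  -- row is locked by another session: try the next candidate
    END;
  END LOOP;
  -- process l_row here, then COMMIT to release the lock
END;
/
</code>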
We have a production DB used for reporting that we are attempting to clone with NetApp SnapMirror technology. The current clone is done the old way: placing the tablespaces of the PROD DB in backup mode, then copying the files with the OS 'cp' tool to the designated clone DB directories and running the necessary DB-layer clone steps. This works fine, but it now takes extra long to complete and impacts business functions.
With NetApp SnapMirror technology all steps are working up to the DB-layer clone steps. There is an issue with the very last step of recovering the DB after the SnapMirror has been performed. The step used is 'alter database recover automatic using backup controlfile until cancel', as that seems to be the only option available according to Oracle Doc ID: DATAFILE NEEDS MORE RECOVERY TO BE CONSISTENT ORA-1194 ORA-1547 ORA-1110 (Doc ID 1528788.1) - Scenario 2: Backup Controlfile is Used for recovery.
Doing this recovery interactively via SQL*Plus requires entering the redo log file path when prompted; the DB then recovers successfully. See below for the output.
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL ;
ORA-00279: change 75786588505 generated at 12/06/2024 13:41:35 needed for
thread 1
ORA-00289: suggestion : /arch_poct/archive/OFPOC/arch_1186748276_1_18.arc
ORA-00280: change 75786588505 for thread 1 is in sequence #18
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
'/redo_poct/oradata/OFPOC/redo06.log'
Log applied.
Media recovery complete.
SQL> alter database open resetlogs ;
Database altered.
But this clone process has to be done on a daily basis and must be automated. When scripted, there seems to be no way for SQL*Plus to read a file containing '/redo_poct/oradata/OFPOC/redo06.log' on its standard input.
Below is the script
#! /usr/bin/ksh
sqlplus "/ as sysdba" < /projects/of/poc/spt/log/setup_sql_redo_20241217.log <<EOF
set echo on
alter database recover automatic using backup controlfile until cancel;
EOF
Where
cat /projects/of/poc/spt/log/setup_sql_redo_20241217.log
'/redo_poct/oradata/OFPOC/redo06.log'
Output
/projects/gen/ksh/working> ./TestRecovery.sh
SQL*Plus: Release 11.2.0.2.0 Production on Tue Dec 17 17:29:00 2024
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> SQL> alter database recover automatic using backup controlfile until cancel
*
ERROR at line 1:
ORA-00279: change 75787505640 generated at 12/16/2024 11:21:36 needed for
thread 1
ORA-00289: suggestion : /arch_poct/archive/OFPOC/arch_1186748276_1_157.arc
ORA-00280: change 75787505640 for thread 1 is in sequence #157
ORA-00278: log file '/arch_poct/archive/OFPOC/arch_1186748276_1_157.arc' no
longer needed for this recovery
ORA-00308: cannot open archived log
'/arch_poct/archive/OFPOC/arch_1186748276_1_15...
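If the path of the current online redo log is known up front (as it appears to be here), one scriptable variant may be to apply that log explicitly with ALTER DATABASE RECOVER LOGFILE instead of answering the interactive prompt. A sketch only, to be verified in your environment:
<code>
-- start recovery; this may stop with ORA-00279/ORA-00308 once the
-- archived logs are exhausted
alter database recover automatic using backup controlfile until cancel;
-- continue the interrupted recovery by naming the online redo log,
-- with no interactive prompt involved
alter database recover logfile '/redo_poct/oradata/OFPOC/redo06.log';
alter database open resetlogs;
</code>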
I use external tables every day to load CSV files, but I'm having great difficulty loading my first FIXED WIDTH file. I've tried several different access-parameter variations; the CREATE TABLE command compiles without error, but when I select from the table I get the following error. In my several iterations I may get a different "KUP-01005: syntax error", but I can't seem to get past this:
Error Message:
<code>select * from rzedelq;</code>
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "badfile": expecting one of: "all, column, convert_error, csv, date_format, enclosed, ignore_chars_after_eor, (, ltrim, lrtrim, ldrtrim, missing, notrim, nullif, number_format, numeric_characters, optionally, rtrim, remove_quotes, reject, terminated, truncate_columns"
KUP-01007: at line 2 column 13
29913. 00000 - "error in executing %s callout"
*Cause: The execution of the specified callout caused an error.
*Action: Examine the error messages take appropriate action.
--
<code>
CREATE TABLE rzedelq
(
rzedelq_type VARCHAR2(1),
rzedelq_dob DATE,
rzedelq_last_name VARCHAR2(30),
rzedelq_first_name VARCHAR2(30),
rzedelq_balance NUMBER(6)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY ext_data
ACCESS PARAMETERS
(
RECORDS FIXED 76 FIELDS
BADFILE 'rzedelq.bad'
LOGFILE 'rzedelq.log'
MISSING FIELD VALUES ARE NULL
(
rzedelq_record_type (01:01) CHAR(1),
rzedelq_dob (02:09) CHAR(8) DATE 'yyyymmdd' NULLIF rzedelq_dob='00000000',
rzedelq_last_name (10:39) CHAR(30),
rzedelq_first_name (40:69) CHAR(30),
rzedelq_balance (70:75) NUMBER(6)
)
)
LOCATION ('rzedelq.dat')
)
REJECT LIMIT UNLIMITED
;
</code>
Sample data file:
<code>
--'rzedelq.dat'--
119811218ANDERSEN AIMEE 366910
219121006COWGER AMANDA 030900
119030707GESLIN ANDREA 150910
319041125HATFIELD CARRIE 055900
119150913MERRELL CONNIE 018920
419761024MESSINGER JASON 010960
119170708PRIMROSE JOHN 030920
519980721REEVES KAILYNN 018930
119690511SAFARIK ROBERT 021980
</code>
--
Any ideas? Any help?
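For what it's worth, the KUP-01005 text lists field-list keywords, which suggests the parser treats everything after FIELDS as the field list, so BADFILE/LOGFILE would have to come before it. A sketch of the DDL rearranged that way, with the first field renamed to match the column and the balance read as CHAR; unverified, and the DATE_FORMAT spelling is my assumption:
<code>
CREATE TABLE rzedelq
(
   rzedelq_type        VARCHAR2(1),
   rzedelq_dob         DATE,
   rzedelq_last_name   VARCHAR2(30),
   rzedelq_first_name  VARCHAR2(30),
   rzedelq_balance     NUMBER(6)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY ext_data
   ACCESS PARAMETERS
   (
      RECORDS FIXED 76
      BADFILE 'rzedelq.bad'
      LOGFILE 'rzedelq.log'
      FIELDS MISSING FIELD VALUES ARE NULL
      (
         rzedelq_type        (01:01) CHAR(1),
         rzedelq_dob         (02:09) CHAR(8)
                             DATE_FORMAT DATE MASK "yyyymmdd"
                             NULLIF rzedelq_dob = '00000000',
         rzedelq_last_name   (10:39) CHAR(30),
         rzedelq_first_name  (40:69) CHAR(30),
         rzedelq_balance     (70:75) CHAR(6)
      )
   )
   LOCATION ('rzedelq.dat')
)
REJECT LIMIT UNLIMITED;
</code>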
I am trying to create a SQL script that creates a user; if a password is supplied as a parameter it should be used, and if no parameter is passed the script should prompt for the password.
This is what I tried, but I am obviously a bit off, since it asks for the value of &1 immediately if it is not passed.
If I do pass it a password, it prints errors, then prompts for the password and creates the user with that password.
Create_TEST_User.sql
<code>define Password = '&1'
SET VERIFY OFF;
if &Password is NULL
THEN
ACCEPT Password PROMPT 'Enter value for Password: ' HIDE
end if;
CREATE USER "TEST" IDENTIFIED BY "&Password"
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";
GRANT "CONNECT" TO "TEST"; </code>
@Create_TEST_User.sql testpassword-1234
Error starting at line : 3 File @ /opt/oracle/sql/Create_TEST_User.sql
In command -
if &Password is NULL
Error report -
Unknown Command
Error starting at line : 4 File @ /opt/oracle/sql/Create_TEST_User.sql
In command -
THEN
Error report -
Unknown Command
Enter value for Password: ********
Error starting at line : 6 File @ /opt/oracle/sql/Create_TEST_User.sql
In command -
end if
Error report -
Unknown Command
User "TEST" created.
Have you ever tried to retrieve a value with SYS_CONTEXT from a user-defined context that is accessed globally? Chances are that you will only retrieve NULLs. After some testing, following a series of inexplicable results, a bell rang in my head pointing to the few required values that a procedure launched from an APEX application needed in order to process its data. It turns out that APEX sets the CLIENT_IDENTIFIER value in USERENV, that is, the value you can retrieve using SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER'). If you have a context defined as ACCESSED GLOBALLY, there are some rules explained in the documentation about what you get under certain conditions (link for Oracle 19c). At the end of the discussion, the documentation says that if you are getting NULLs, you might have a problem with the CLIENT_IDENTIFIER, and you need to clear it using: DBMS_SESSION.CLEAR_IDENTIFIER;
It turned out that after clearing the CLIENT_IDENTIFIER, the missing values finally started to appear because, indeed, APEX sets the CLIENT_IDENTIFIER to a value composed like this: <APP_USER>:<APP_SESSION>.
Now, in my case it was safe and sound to clear the CLIENT_IDENTIFIER, because my process was running in the background and the associated Oracle session is closed when the job terminates. But if you need to retrieve these values as part of an interactive process run inside an APEX page, I'd say you would be better off creating a wrapper function for SYS_CONTEXT (see the sketch after this list) where you:
- save the current CLIENT_IDENTIFIER in a variable
- clear it using DBMS_SESSION.CLEAR_IDENTIFIER
- assign the desired value retrieved from the global context to another variable
- restore the original CLIENT_IDENTIFIER using DBMS_SESSION.SET_IDENTIFIER
- return the desired value.
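A minimal sketch of such a wrapper (untested, as noted below; the function name is mine):
<code>
CREATE OR REPLACE FUNCTION get_global_ctx (
   p_namespace IN VARCHAR2,
   p_attribute IN VARCHAR2
) RETURN VARCHAR2
IS
   l_client_id VARCHAR2(64);
   l_value     VARCHAR2(4000);
BEGIN
   -- save the current CLIENT_IDENTIFIER
   l_client_id := SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER');
   -- clear it so the globally accessed context becomes visible
   DBMS_SESSION.CLEAR_IDENTIFIER;
   -- read the desired value from the global context
   l_value := SYS_CONTEXT(p_namespace, p_attribute);
   -- restore the original CLIENT_IDENTIFIER (APEX relies on it)
   IF l_client_id IS NOT NULL THEN
      DBMS_SESSION.SET_IDENTIFIER(l_client_id);
   END IF;
   RETURN l_value;
END get_global_ctx;
/
</code>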
I haven't tested this suggestion yet, as I am busy with other urgent stuff, so caveat emptor!
I had been logging in to Oracle Cloud using multi-factor authentication, 2FA with Oracle Mobile Authenticator, and it was going fine until I had to change my phone. Both of my phones are Android, and I THOUGHT that I would simply migrate the apps and keep using the accounts in my Oracle Mobile Authenticator the same way, but it seems that after the migration I lost all the accounts. Multi-Factor Authentication (MFA) is a security process that requires a user to provide two or more authentication factors to access a system, network, or application. Two-Factor Authentication (2FA) is a type of MFA that requires a user to provide two authentication factors:
- Something you know (password, PIN)
- Something you have (smartphone, token, or a one-time password sent via SMS or authenticator app)
So I was using 2FA with this Oracle Mobile Authenticator. I tried my older codes, QR codes, the password, PIN and so on, but nothing worked. No matter what I tried, I simply couldn't log in to Oracle Cloud, since the page asked me for a code generated by the authenticator. Eventually, the only way I could find to resolve the issue was to talk to Oracle live chat, and they found an engineer to send me a bypass code. If you don't know what the Oracle Mobile Authenticator app is, then as per the docs: Oracle Mobile Authenticator enables you to securely verify your identity by using your mobile device as an authentication factor. The app generates one-time passwords for login, or it can receive notifications for login, which can be approved with a simple tap. When this authentication is used on top of username-password, it adds an additional layer of security that is essential for today's online applications. Features:
- Generate one-time passwords even when the device is offline
- Push Notification based approval
- App PIN for app protection
- Set up via QR code, Config URL, or by entering key manually
- Multiple account support
- Generate OTP for other applications that make use of One-Time Password as per RFC 6238
I hope this helps.
Hello,
I've been struggling with "strange" behavior when handling "extreme" timestamp values.
Our .NET applications execute stored procedures with a timestamp parameter and in some cases pass "31/12/9999 23:59:59.9999999", which is the value of DateTime.MaxValue.
After upgrading to 19.25 we get "ORA-01841: (full) year must be between -4713 and +9999, and not be 0" for this value.
Let's look at my examples.
The following statement returns the same value on Oracle 19.25 and on versions below 19.25:
<code>select to_timestamp('12/31/9999 23:59:59.9999999','MM/DD/YYYY HH24:MI:SS.FF') from dual;</code>
31-DEC-99 11.59.59.999999900 PM
Let's create a simple procedure
<code>
CREATE OR REPLACE PROCEDURE testmh( p_data in timestamp )
IS
vdata timestamp(9);
BEGIN
dbms_output.put_line('a');
END;
</code>
and execute it like this:
<code>execute testmh(to_timestamp('12/31/9999 23:59:59.9999999','MM/DD/YYYY HH24:MI:SS.FF'));</code>
on 19.25 I get
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
on < 19.25
PL/SQL procedure successfully completed.
Another test - create a function
<code>
CREATE OR REPLACE FUNCTION testmhf (
p_data TIMESTAMP )
RETURN TIMESTAMP
IS
BEGIN
RETURN p_data;
END testmhf;
</code>
and execute
<code>select testmhf(to_timestamp('12/31/9999 23:59:59.9999999','MM/DD/YYYY HH24:MI:SS.FF')) from dual;</code>
for <19.25 the result is
01-JAN-00 12.00.00.000000000 AM
for 19.25 the result is
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
A workaround for this problem is to create a subtype and use it as the type of the procedure parameter:
<code>
CREATE OR REPLACE PACKAGE XY
AS
SUBTYPE timestamp_t IS TIMESTAMP(7);
END;
/
CREATE OR REPLACE PROCEDURE testmh_t( p_data in XY.timestamp_t )
IS
BEGIN
insert into zone(zone) values(p_data);
END;
/
execute testmh_t(to_timestamp('12/31/9999 23:59:59.9999999','MM/DD/YYYY HH24:MI:SS.FF'));
</code>
Do you know any other workaround, or why passing this "extreme" value causes it to be rounded up?
M.
Oracle Digital Assistant (ODA) provides a comprehensive platform for creating conversational interfaces. This article will guide you through integrating ChatGPT with ODA using the bots-node-sdk and openai libraries.
Prerequisites:
- Oracle Digital Assistant instance
- ChatGPT API key
- Node.js environment
Configuration: Create a new file named services.js and add the following code:
<code>
const OracleBot = require('@oracle/bots-node-sdk');
const { WebhookClient, WebhookEvent } = OracleBot.Middleware;
const express = require('express');
const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: 'YOUR_CHATGPT_API_KEY',
});
const openai = new OpenAIApi(configuration);

// Ask the OpenAI completion API for a reply to the given prompt
const textGeneration = async (prompt) => {
  try {
    const response = await openai.createCompletion({
      model: 'text-davinci-003',
      prompt: `Human: ${prompt}\nAI: `,
      temperature: 0.9,
      max_tokens: 500,
      top_p: 1,
      frequency_penalty: 0,
      presence_penalty: 0.6,
      stop: ['Human:', 'AI:'],
    });
    return { status: 1, response: `${response.data.choices[0].text}` };
  } catch (error) {
    return { status: 0, response: '' };
  }
};

module.exports = (app) => {
  const logger = console;

  // Initialize Oracle Digital Assistant
  OracleBot.init(app, { logger });

  // Set up webhook integration
  const webhook = new WebhookClient({
    channel: {
      url: 'YOUR_ODA_WEBHOOK_URL',
      secret: 'YOUR_ODA_WEBHOOK_SECRET',
    },
  });

  // Log messages the assistant pushes to this webhook
  webhook.on(WebhookEvent.MESSAGE_RECEIVED, (message) => {
    logger.info('Message received from ODA:', message);
  });

  // Endpoint for incoming messages: unknown intents are answered by
  // ChatGPT, everything else is forwarded to the assistant
  app.post('/bot/message', (req, res) => {
    const message = req.body;
    const action = message.queryResult && message.queryResult.action;
    const queryText = message.queryResult && message.queryResult.queryText;

    if (action === 'input.unknown') {
      textGeneration(queryText).then((result) => {
        if (result.status === 1) {
          res.send({
            fulfillmentMessages: [{ text: { text: [result.response] } }],
          });
        } else {
          res.send({
            fulfillmentMessages: [
              { text: { text: ["Sorry, I'm not able to help with that."] } },
            ],
          });
        }
      });
    } else {
      webhook.send(message).then(() => res.send('ok'));
    }
  });
};
</code>
- Replace YOUR_CHATGPT_API_KEY with your actual ChatGPT API key.
- Replace YOUR_ODA_WEBHOOK_URL and YOUR_ODA_WEBHOOK_SECRET with your actual Oracle Digital Assistant webhook URL and secret.
Dialog Flow Integration
To integrate the ChatGPT service with your Oracle Digital Assistant dialog flow, follow these steps:
Create a new intent in your dialog flow with the action input.unknown.
Add a new fulfillment to the intent with the following settings:
Fulfillment type: Webhook
Webhook URL: YOUR_APP_URL/bot/message (replace with your actual app URL)
HTTP method: POST
Save and deploy your dialog flow.
Testing
Test your integration by sending a message to your Oracle Digital Assistant instance. The message should be routed to the ChatGPT service, which will generate a response. The response will then be sent back to the user.
Note: Make sure to replace the placeholder values with your actual credentials and URLs.
When it comes to limiting data modifications in Oracle, two options are available: Read-Only Instances and Read-Only Databases. While both options restrict data changes, they serve different purposes and are used in distinct contexts.
Read-Only Instances: A Read-Only Instance is a configuration in Oracle Real Application Clusters (RAC) where one or more instances are set to operate in read-only mode. This setup is ideal for environments with high concurrency for both read and write operations. Key features of Read-Only Instances include:
- Real-time query scaling by dedicating specific instances to read-only operations
- Write operations are not allowed on designated read-only instances, but other instances can still handle writes
- Useful for load balancing in RAC configurations
- Read-Only Instances are suitable for offloading read-heavy workloads in RAC environments and supporting real-time analytics without impacting primary write performance.
Read-Only Databases: A Read-Only Database, on the other hand, is a database-wide mode that restricts all write operations. This setup is typically used for archiving, reporting, or maintenance tasks. Key features of Read-Only Databases include:
- The entire database is locked for write operations
- Used for archiving, reporting, or maintenance tasks
- Can be achieved using the ALTER DATABASE OPEN READ ONLY command or a Data Guard physical standby database
- Read-Only Databases are ideal for archiving purposes, maintenance periods, or using a standby database for reporting.
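For reference, a minimal sketch of taking a database read-only with the command mentioned above, and then back:
<code>
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN READ ONLY;
-- ... reporting / archival queries run here ...
SHUTDOWN IMMEDIATE
STARTUP
</code>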
Choosing the Right Option: When deciding between Read-Only Instances and Read-Only Databases, consider the following:
- If you have a RAC environment and need to offload read-heavy workloads, Read-Only Instances might be the better choice.
- If you need to restrict write operations across the entire database, a Read-Only Database is the way to go.
Ultimately, understanding the differences between Read-Only Instances and Read-Only Databases will help you make informed decisions about managing your Oracle database. Hope this helps.
Oracle Database 23ai Free Edition offers a fully functional database for development, testing, and production purposes, allowing users to experience the powerful features of Oracle Database. However, users may encounter errors during the installation process, which can be frustrating and time-consuming to resolve. This article addresses a common issue that users may encounter during the installation of Oracle Database 23ai Free Edition and provides a solution to ensure a successful installation. During a silent installation of Oracle Database 23ai Free Edition on Windows, the process terminates abruptly, and the setup.log file displays the following error message: "SEVERE: The provided destination folder has invalid characters. Verify and try again." The log file continues to grow in size, and the installation process must be manually terminated. This error can occur even when the destination folder path appears to be correct and free of any invalid characters. Troubleshooting: To resolve this issue, ensure that the following conditions are met:
1. Absolute Path for RSP File: Specify an absolute path for the RSP file on the command line, for example:
<code>setup.exe /s /v"RSP_FILE=c:\myinstallpath\FREEInstall.rsp" /v"/L*v setup.log" /v"/qn"</code>
This is necessary because setup.exe does not recognize the RSP file if only the file name is provided. Specifying the absolute path ensures that setup.exe can locate the RSP file correctly.
2. Empty Values in RSP File: Although the RSP file comment suggests that no parameter should be left with an empty value, it is safe to leave the DB_DOMAIN parameter empty if it is not required: DB_DOMAIN is not mandatory, and leaving it empty does not affect the installation process. Here is an example RSP file (FREEInstall.rsp) that can be used for a successful installation:
<code>
#Do not leave any parameter with empty value
#Install Directory location, username can be replaced with current user
INSTALLDIR=C:\app\myname\product\23ai\
#Database password, All users are set with this password, Remove the value once installation is complete
PASSWORD=mypassword
#If listener port is set to 0, available port will be allocated starting from 1521 automatically
LISTENER_PORT=0
#Specify char set of the database
CHAR_SET=AL32UTF8
#Specify the database domain for the db unique name specification
DB_DOMAIN=
#Specify TRUE for performing software only install
SOFTWARE_ONLY=FALSE
#Specify TRUE if installer should modify directory permissions when ACL is incorrect
MODIFY_DIRECTORY_PERMISSIONS=TRUE
</code>
By following the troubleshooting steps and using the example RSP file provided, you should be able to successfully install Oracle Database 23ai Free Edition on your Windows system. Remember to specify the absolute path for the RSP file and leave the DB_DOMAIN parameter empty if it is not required. If you encounter any further issues, refer to the Oracle Database documentation and support resources for assistance.
Hi Tom
Greetings
We have a very highly critical application (zero downtime) running in a 4-node RAC environment, with a standby in the same HA configuration. We are planning to perform index defragmentation with minimal downtime.
We are planning the following steps:
1) Disable apply and open the standby database
2) Perform the index defragmentation in the standby database
3) After defragmentation, put the physical standby back in mount state
Now the standby is defragmented.
1) Switch the application to the standby (switch the standby DB to primary and vice versa)
2) Perform the defragmentation in the old primary (now the HA side)
3) Switch the application back to the old primary
My question is whether the above is possible, or am I missing any steps here?
Also, when switching the HA to primary and the primary to HA, will the defragmentation work at all, since it is block-for-block replication?
Please advise
I noticed two databases running on the same disk got different performance on the same query with an identical plan. I ran AWR and saw a big difference in the IO stats section, i.e. Buffer Cache Reads is at the microsecond level in one but at the millisecond level in the other. What could have caused this difference if they are on the same disk (/u01 in my case)? Thanks.
19c on Linux
filesystemio_options = none
=>One active database
<code>
IOStat by Function summary
'Data' columns suffixed with M,G,T,P are in multiples of 1024 other columns suffixed with K,M,G,T,P are in multiples of 1000
ordered by (Data Read + Write) desc
Function Name       Reads: Data  Reqs per sec  Data per sec  Writes: Data  Reqs per sec  Data per sec  Waits: Count  Avg Time
Direct Reads        2.9T         845.98        832.589M      8M            0.06          .002M         3.1M          65.22ns
Buffer Cache Reads  26.5G        756.04        7.484M        0M            0.00          0M            2.2M          165.58us
Others              519M         7.27          .143M         263M          1.57          .073M         29.4K         75.63us
</code>
=>Another much less active database
<code>IOStat by Function summary
'Data' columns suffixed with M,G,T,P are in multiples of 1024 other columns suffixed with K,M,G,T,P are in multiples of 1000
Function Name       Reads: Data  Reqs per sec  Data per sec  Writes: Data  Reqs per sec  Data per sec  Waits: Count  Avg Time
Direct Reads        49.2G        14.35         14.154M       0M            0.00          0M            51.1K         1.31us
Others              220M         3.45          .062M         53M           0.95          .015M         14K           24.99us
Buffer Cache Reads  214M         1.05          .06M          0M            0.00          0M            3163          2.56ms</code>
Hi Tom,
I'm encountering an issue with my Oracle Database 23ai Free. It throws the ORA-12954 error: "The request exceeds the maximum allowed database size of 12 GB." After investigating the database files, I discovered that the SYSAUX tablespace is the main contributor to the size issue.
<code>
/opt/oracle/oradata/FREE/FREEPDB1/sysaux01.dbf 6.1GB
/opt/oracle/oradata/FREE/FREEPDB1/system01.dbf 0.6GB
/opt/oracle/oradata/FREE/FREEPDB1/undotbs01.dbf 0.3GB
/opt/oracle/oradata/FREE/FREEPDB1/users01.dbf 0.3GB
/opt/oracle/oradata/FREE/pdbseed/sysaux01.dbf 0.3GB
/opt/oracle/oradata/FREE/pdbseed/system01.dbf 0.3GB
/opt/oracle/oradata/FREE/pdbseed/undotbs01.dbf 0.1GB
/opt/oracle/oradata/FREE/sysaux01.dbf 6.6GB
/opt/oracle/oradata/FREE/system01.dbf 1.2GB
/opt/oracle/oradata/FREE/undotbs01.dbf 0.6GB
/opt/oracle/oradata/FREE/users01.dbf 0.0GB
</code>
Upon further examination using
<code>
select * from V$SYSAUX_OCCUPANTS;
</code>
it seems that the majority of the space is being utilized by:
- SM/AWR
- SM/OPTSTAT
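A variant of that query (a sketch) makes the per-occupant usage explicit:
<code>
SELECT occupant_name,
       ROUND(space_usage_kbytes / 1024) AS mb
FROM   v$sysaux_occupants
ORDER  BY space_usage_kbytes DESC;
</code>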
To address this, I attempted to purge statistics using the following PL/SQL block:
<code>
BEGIN
dbms_stats.purge_stats(dbms_stats.purge_all);
END;
</code>
However, I received the ORA-12954 error again during this process.
Do you have any advice on how to resolve this issue and bring the database back within the size limit? Your help would be greatly appreciated!
Best regards,
Christian
I want to get the difference between two dates, say
(10-FEB-2000 - 02-JAN-1999)
and the result has to be displayed in the following format:
1 year, 1 month, 8 days
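A sketch of one common approach: use MONTHS_BETWEEN for the year and month parts, and ADD_MONTHS to compute the leftover days (the sample dates are inlined for illustration):
<code>
SELECT TRUNC(MONTHS_BETWEEN(d1, d2) / 12)                 AS years,
       MOD(TRUNC(MONTHS_BETWEEN(d1, d2)), 12)             AS months,
       d1 - ADD_MONTHS(d2, TRUNC(MONTHS_BETWEEN(d1, d2))) AS days
FROM  (SELECT DATE '2000-02-10' AS d1,
              DATE '1999-01-02' AS d2
       FROM dual);

-- YEARS  MONTHS  DAYS
--     1       1     8
</code>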
Hi Tom,
First of all, thank you for your great work helping so many people with brilliant explanations and examples.
My question is: when I create a table with a primary key and then regenerate the redo data/statements using LogMiner ordered by SCN, I see that Oracle creates the unique index on table 'test1' first and then creates table 'test1'.
Why and how is Oracle doing that? If I replay these statements on some other database in the same sequence ordered by SCN, it fails, as it tries to create a unique index on a table that does not exist.
Any help would be highly appreciated.
The environment is Oracle 11.1.7 on AIX.
Many thanks in advance.
Bobby
I would kindly like to ask your opinion regarding the ORACLE_SID setting of databases in a standby configuration.
We have different settings across our standby databases, and both configurations are working without any complaints.
- ORACLE_SID = db_unique_name (the ORACLE_SID on the primary is different from the ORACLE_SID on the standby database)
- ORACLE_SID = db_name (the ORACLE_SID on the primary is equal to the ORACLE_SID on the standby database)
What would be the advantage or downside of one configuration or the other?
Regards, Dejan
I have three tables: Clients, Invoices and Payments, and I am trying to compute the sum of invoiced amounts and the sum of payments in the same script.
If I run the sum of invoiced amounts alone, everything is good, like below:
select a.id, a.name,
sum(b.quantity*b.unitprice)
from clients a, invoices b
where a.id=b.client_id
group by a.id, a.name
ID  NAME     SUM(B.QUANTITY*B.UNITPRICE)
 1  Client1  325
 2  Client2  150
 3  Client3   30
If I run the sum of payments alone, the result is also correct, like below:
select a.id, a.name,
sum(c.amount)
from clients a, payments c
where a.id=c.client_id
group by a.id, a.name
ID  NAME     SUM(C.AMOUNT)
 1  Client1  200
 2  Client2  125
 3  Client3   30
But if I try to compute both SUMs in a single select, the result is wrong, and I don't know what I'm doing wrong:
select a.id, a.name,
sum(b.quantity*b.unitprice),
sum(c.amount)
from clients a, invoices b, payments c
where a.id=b.client_id and a.id=c.client_id
group by a.id, a.name
ID  NAME     SUM(B.QUANTITY*B.UNITPRICE)  SUM(C.AMOUNT)
 1  Client1  650                          400
 2  Client2  300                          250
 3  Client3   30                           30
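For what it's worth, the doubled totals look like the classic row-multiplication effect of joining two child tables at once: each invoice row of a client pairs with each payment row of the same client before the SUMs run. A sketch of one common fix, aggregating each child table in its own subquery before joining (hypothetical, matching the column names above):
<code>
select a.id, a.name, b.invoiced, c.paid
from clients a,
     (select client_id, sum(quantity * unitprice) as invoiced
        from invoices
       group by client_id) b,
     (select client_id, sum(amount) as paid
        from payments
       group by client_id) c
where a.id = b.client_id
  and a.id = c.client_id;
</code>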
Hello,
We have a batch process that commits after each insert, each involving just a few bytes. Ideally it would commit in batches (after processing 1000 entities or so), but it is a legacy application and modifying the code is not possible. Will reducing the log_buffer size improve performance? Right now it is at 28 MB. It is a large server with 700+ GB RAM and the SGA set to 220 GB. My reasoning: since each commit writes only a few bytes, if flushing the redo involves flushing the entire log buffer, then reducing the log_buffer size should improve performance. But I read somewhere that not the whole log buffer gets flushed, and only minimal data gets written to disk (online redo) on commit. Requesting expert opinion.
Hi Ask Tom team,
first of all, sorry for my lack of DBA knowledge.
I have a table with a BLOB column. The total BLOB length was over 3 GB from the beginning, and of course the tablespace size is more than 3 GB.
Now I have deleted 3 GB of BLOBs and only around 30 MB is left.
But I cannot shrink the tablespace, because the LOB segments now have extents at block ID positions up to 3 GB into the file.
Is there any way around this, to rearrange the segments in the tablespace as they were before being extended?
Many thanks
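A sketch of the kind of LOB reorganization sometimes used for this (table, column, and file names are hypothetical, and each variant has restrictions worth checking first):
<code>
-- shrink the LOB segment in place (subject to segment-type restrictions)
ALTER TABLE my_blob_table MODIFY LOB (my_blob_col) (SHRINK SPACE);

-- or rebuild the LOB segment, re-packing its extents from the start
ALTER TABLE my_blob_table MOVE LOB (my_blob_col) STORE AS (TABLESPACE my_ts);

-- afterwards the datafile can usually be resized down
ALTER DATABASE DATAFILE '/path/to/datafile.dbf' RESIZE 100M;
</code>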