Learn DB Concepts with me...

Learn database management from Arvind Toorpu

Unlocking Oracle 23c AI's JSON Relational Duality: Simplifying Data Handling for Modern Applications

Wed, 2024-11-06 09:15







With the release of Oracle 23c AI, Oracle has introduced a suite of new features designed to empower DBAs and developers. One of the most revolutionary additions is JSON Relational Duality, which offers unprecedented flexibility in handling data by allowing users to access the same data as both relational and JSON, seamlessly switching between formats without duplication or additional processing.

This article explores JSON Relational Duality and demonstrates how it simplifies data management for applications that need to handle both structured and unstructured data. 


What Is JSON Relational Duality?

JSON Relational Duality in Oracle 23c AI allows the same data to be represented as both relational and JSON simultaneously. This feature is particularly valuable for applications where structured relational data needs to be accessed as unstructured JSON data—and vice versa. 

For example, a company with a traditional relational structure may want to integrate with a modern application that uses JSON-based APIs. JSON Relational Duality bridges this gap, enabling applications to leverage both formats without complex transformations or performance compromises.


Why JSON Relational Duality Matters

In the past, handling both relational and JSON data often meant duplicating data or creating intricate ETL processes to convert between formats. JSON Relational Duality eliminates these challenges by allowing a single, consistent view of data that works natively with both JSON and relational formats.


- Reduced Complexity: You avoid complex ETL processes when transforming data formats.

- Enhanced Flexibility: Developers can work with the format best suited to their application needs.

- Improved Performance: Access data in the format you need without extra processing, reducing load on the system.


Getting Started with JSON Relational Duality

To demonstrate JSON Relational Duality, let’s take the example of a DBA who manages employee data in a relational structure but needs to provide a JSON format for API consumption.

Suppose we have a table 'employees' with the following schema:


SQL

CREATE TABLE employees (

    employee_id NUMBER PRIMARY KEY,

    first_name VARCHAR2(50),

    last_name VARCHAR2(50),

    department VARCHAR2(50),

    salary NUMBER

);



Step 1: Enabling JSON Relational Duality


With Oracle 23c AI, duality is enabled by creating a JSON Relational Duality view over the table, so 'employees' can be read (and even updated) as JSON documents without changing the schema. A minimal sketch, with an illustrative view name and the required `_id` field mapped to the primary key:


SQL

CREATE JSON RELATIONAL DUALITY VIEW employees_dv AS
SELECT JSON {'_id'        : e.employee_id,
             'first_name' : e.first_name,
             'last_name'  : e.last_name,
             'department' : e.department,
             'salary'     : e.salary}
FROM employees e WITH INSERT UPDATE DELETE;


Step 2: Querying Data in JSON Format

Now you can retrieve the same data in JSON format using a simple query:

SQL

SELECT JSON_OBJECT(*) 

FROM employees 

WHERE department = 'Sales';


The query above will output each row as a JSON object, making it compatible with any API or application that requires JSON.


Step 3: Using JSON Data with Relational Functions

JSON Relational Duality allows JSON data to be used in SQL operations as if it were relational. This means you can join, filter, and manipulate JSON data using SQL.

For example, let’s filter employees by salary and project it as JSON:

SQL

SELECT JSON_OBJECT(*) 

FROM employees 

WHERE salary > 70000;



Use Case: Integrating Relational Data with a JSON-Based Application


Imagine you are tasked with integrating an Oracle database with a cloud-based application that consumes JSON via REST API. JSON Relational Duality simplifies this process. Instead of transforming data into JSON on the fly or storing duplicates, you can directly query relational data in JSON format and feed it into the API, streamlining integration.
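For instance, an endpoint that expects a single JSON array as its payload can be fed directly from SQL. A sketch using the standard SQL/JSON aggregate functions (column names follow the sample table above):

SQL

SELECT JSON_ARRAYAGG(JSON_OBJECT(*) RETURNING CLOB) AS payload
FROM employees
WHERE department = 'Sales';

This returns the entire result set as one JSON array, ready to hand to a REST client.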


Example API Call with JSON Duality Data

Using JSON output from Oracle, you can now make API calls without extra data transformation:


JSON:

{

    "employee_id": 101,

    "first_name": "John",

    "last_name": "Doe",

    "department": "Sales",

    "salary": 75000

}

 

JSON Relational Duality in Complex Queries

Let’s explore a more complex query that computes department-wise average salaries, outputting the results in JSON format.


SQL

SELECT JSON_OBJECT(

    'department' VALUE department,

    'average_salary' VALUE AVG(salary)

) AS department_summary

FROM employees

GROUP BY department;



This query enables a modern, JSON-based interface to summarize data traditionally stored in relational format, making it accessible to front-end applications that prefer JSON.


Benefits of JSON Relational Duality


For DBAs, developers, and system architects, JSON Relational Duality in Oracle 23c AI reflects the broader shift toward hybrid database management: dual-format data handling and native JSON support inside a relational database. Its key benefits include performance optimization, reduced data duplication, and simplified data management, making it a strong fit for applications with mixed data needs.


Summary: Embracing the Duality Advantage in Oracle 23c AI

Oracle 23c AI’s JSON Relational Duality is a groundbreaking feature, particularly for DBAs who must manage complex data structures efficiently. With JSON Relational Duality, Oracle offers flexibility without sacrificing performance, enabling seamless data integration for applications that demand both structured and unstructured data.

For anyone managing databases in hybrid environments, JSON Relational Duality is a powerful tool that reduces workload, enhances performance, and provides a competitive edge in the data-driven landscape.



Categories: DBA Blogs

Oracle EXPDP commands

Wed, 2024-09-04 14:25

The expdp (Data Pump Export) utility is used to export data and metadata from an Oracle database. It provides several command-line options to customize the export process. Below are some common expdp commands and their explanations:


Basic Syntax:

 


expdp [username/password] DIRECTORY=directory_name DUMPFILE=dumpfile_name LOGFILE=logfile_name [options]


Common Expdp Commands:

    • Export a Full Database:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log

    • FULL=Y: Exports the entire database.

    • Export a Specific Schema:

expdp system/password SCHEMAS=schema_name DIRECTORY=dpump_dir DUMPFILE=schema.dmp LOGFILE=schema.log

    • SCHEMAS=schema_name: Exports a specific schema.

    • Export Specific Tables:

expdp system/password TABLES=table1,table2 DIRECTORY=dpump_dir DUMPFILE=tables.dmp LOGFILE=tables.log

    • TABLES=table1,table2: Exports specific tables.

    • Export a Specific Table with Data and Metadata:

expdp system/password TABLES=table_name DIRECTORY=dpump_dir DUMPFILE=table.dmp LOGFILE=table.log

    • TABLES=table_name: Exports a specific table.

    • Export with Compression:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log COMPRESSION=ALL

    • COMPRESSION=ALL: Compresses all data during export.

    • Export with Data Filtering (e.g., Export Data from a Specific Date):

expdp system/password TABLES=table_name DIRECTORY=dpump_dir DUMPFILE=table.dmp LOGFILE=table.log QUERY=table_name:"WHERE created_date > TO_DATE('2024-01-01', 'YYYY-MM-DD')"

    • QUERY=table_name:"WHERE condition": Filters rows based on a condition.

    • Export Metadata Only:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=metadata.dmp LOGFILE=metadata.log CONTENT=METADATA_ONLY

    • CONTENT=METADATA_ONLY: Exports only metadata (no data).

    • Export Data Only:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=data_only.dmp LOGFILE=data_only.log CONTENT=DATA_ONLY

    • CONTENT=DATA_ONLY: Exports only data (no metadata).

    • Export in Parallel:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db_%U.dmp LOGFILE=full_db.log PARALLEL=4

    • PARALLEL=4: Uses 4 parallel worker processes for a faster export. Use the %U substitution variable in DUMPFILE so each worker can write to its own file.

    • Export with a Job Name:

expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log JOB_NAME=export_full_db

    • JOB_NAME=export_full_db: Assigns a name to the export job.


Additional Parameters:

  • CONTENT: Specifies whether to export metadata only (METADATA_ONLY), data only (DATA_ONLY), or both (ALL).
  • EXCLUDE: Excludes specific objects or object types from the export. Example: EXCLUDE=TABLE:"='table_name'".
  • INCLUDE: Includes specific objects or object types in the export. Example: INCLUDE=TABLE:"IN ('table1', 'table2')".
  • REMAP_SCHEMA: Remaps schema names. Example: REMAP_SCHEMA=old_schema:new_schema.
  • REMAP_TABLESPACE: Remaps tablespace names. Example: REMAP_TABLESPACE=old_tablespace:new_tablespace.
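When QUERY, EXCLUDE, or INCLUDE filters are involved, shell quoting gets fragile, so it is often easier to put everything in a parameter file. A minimal sketch (the file name and values are illustrative):


# export.par
SCHEMAS=hr
DIRECTORY=dpump_dir
DUMPFILE=hr_%U.dmp
LOGFILE=hr_export.log
PARALLEL=2
EXCLUDE=STATISTICS


Then run it with:

expdp system/password PARFILE=export.par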


Directory Object:

Before running expdp, ensure that the DIRECTORY object exists in the database and points to a valid filesystem directory where the dump files will be written.

 

 

CREATE OR REPLACE DIRECTORY dpump_dir AS '/path/to/directory';


Example Execution:

To execute an expdp command, open a command prompt or terminal and run the appropriate expdp command based on your requirements. Ensure you have the necessary privileges and that the Oracle environment variables (ORACLE_HOME and PATH) are set correctly.



Conclusion:

The expdp utility offers powerful options for exporting data and metadata from Oracle databases. By using the appropriate parameters and options, you can tailor the export process to meet specific needs and optimize performance.

Categories: DBA Blogs

Identify and Terminate Sessions in Oracle

Mon, 2024-08-19 19:47
First, you have to identify the session to be killed; then terminate it with ALTER SYSTEM KILL SESSION.

Step-by-Step Instructions to Identify and Terminate a Session:


Invoke SQL*Plus:

First, open SQL*Plus to begin the process.

Query V$SESSION to Identify the Session:

Use the following query to retrieve the session details. This will list all active sessions with their respective SID, SERIAL#, STATUS, SCHEMANAME, and PROGRAM.


SELECT SID, SERIAL#, STATUS, SCHEMANAME, PROGRAM FROM v$session;

The SID (Session Identifier) and SERIAL# values of the Oracle session to be killed can then be identified from this output.


Execute the ALTER SYSTEM Command:

Substitute the identified SID and SERIAL# values and issue the alter system kill session command.


ALTER SYSTEM KILL SESSION 'sid,serial#';
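When several sessions must be cleaned up at once, you can have SQL generate the kill statements for you. A sketch (the schema name APP_USER is illustrative; review the generated statements before running them):


SELECT 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''';' AS kill_cmd
FROM   v$session
WHERE  schemaname = 'APP_USER';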

Handling Sessions Marked for Kill:

Sometimes, Oracle is not able to kill the session immediately with the alter system kill session command alone. Upon issuing the command, the session may be 'marked for kill' and will be terminated as soon as possible.


Forcing Session Termination:

In the case where a session is 'marked for kill' and not terminated immediately, you can force the termination by adding the immediate keyword:


ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;

Verify Termination:

To ensure that the session has been terminated, re-query the V$SESSION dynamic performance view:


SELECT SID, SERIAL#, STATUS, SCHEMANAME, PROGRAM FROM v$session WHERE SID = &sid AND SERIAL# = &serial#;

The value of the STATUS column will be 'ACTIVE' when the session is making a SQL call, 'INACTIVE' if it is not, and 'KILLED' once the session has been marked for kill but not yet cleaned up.


Confirm PMON Cleanup:

After the Process Monitor (PMON) has cleaned up the session, the row will be removed from V$SESSION. Re-query to confirm:


SELECT SID, SERIAL#, STATUS, SCHEMANAME, PROGRAM FROM v$session WHERE SID = &sid AND SERIAL# = &serial#;

Handling RAC Environments:

In a Real Application Clusters (RAC) environment, you can optionally specify the INST_ID, which is shown when querying the GV$SESSION view. This allows you to kill a session on a different RAC node.


ALTER SYSTEM KILL SESSION 'sid,serial#,@inst_id';

By following these detailed steps, you can efficiently identify and terminate a session using SQL*Plus, ensuring minimal disruption and maintaining database integrity.

Categories: DBA Blogs

Oracle Resource Manager: Granular Control for Managing Idle Sessions

Fri, 2024-04-19 10:46


Oracle Resource Manager: Granular Control for Managing Idle Sessions


In database management, efficiently handling idle sessions is essential to maintaining performance and resource availability. Oracle Resource Manager is a powerful tool, often underutilized, that offers granular control over how sessions consume resources. This post explores how to leverage Oracle Resource Manager to monitor and automatically terminate idle sessions, complete with in-depth, practical examples.


Why Use Oracle Resource Manager?


Oracle Resource Manager allows DBAs to define and enforce resource limits on user sessions based on specific conditions, such as idle time, CPU usage, and session priority. This level of control can be particularly useful in environments with high session volumes, such as transactional systems or shared database infrastructures, where idle sessions can prevent other active sessions from connecting.


Setting Up Resource Manager for Idle Session Management


To illustrate, let's walk through a scenario where we have a transactional database with frequent user connections. Our goal is to manage idle sessions, terminating any session that remains idle for over an hour. 


Step 1: Creating a Resource Plan


A resource plan acts as a blueprint, defining how Oracle should handle session resources. We’ll create a resource plan named `Session_Management_Plan` to automatically terminate sessions that stay idle for too long.


SQL:

BEGIN

    DBMS_RESOURCE_MANAGER.create_pending_area();


    DBMS_RESOURCE_MANAGER.create_plan(

        plan    => 'Session_Management_Plan',

        comment => 'Resource Plan to handle idle sessions exceeding 1 hour'

    );


    DBMS_RESOURCE_MANAGER.submit_pending_area();

END;

/



Here, we set up a simple resource plan structure to act as a container for directives. The `create_pending_area` procedure places the configuration in a pending state until it is finalized and submitted.


Step 2: Defining Consumer Groups


In Oracle Resource Manager, consumer groups classify sessions based on their resource needs. Here, we create two consumer groups: `ACTIVE_SESSIONS` for sessions with standard activity and `IDLE_TERMINATION` for sessions that exceed the idle limit.


SQL:

BEGIN

    DBMS_RESOURCE_MANAGER.create_pending_area();


    DBMS_RESOURCE_MANAGER.create_consumer_group(

        consumer_group => 'ACTIVE_SESSIONS',

        comment        => 'Group for actively monitored sessions'

    );


    DBMS_RESOURCE_MANAGER.create_consumer_group(

        consumer_group => 'IDLE_TERMINATION',

        comment        => 'Group for sessions to be terminated if idle too long'

    );


    DBMS_RESOURCE_MANAGER.submit_pending_area();

END;

/



These consumer groups allow us to specify different rules and behaviors based on session status.
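One step the plan itself does not cover: sessions must actually be mapped into `ACTIVE_SESSIONS`, or they will land in the default `OTHER_GROUPS`. A sketch using a login-time mapping rule (the user name `APP_USER` is illustrative):


SQL:

BEGIN
    DBMS_RESOURCE_MANAGER.create_pending_area();

    -- Route sessions of a given database user into ACTIVE_SESSIONS at login
    DBMS_RESOURCE_MANAGER.set_consumer_group_mapping(
        attribute      => DBMS_RESOURCE_MANAGER.oracle_user,
        value          => 'APP_USER',
        consumer_group => 'ACTIVE_SESSIONS'
    );

    DBMS_RESOURCE_MANAGER.submit_pending_area();

    -- The user also needs the privilege to switch into the group
    DBMS_RESOURCE_MANAGER_PRIVS.grant_switch_consumer_group(
        grantee_name   => 'APP_USER',
        consumer_group => 'ACTIVE_SESSIONS',
        grant_option   => FALSE
    );
END;
/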


Step 3: Creating Plan Directives


Directives define the rules for each consumer group within a plan. One subtlety: the `switch_time` parameter measures active execution time, not idle time, so idle sessions are handled with the dedicated `max_idle_time` and `max_idle_blocker_time` parameters instead. We’ll set a directive on `ACTIVE_SESSIONS` that terminates sessions idle for over an hour, along with the mandatory catch-all directive for `OTHER_GROUPS` that every plan with consumer-group directives requires. (With `max_idle_time`, Resource Manager kills idle sessions directly, so the `IDLE_TERMINATION` group from Step 2 serves only as a classification bucket if you still want one.)


SQL:

BEGIN
    DBMS_RESOURCE_MANAGER.create_pending_area();

    DBMS_RESOURCE_MANAGER.create_plan_directive(
        plan                  => 'Session_Management_Plan',
        group_or_subplan      => 'ACTIVE_SESSIONS',
        comment               => 'Standard sessions with monitoring for idle state',
        max_idle_time         => 3600, -- 1 hour idle limit (3600 seconds)
        max_idle_blocker_time => 900   -- idle sessions blocking others are killed after 15 minutes
    );

    DBMS_RESOURCE_MANAGER.create_plan_directive(
        plan             => 'Session_Management_Plan',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'Mandatory directive for sessions not mapped to a group'
    );

    DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/



In this configuration:

- `max_idle_time` specifies the idle duration threshold (3600 seconds or 1 hour) after which a session is terminated.

- `max_idle_blocker_time` applies a shorter limit to idle sessions that are blocking other sessions, since those cause the most disruption.

- The `OTHER_GROUPS` directive is required in any plan that has consumer-group directives; it covers sessions not mapped to a listed group.


Step 4: Activating the Resource Plan

With the resource plan and directives set, we activate `Session_Management_Plan` to enforce these rules on the database.


SQL:

ALTER SYSTEM SET resource_manager_plan = 'Session_Management_Plan';

 

By activating this plan, Oracle will apply our specified rules for monitoring idle sessions and automatically terminate them after an hour of inactivity.


Step 5: Verifying and Monitoring Session Status

To ensure the plan is working as expected, monitor session activity by checking which sessions are in `ACTIVE_SESSIONS` versus `IDLE_TERMINATION`.


SQL:

SELECT username,
       sid,
       status,
       last_call_et AS idle_seconds,
       resource_consumer_group
FROM   v$session
WHERE  resource_consumer_group IN ('ACTIVE_SESSIONS', 'IDLE_TERMINATION')
ORDER BY idle_seconds DESC;



The `last_call_et` column shows idle time in seconds, while the `resource_consumer_group` column shows which group each session currently belongs to. This query provides visibility into session status, letting you track how effectively idle sessions are managed.


Additional Tips for Using Oracle Resource Manager


1. Testing in Non-Production: Resource Manager settings should be tested in non-production environments first to ensure they align with your workload and that CPU or memory-intensive processes are unaffected by idle session policies.

2. Avoid Overly Aggressive Termination Policies: While terminating idle sessions frees up resources, be cautious with overly aggressive thresholds that could disrupt user sessions critical for business operations.

3. Regularly Review Session Activity: Use `DBA_HIST_ACTIVE_SESS_HISTORY` or `DBA_HIST_RSRC_CONSUMER_GROUP` views to analyze historical session activity and refine thresholds as needed.


Conclusion:

Oracle Resource Manager offers a structured, policy-based way to manage session resources, making it easier for DBAs to maintain optimal database performance. By automatically handling idle sessions, DBAs can prevent resource bottlenecks and maximize availability for active sessions. With careful setup and monitoring, Resource Manager becomes an invaluable tool in any Oracle DBA’s toolkit.


References: Oracle Database Administrator’s Guide, Oracle 19c Documentation, Oracle Database Resource Manager Concepts and Usage.


Categories: DBA Blogs

Managing and Terminating Idle Oracle Database Sessions Automatically

Wed, 2024-03-20 10:35

 



Managing and Terminating Idle Oracle Database Sessions Automatically


In database environments, maintaining optimal performance requires careful management of user sessions, especially idle ones. Idle sessions can consume resources and lead to unnecessary wait times, especially when the maximum number of sessions is reached. In this post, we’ll explore how to identify idle sessions, create a configuration to manage them, and set up an automatic process to terminate idle sessions that last longer than one hour.


Understanding Idle Sessions in Oracle Database

An idle session is a session where the user is connected but not actively executing SQL statements. While Oracle can handle many sessions simultaneously, excessive idle sessions can eventually strain resources. By managing and terminating these sessions, DBAs can optimize resource usage, particularly in environments where there’s a limit on concurrent connections.


Method 1: Using Oracle Profiles to Manage Idle Sessions

Oracle profiles allow DBAs to control session-level resource usage, including limiting the amount of idle time.

Step 1: Create a Profile with an Idle Time Limit


To set an idle timeout of one hour (60 minutes), we’ll create a profile with an `IDLE_TIME` resource limit. 


SQL: 

CREATE PROFILE user_idle_limit_profile

LIMIT

    IDLE_TIME 60; -- 60 minutes


Here, `IDLE_TIME` represents the maximum allowed idle time for a session, in minutes. Once this threshold is exceeded, Oracle will automatically disconnect the session.
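One prerequisite is easy to miss: profile resource limits such as `IDLE_TIME` are only enforced when the `RESOURCE_LIMIT` initialization parameter is `TRUE`. You can check and, if needed, enable it (verify the default on your version):


SQL: 

SHOW PARAMETER resource_limit

ALTER SYSTEM SET resource_limit = TRUE;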


Step 2: Assign the Profile to Users


To enforce the idle timeout, assign the profile to users who require monitoring:


SQL: 

ALTER USER john_doe PROFILE user_idle_limit_profile;

ALTER USER jane_smith PROFILE user_idle_limit_profile;


All users assigned to `user_idle_limit_profile` will be disconnected automatically if idle for more than one hour. Keep in mind that an `IDLE_TIME` of zero means there is no idle limit.


 Note: Sessions terminated this way will receive an ORA-02396 warning message, indicating that they have been disconnected due to inactivity.


Step 3: Verifying Profile Application


You can verify if the profile was applied correctly by checking the `DBA_USERS` view:


SQL: 

SELECT username, profile 

FROM dba_users 

WHERE profile = 'USER_IDLE_LIMIT_PROFILE';


This confirms that the intended users are bound by the idle session limit.


Method 2: Using Oracle Resource Manager for More Granular Control


Oracle’s Resource Manager offers more control. It allows you to create rules that monitor and act on specific session conditions, including idle sessions.


Step 1: Create a Resource Plan

First, create a resource plan that specifies the action to take when a session remains idle for too long. Here’s an example:


SQL: 

BEGIN

    DBMS_RESOURCE_MANAGER.create_pending_area();


    DBMS_RESOURCE_MANAGER.create_plan(

        plan    => 'idle_session_plan',

        comment => 'Resource plan to terminate idle sessions'

    );


    DBMS_RESOURCE_MANAGER.create_plan_directive(
        plan             => 'idle_session_plan',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'Terminate sessions idle for more than 60 minutes',
        max_idle_time    => 3600 -- 3600 seconds (1 hour)
    );

    DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/


In this example, the `max_idle_time` parameter is set to 3600 seconds (1 hour). If a session remains idle for this duration, Resource Manager terminates it automatically. (Note that `switch_time` with a `KILL_SESSION` switch group acts on active execution time rather than idle time, which is why `max_idle_time` is the right directive here.)


Step 2: Activate the Resource Plan

Once the resource plan is defined, activate it to enforce the rules:


SQL: 

ALTER SYSTEM SET resource_manager_plan = 'idle_session_plan';

With this plan active, sessions that remain idle for over an hour are automatically terminated, reducing unnecessary resource consumption.


Step 3: Verifying Active Resource Plan

To check which resource plan is active, you can use:


SQL: 

SELECT name
FROM v$rsrc_plan
WHERE is_top_plan = 'TRUE';


This will confirm that `idle_session_plan` is the active resource management plan.


Monitoring and Auditing Idle Sessions

Regularly reviewing the sessions terminated by the profile or resource manager helps ensure that active users are unaffected. You can monitor terminated sessions by querying Oracle’s session and resource views:


SQL: 

SELECT sid, serial#, username, status, last_call_et

FROM v$session

WHERE status = 'INACTIVE' 

  AND last_call_et > 3600; -- Sessions idle for more than 1 hour


The `last_call_et` column records the idle duration in seconds. This query allows you to keep track of all sessions that have been idle for over an hour.


Final Thoughts and Best Practices


1. Set Realistic Limits: Configure idle timeouts that reflect actual user behavior. For instance, if users typically require 30-minute breaks, a 60-minute idle timeout may be ideal.

2. Monitor Frequently: As with any automated process, regularly monitor your configuration to ensure that critical sessions are not mistakenly terminated.

3. Communicate with Users: Inform users about session policies, especially if implementing a more aggressive timeout.


By setting up automated processes to manage idle sessions, Oracle DBAs can improve resource utilization, ensure a smooth user experience, and optimize system performance. This proactive approach helps avoid potential bottlenecks, allowing DBAs to focus on higher-impact tasks.


Categories: DBA Blogs

Optimizing Storage and Performance with Oracle Database Advanced Compression

Thu, 2024-02-08 10:25

In the world of database management, the challenge of balancing performance with storage efficiency is ever-present. Oracle's Advanced Compression feature offers a robust solution to this by reducing storage footprint, optimizing performance, and saving costs — all without compromising data integrity. In this post, we’ll dive into how you can leverage advanced compression in Oracle, with practical examples and straightforward code snippets.

  Why Use Advanced Compression in Oracle?

Advanced Compression isn't just about saving storage; it enhances data retrieval speed by minimizing the amount of data that needs to be read from disk. This is especially beneficial for larger tables or indices that experience heavy I/O operations. Advanced Compression works across various Oracle database objects and data types, from tables and indexes to LOBs and backups.

  Types of Compression in Oracle Database

Oracle provides several types of compression suited to various workloads:

1. Basic Table Compression: Suitable for read-only or static tables, it compresses data as it's loaded.
2. OLTP Table Compression: Designed for transactional environments, it works dynamically on data inserts and updates.
3. Hybrid Columnar Compression (HCC): Available only on Exadata and a few other Oracle storage solutions, it’s ideal for data warehouses and archival tables.
4. Index Compression: Reduces the storage for indexes by compressing repeated values.

Let’s walk through these options with examples using a hypothetical table and index.

  1. Basic Table Compression

To start, let’s look at Basic Table Compression, commonly used for data warehousing or infrequently updated tables.

Suppose we have a table `Employee_Records` that stores historical data and is rarely modified:

 SQL :
CREATE TABLE Employee_Records (
    employee_id NUMBER,
    first_name VARCHAR2(50),
    last_name VARCHAR2(50),
    department_id NUMBER,
    hire_date DATE
) COMPRESS BASIC;
 

Here, we specify `COMPRESS BASIC`, which instructs Oracle to compress the data during bulk insert operations. Keep in mind that this is ideal for tables where data modifications are infrequent, as Basic Compression doesn’t automatically recompress on updates.

  2. OLTP Table Compression

For transactional tables with frequent updates, OLTP Table Compression is more appropriate. Oracle's advanced algorithm ensures that as rows are inserted or updated, the data remains compressed without additional manual intervention.

Let’s use `Order_Details`, a table holding real-time sales data:

 SQL :
CREATE TABLE Order_Details (
    order_id NUMBER,
    product_id NUMBER,
    quantity NUMBER,
    order_date DATE,
    customer_id NUMBER
) COMPRESS FOR OLTP;
 

With `COMPRESS FOR OLTP`, this table will dynamically compress data during both inserts and updates, keeping storage usage low while maintaining fast performance.

Tip: OLTP compression is a powerful way to balance performance and space in high-transaction environments. However, it requires some CPU overhead, so monitor performance in CPU-bound systems.
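Compression can also be enabled on an existing table by rebuilding it. A sketch with illustrative object names; note that `ROW STORE COMPRESS ADVANCED` is the 12c+ spelling of `COMPRESS FOR OLTP`:

 SQL :
-- Rebuild the table compressed; this rewrites all rows and takes a lock
ALTER TABLE Order_Details MOVE ROW STORE COMPRESS ADVANCED;

-- MOVE marks the table's indexes UNUSABLE, so rebuild them afterwards
ALTER INDEX order_details_pk REBUILD;
 

Schedule such a rebuild in a maintenance window for large, busy tables.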

  3. Hybrid Columnar Compression (HCC)

HCC is an exclusive feature for Exadata, ZFS, and Oracle Cloud customers. It combines row and column storage, making it highly efficient for queries that scan large datasets. HCC can offer multiple levels, like `COMPRESS FOR QUERY` and `COMPRESS FOR ARCHIVE`, allowing you to optimize for retrieval speed or maximum storage savings.

Using `Sales_Archive` as an example table:

 SQL :
CREATE TABLE Sales_Archive (
    sale_id NUMBER,
    region VARCHAR2(30),
    sale_amount NUMBER,
    sale_date DATE
) COMPRESS FOR QUERY HIGH;
 

In this case, `COMPRESS FOR QUERY HIGH` balances both storage efficiency and query performance, ideal for analytic queries on historical data. For long-term storage where frequent access isn’t necessary, consider `COMPRESS FOR ARCHIVE HIGH`.

  4. Index Compression

Finally, Index Compression can be particularly beneficial for large indexes with repeating values. By compressing the key prefix, Oracle significantly reduces index size, speeding up queries.

Imagine we have a commonly queried index on the `Employee_Records` table:

 SQL :
CREATE INDEX idx_employee_department 
ON Employee_Records (department_id, last_name, first_name) COMPRESS 1;
 

In this example, `COMPRESS 1` compresses the leading column (`department_id`) in the index. If more columns contain duplicate values, you can increase the compression level. Index compression works well in cases where there’s repetition, such as departmental groupings, making it a great fit for HR or CRM databases.

  Monitoring and Verifying Compression Benefits

To see the benefits of compression, you can query Oracle's data dictionary to monitor the space savings:

 SQL :
SELECT segment_name, 
       segment_type, 
       bytes / 1024 / 1024 AS size_MB 
FROM   dba_segments 
WHERE  segment_name = 'EMPLOYEE_RECORDS';
 

By running this query before and after enabling compression, you can quantify the storage savings directly.
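Before compressing a large table, you can also estimate the likely savings with the `DBMS_COMPRESSION` advisor. A sketch using the 12c+ parameter names (the scratch tablespace and owner are assumptions; on 11g the parameter and constant names differ slightly):

 SQL :
SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp   PLS_INTEGER;
  l_blkcnt_uncmp PLS_INTEGER;
  l_row_cmp      PLS_INTEGER;
  l_row_uncmp    PLS_INTEGER;
  l_cmp_ratio    NUMBER;
  l_comptype_str VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.get_compression_ratio(
    scratchtbsname => 'USERS',            -- scratch tablespace used for sampling
    ownname        => USER,
    objname        => 'EMPLOYEE_RECORDS',
    subobjname     => NULL,
    comptype       => DBMS_COMPRESSION.comp_advanced,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptypestr    => l_comptype_str);
  DBMS_OUTPUT.put_line('Estimated compression ratio: ' || l_cmp_ratio);
END;
/
 

A ratio of, say, 3 means the compressed segment is expected to take roughly one third of its current space.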

  Considerations and Best Practices

1. Testing: Test compression in a non-production environment to assess CPU overhead and performance impact.
2. Monitoring: Regularly monitor space and CPU metrics to ensure that compression is meeting your goals without overtaxing system resources.
3. Reorganizing Tables: Consider periodically reorganizing heavily updated tables with OLTP compression to maintain optimal storage savings.

  Conclusion

Oracle's Advanced Compression provides a flexible toolkit for database administrators looking to optimize both storage and performance. By selecting the right type of compression for each workload, you can achieve substantial space savings while maintaining fast access times. Try implementing compression gradually, starting with non-critical tables, and measure the impact to find the best fit for your database environment.

Categories: DBA Blogs

Import a single table from a full export backup in Oracle

Fri, 2019-09-13 11:01

Import a single table from a full export backup and remap it:


impdp USERNAME/PASSWORD tables=SCHEMA.TABLE_NAME directory=DPUMP dumpfile=DUMPFILE_%U.dmp
remap_schema=SOURCE:TARGET 
REMAP_TABLE=TABLE_NAME:TABLE_NAME_NEW


Optional variations on the command above:

  1. Remove the REMAP_SCHEMA / REMAP_TABLE parameters if you don't need remapping.
  2. Add ENCRYPTION_PASSWORD=IF_ANY when the dump file was created with an encryption password.
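Before running the real import, you can preview the DDL it would execute. The SQLFILE parameter writes the statements to a file instead of applying them (the file name here is illustrative):


impdp USERNAME/PASSWORD tables=SCHEMA.TABLE_NAME directory=DPUMP dumpfile=DUMPFILE_%U.dmp sqlfile=preview_import.sql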
Categories: DBA Blogs

Restore archivelogs from RMAN backup

Fri, 2018-02-02 14:06




Restore archive logs from RMAN backup


RMAN> restore archivelog from logseq=37501 until logseq=37798 thread=1;

or

RMAN> restore archivelog sequence between 37501 and 37798 thread 1;
Categories: DBA Blogs

Setting up OPatch environment variable

Mon, 2017-10-16 17:22

Setting up Optach environment variable :

For Korn / Bourne shell:

% export PATH=$PATH:$ORACLE_HOME/OPatch
For C Shell:
% setenv PATH $PATH:$ORACLE_HOME/OPatch
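A quick way to confirm the directory really landed on PATH before invoking opatch (a sketch; the ORACLE_HOME value is illustrative, adjust it to your installation):

```shell
# Hypothetical ORACLE_HOME -- adjust to your installation
ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/db_1
export PATH=$PATH:$ORACLE_HOME/OPatch
# Print the OPatch entry if it is now on PATH
echo "$PATH" | tr ':' '\n' | grep -x "$ORACLE_HOME/OPatch"
```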
Categories: DBA Blogs

Simple password encryption package to demonstrate how

Tue, 2017-09-12 11:51
rem -----------------------------------------------------------------------
rem Purpose:   Simple password encryption package to demonstrate how
rem                  values can be encrypted and decrypted using Oracle's
rem                  DBMS Obfuscation Toolkit
rem Note:        Connect to SYS AS SYSDBA and run ?/rdbms/admin/catobtk.sql
rem Author:     Frank Naude, Oracle FAQ
rem -----------------------------------------------------------------------


---- create table to store encrypted data

CREATE TABLE USERS_INFO
(
  USERNAME VARCHAR2(20 BYTE),
  PASS     VARCHAR2(20 BYTE)
) TABLESPACE users;

-----------------------------------------------------------------------
-----------------------------------------------------------------------

CREATE OR REPLACE PACKAGE PASSWORD AS
   function encrypt(i_password varchar2) return varchar2;
   function decrypt(i_password varchar2) return varchar2;
END PASSWORD;
/
show errors


CREATE OR REPLACE PACKAGE BODY PASSWORD AS

  -- key must be exactly 8 bytes long
  c_encrypt_key varchar2(8) := 'key45678';

  function encrypt (i_password varchar2) return varchar2 is
    v_encrypted_val varchar2(38);
    v_data          varchar2(38);
  begin
     -- Input data must have a length divisible by eight
     v_data := RPAD(i_password,(TRUNC(LENGTH(i_password)/8)+1)*8,CHR(0));

     DBMS_OBFUSCATION_TOOLKIT.DESENCRYPT(
        input_string     => v_data,
        key_string       => c_encrypt_key,
        encrypted_string => v_encrypted_val);
     return v_encrypted_val;
  end encrypt;

  function decrypt (i_password varchar2) return varchar2 is
    v_decrypted_val varchar2(38);
  begin
     DBMS_OBFUSCATION_TOOLKIT.DESDECRYPT(
        input_string     => i_password,
        key_string       => c_encrypt_key,
        decrypted_string => v_decrypted_val);
     return v_decrypted_val;
  end decrypt;


end PASSWORD;
/
show errors

-- Test if it is working...

select password.encrypt('PASSWORD1') from dual;
select password.decrypt(password.encrypt('PASSWORD1')) from dual;
select password.encrypt('PSW2') from dual;
select password.decrypt(password.encrypt('PSW2')) from dual;


--- Insert encrypted Password ---

insert into USERS_INFO values ('redddy', (select password.encrypt('REDDY1') from dual));
select password.decrypt(pass) from USERS_INFO where USERNAME='redddy';
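The RPAD expression in the package always pads the input up to the next multiple of eight bytes, since DES works on 8-byte blocks. A quick shell check of the same arithmetic (a sketch; pad_len is a throwaway helper, not part of the package):

```shell
# Same formula as the PL/SQL: (TRUNC(len/8)+1)*8
# Note it always adds at least one byte of padding, even for an exact multiple of 8
pad_len() {
  local len=${#1}
  echo $(( (len / 8 + 1) * 8 ))
}
pad_len "PASSWORD1"   # 9 chars  -> 16
pad_len "12345678"    # 8 chars  -> 16 (a full extra block, matching RPAD's behavior)
```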

Categories: DBA Blogs

update rows from multiple tables (correlated update)

Fri, 2017-07-14 10:01

Cross-table update (also known as correlated update, or multiple-table update) in Oracle uses a non-ANSI SQL syntax to update rows in one table with values from another. The syntax differs considerably from other database systems such as MS SQL Server or MySQL.
In this article, we look at two scenarios for Oracle cross-table updates.

Let's take two tables, TABA and TABB:

Records in TABA (the sample rows were shown as a screenshot):
Records in TABB (the sample rows were shown as a screenshot):
1. Update column LNAME in table TABA with values from the common column LNAME in table TABB.

The update query below sets TABA.LNAME by looking up the row with the same ID value in TABB.

 update TABA A
set (a.LNAME) = (select B.LNAME FROM TABB B where A.ID=B.ID);

2. Update data in two columns in table A based on a common column in table B.

If you need to update multiple columns simultaneously, list them comma-separated inside the parentheses after the SET keyword, matching the columns selected in the subquery.

update TABA A
set (a.LNAME, a.SAL) = (select B.LNAME, B.SAL FROM TABB B where A.ID=B.ID);
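One caveat worth noting with this form of correlated update: rows in TABA that have no matching ID in TABB get their target columns set to NULL, because the subquery returns no row. A common safeguard (a sketch, using the same tables and columns as above) is to restrict the update to matching rows with an EXISTS clause:

```sql
update TABA A
set (a.LNAME, a.SAL) = (select B.LNAME, B.SAL FROM TABB B where A.ID = B.ID)
where exists (select 1 FROM TABB B where A.ID = B.ID);
```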



Categories: DBA Blogs

How to Secure our Oracle Databases

Fri, 2017-06-02 13:27

How secure can we make our Oracle databases?

This is a routine question in the minds of most database administrators: how secure are our databases, and can we make them any more secure?

I am writing this post to share my experience and knowledge on securing databases. I personally follow the tips below to secure my databases:


 1. Grant access only to those users who really need to access the database.
 2. Remove all unnecessary grants/privileges from users and roles.
 3. Frequently audit failed logins to verify who is trying to log in and what they are doing.
 4. If a user requests elevated privileges, talk to them and understand their requirements.
 5. Grant no more access than is needed.
 6. Users sometimes need temporary access; make sure it is revoked once the task is complete.
 7. Define a clear boundary on who can access what.
 8. Use user profiles and auditing to ensure all activities are tracked.
 9. Enforce complex passwords. Here is the Link on how to do it.
10. Use triggers to track user activity.
11. Make sure passwords are encrypted in applications; this can be a potential threat if your application code is compromised.
12. Add a password to your listener.
13. Allow access only from needed/known servers and clients. Use Valid_node_checking (Link on how to restrict access to servers/clients).
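For item 13, valid node checking is configured in sqlnet.ora on the database server. A minimal sketch (the host name and IP address below are illustrative):

```
# sqlnet.ora -- restrict which clients may connect through the listener
TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (appserver1, 192.168.1.10)
```

Restart the listener after the change for it to take effect.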







Categories: DBA Blogs

java.lang.SecurityException: The jurisdiction policy files are not signed by a trusted signer

Thu, 2016-12-15 13:18


I was trying to install OID (Oracle Internet Directory) and got this error:

Problem:




        at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:186)
        at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
        at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.SecurityException: Can not initialize cryptographic mechanism
        at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:88)
        ... 31 more
Caused by: java.lang.SecurityException: The jurisdiction policy files are not signed by a trusted signer!
        at javax.crypto.JarVerifier.verifyPolicySigned(JarVerifier.java:328)
        at javax.crypto.JceSecurity.loadPolicies(JceSecurity.java:317)
        at javax.crypto.JceSecurity.setupJurisdictionPolicies(JceSecurity.java:262)
        at javax.crypto.JceSecurity.access$000(JceSecurity.java:48)
        at javax.crypto.JceSecurity$1.run(JceSecurity.java:80)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:77)



Cause:

The current Java version was 1.8.*, for which the default JCE policy files are not sufficient.
In this case JDK 1.8.0_111 was installed on all nodes in the cluster and JCE local policy version 6 was used for AES-256 Kerberos encryption. The JCE policy files must match the JDK version.
      
[oracle@linux06 jdk1.8.0_111]$ cd ..
[oracle@linux06 java]$ ls
default  jdk1.8.0_111  latest
[oracle@linux06 java]$ cd default/
[oracle@linux06 default]$ ls
bin        javafx-src.zip  man          THIRDPARTYLICENSEREADME-JAVAFX.txt
COPYRIGHT  jre             README.html  THIRDPARTYLICENSEREADME.txt
db         lib             release
include    LICENSE         src.zip


Solution:

Download :

For Java 6 use :
http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html

For Java 7 use :
http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html

For java 8 use :
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html        

********************    ********************    ********************
Update Java with the new unlimited jurisdiction policy files:
********************    ********************    ********************

After download and unzip :

[oracle@linux06 JCE]$ unzip jce_policy-8.zip
Archive:  jce_policy-8.zip
   creating: UnlimitedJCEPolicyJDK8/
  inflating: UnlimitedJCEPolicyJDK8/local_policy.jar
  inflating: UnlimitedJCEPolicyJDK8/README.txt
  inflating: UnlimitedJCEPolicyJDK8/US_export_policy.jar
[oracle@linux06 JCE]$ ls -ll
total 16
-rw-rw-r--. 1 oracle oracle 8409 Dec 14 10:39 jce_policy-8.zip
drwxrwxr-x. 2 oracle oracle 4096 Dec 20  2013 UnlimitedJCEPolicyJDK8
[oracle@linux06 JCE]$ pwd
/u01/app/SFTW/JCE
[oracle@linux06 JCE]$ ls
jce_policy-8.zip  UnlimitedJCEPolicyJDK8
[oracle@linux06 JCE]$ cd UnlimitedJCEPolicyJDK8/
[oracle@linux06 UnlimitedJCEPolicyJDK8]$ ls
local_policy.jar  README.txt  US_export_policy.jar

********************    ********************    ********************
As root, back up and replace the files (US_export_policy.jar & local_policy.jar)
********************    ********************    ********************

[oracle@linux06 security]$ su root
Password:
[root@linux06 security]# ls
blacklist          java.policy    local_policy.jar
blacklisted.certs  java.security  trusted.libraries
cacerts            javaws.policy  US_export_policy.jar
[root@linux06 security]# cd /usr/java/default/jre/lib/security
[root@linux06 security]# mv US_export_policy.jar US_export_policy.jar_bak
[root@linux06 security]# mv local_policy.jar local_policy.jar_bak
[root@linux06 security]# ls -ll
total 164
-rw-r--r--. 1 root root   4054 Sep 22 18:23 blacklist
-rw-r--r--. 1 root root   1273 Sep 22 18:23 blacklisted.certs
-rw-r--r--. 1 root root 112860 Sep 22 18:23 cacerts
-rw-r--r--. 1 root root   2466 Sep 22 18:23 java.policy
-rw-r--r--. 1 root root  27358 Sep 22 18:23 java.security
-rw-r--r--. 1 root root     98 Sep 22 18:23 javaws.policy
-rw-r--r--. 1 root root   3405 Sep 22 18:35 local_policy.jar_bak
-rw-r--r--. 1 root root      0 Sep 22 18:23 trusted.libraries
-rw-r--r--. 1 root root   2920 Sep 22 18:35 US_export_policy.jar_bak
[root@linux06 security]# pwd
/usr/java/default/jre/lib/security
[root@linux06 security]# cp /u01/app/SFTW/JCE/UnlimitedJCEPolicyJDK8/US_export_policy.jar /usr/java/default/jre/lib/security
[root@linux06 security]# cp /u01/app/SFTW/JCE/UnlimitedJCEPolicyJDK8/local_policy.jar /usr/java/default/jre/lib/security
[root@linux06 security]# ls -ll
total 172
-rw-r--r--. 1 root root   4054 Sep 22 18:23 blacklist
-rw-r--r--. 1 root root   1273 Sep 22 18:23 blacklisted.certs
-rw-r--r--. 1 root root 112860 Sep 22 18:23 cacerts
-rw-r--r--. 1 root root   2466 Sep 22 18:23 java.policy
-rw-r--r--. 1 root root  27358 Sep 22 18:23 java.security
-rw-r--r--. 1 root root     98 Sep 22 18:23 javaws.policy
-rw-r--r--. 1 root root   3035 Dec 14 10:47 local_policy.jar
-rw-r--r--. 1 root root   3405 Sep 22 18:35 local_policy.jar_bak
-rw-r--r--. 1 root root      0 Sep 22 18:23 trusted.libraries
-rw-r--r--. 1 root root   3023 Dec 14 10:46 US_export_policy.jar
-rw-r--r--. 1 root root   2920 Sep 22 18:35 US_export_policy.jar_bak
Categories: DBA Blogs

bash: /bin/install/.oui: No such file or directory

Thu, 2016-12-08 21:30

 Problem:

[oracle@linux5 database]$ . runInstaller
bash: /bin/install/.oui: No such file or directory
[oracle@linux5 database]$ uname -a
Linux linux5 3.8.13-16.2.1.el6uek.x86_64 #1 SMP Thu Nov 7 17:01:44 PST 2013 x86_64 x86_64 x86_64 GNU/Linux


Solution:

The script was sourced (". runInstaller") instead of being executed, so it could not resolve its own location. Run it as "./runInstaller" instead:

[oracle@linux5 database]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 20461 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-11-22_09-46-02AM. Please wait ...[oracle@linux5 database]$
Categories: DBA Blogs

uninstall java on linux

Fri, 2016-12-02 12:43
If you are not sure which dependent packages might be blocking the Java removal, you can use "yum remove jdk*" instead;
this also takes care of the dependent RPMs.

[root@linux06 usr]# yum remove jdk1.8.0_111-1.8.0_111-fcs.i586

Loaded plugins: refresh-packagekit, security
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package jdk1.8.0_111.i586 2000:1.8.0_111-fcs will be erased
--> Processing Dependency: java for package: jna-3.2.4-2.el6.x86_64
--> Running transaction check
---> Package jna.x86_64 0:3.2.4-2.el6 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================
 Package           Arch        Version                 Repository                                                Size
======================================================================================================================
Removing:
 jdk1.8.0_111      i586        2000:1.8.0_111-fcs      @/jdk-8u111-linux-i586                                   259 M
Removing for dependencies:
 jna               x86_64      3.2.4-2.el6             @anaconda-OracleLinuxServer-201311252058.x86_64/6.5      298 k

Transaction Summary
======================================================================================================================
Remove        2 Package(s)

Installed size: 259 M
Is this ok [y/N]: Y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Erasing    : jna-3.2.4-2.el6.x86_64                                                                             1/2
  Erasing    : 2000:jdk1.8.0_111-1.8.0_111-fcs.i586                                                               2/2
  Verifying  : 2000:jdk1.8.0_111-1.8.0_111-fcs.i586                                                               1/2
  Verifying  : jna-3.2.4-2.el6.x86_64                                                                             2/2

Removed:
  jdk1.8.0_111.i586 2000:1.8.0_111-fcs                                                                               

Dependency Removed:
  jna.x86_64 0:3.2.4-2.el6                                                                                           

Complete!
Categories: DBA Blogs

Is it safe to move/recreate alertlog while the database is up and running

Mon, 2016-11-14 19:00


Is it safe to move/recreate the alert log while the database is up and running?


It is totally safe to "mv" or rename it while the database is running. Chopping part of it out would be a lengthy process, and there is a good chance Oracle would write to the file while you are editing it, so I would not advise trying to trim it in place; just move the whole file and Oracle will start anew in another file.

If you want to keep the last N lines "online", after you mv the file, tail the last 100 lines into "alert_also.log" or similar before you archive off the rest.
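The "move, then keep a tail" approach can be sketched as follows (file names are illustrative, and the whole thing runs in a scratch directory so nothing real is touched):

```shell
# Work in a throwaway directory with a stand-in for a grown alert log
work=$(mktemp -d) && cd "$work"
printf 'line %s\n' $(seq 1 500) > alert_orcl.log

# Rename the whole file -- safe while the instance runs; Oracle recreates it on the next write
mv alert_orcl.log alert_orcl_Pre_14Nov2016.log

# Keep the last 100 lines handy before archiving the rest
tail -100 alert_orcl_Pre_14Nov2016.log > alert_also.log
wc -l < alert_also.log   # -> 100
```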



[oracle@Linux03 trace]$ ls -ll alert_*
-rw-r-----. 1 oracle oracle 488012 Nov 14 10:23 alert_orcl.log

I will rename the existing alert log file:
 
[oracle@Linux03 trace]$ mv alert_orcl.log alert_orcl_Pre_14Nov2016.log

[oracle@Linux03 trace]$ ls -ll alert_*
-rw-r-----. 1 oracle oracle 488012 Nov 14 15:42 alert_orcl_Pre_14Nov2016.log


Now let's create some activity that will write to the alert log.

[oracle@Linux03 bin]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Nov 14 16:23:02 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter system switch logfile;

System altered.

SQL> /

Let's see if a new alert log file has been created:

[oracle@Linux03 trace]$ ls -ll alert_*
-rw-r-----. 1 oracle oracle    249 Nov 14 16:23 alert_orcl.log
-rw-r-----. 1 oracle oracle 488012 Nov 14 15:42 alert_orcl_Pre_14Nov2016.log
Categories: DBA Blogs

querying directory permissions granted to a user

Thu, 2016-11-03 16:05
Querying directory permissions granted to a user:

SELECT grantee, table_name AS directory_name,
       LISTAGG(privilege, ',') WITHIN GROUP (ORDER BY privilege) AS grants
  FROM dba_tab_privs
 WHERE table_name = 'DPUMP'
 GROUP BY grantee, table_name;




SAMPLE output:

GRANTEE              DIRECTORY_NAME                 GRANTS
-------------------- ------------------------------ --------------------
SCOTT                DPUMP                          READ,WRITE
TIGER                DPUMP                          READ,WRITE
TOM                  DPUMP                          READ,WRITE
CAM                  DPUMP                          READ,WRITE
SAM                  DPUMP                          READ,WRITE
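For reference, grants like those in the sample output would have been created with a statement of this form (directory and user names taken from the output above):

```sql
GRANT READ, WRITE ON DIRECTORY dpump TO scott;
```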
Categories: DBA Blogs

Failed to auto-stop Oracle Net Listener using ORACLE_HOME/bin/tnslsnr

Thu, 2016-11-03 11:10


Usage: the dbshut script in $ORACLE_HOME/bin shuts down the database and listener.

 [oracle@Linux03 bin]$ ps -ef|grep pmon
oracle   20693     1  0 10:57 ?        00:00:00 ora_pmon_orcl
oracle   21133 19211  0 11:01 pts/0    00:00:00 grep pmon

[oracle@Linux03 bin]$ dbshut
Processing Database instance "orcl": log file /u01/app/oracle/product/12.1.0.2/db_1/shutdown.log

[oracle@Linux03 bin]$ ps -ef|grep pmon
oracle   21287 19211  0 11:09 pts/0    00:00:00 grep pmon
[oracle@Linux03 bin]$





Error : Failed to auto-stop Oracle Net Listener using ORACLE_HOME/bin/tnslsnr

[oracle@Linux03 bin]$ dbshut

Failed to auto-stop Oracle Net Listener using ORACLE_HOME/bin/tnslsnr


Solution: edit the dbshut script and change

From : ORACLE_HOME_LISTNER=$1
 To  :  ORACLE_HOME_LISTNER=$ORACLE_HOME 




Note:
One prerequisite for this script is the flag in /etc/oratab. The third field of each entry tells dbstart/dbshut whether to manage instances from that Oracle home: 'Y' means yes, 'N' means no.

[oracle@Linux03 Desktop]$ grep 'orcl' /etc/oratab
orcl:/u01/app/oracle/product/12.1.0.2/db_1:Y
Categories: DBA Blogs

dbstart: line 275: ORACLE_HOME_LISTNER: command not found

Thu, 2016-11-03 11:00

Usage: the dbstart script in $ORACLE_HOME/bin starts the database and listener.

[oracle@Linux03 bin]$ ps -ef|grep pmon
oracle   20588 19211  0 10:56 pts/0    00:00:00 grep pmon

[oracle@Linux03 bin]$ dbstart
Processing Database instance "orcl": log file /u01/app/oracle/product/12.1.0.2/db_1/startup.log

[oracle@Linux03 bin]$ ps -ef|grep pmon
oracle   20693     1  0 10:57 ?        00:00:00 ora_pmon_orcl
oracle   21035 19211  0 10:57 pts/0    00:00:00 grep pmon

[oracle@Linux03 bin]$ 



Common error with dbstart script :


Error : /u01/app/oracle/product/12.1.0.2/db_1/bin/dbstart: line 275: ORACLE_HOME_LISTNER: command not found

[oracle@Linux03 bin]$ dbstart

/u01/app/oracle/product/12.1.0.2/db_1/bin/dbstart: line 275: ORACLE_HOME_LISTNER: command not found


Solution : Edit dbstart script and change (~ line 275)

 From : ORACLE_HOME_LISTNER=$1
 To   : ORACLE_HOME_LISTNER=$ORACLE_HOME



Note:
One prerequisite for this script is the flag in /etc/oratab. The third field of each entry tells dbstart whether to start instances from that Oracle home: 'Y' means yes, 'N' means no.

[oracle@Linux03 Desktop]$ grep 'orcl' /etc/oratab
orcl:/u01/app/oracle/product/12.1.0.2/db_1:Y
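The oratab check that dbstart performs can be sketched in shell (a throwaway sample file is used here, not the real /etc/oratab):

```shell
# Build a sample oratab: each entry is SID:ORACLE_HOME:Y|N
oratab=$(mktemp)
cat > "$oratab" <<'EOF'
# SID:ORACLE_HOME:Y|N
orcl:/u01/app/oracle/product/12.1.0.2/db_1:Y
test:/u01/app/oracle/product/12.1.0.2/db_1:N
EOF
# Only entries whose third field is Y are auto-started; print their SIDs
awk -F: '!/^#/ && NF >= 3 && $3 == "Y" { print $1 }' "$oratab"
```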

Categories: DBA Blogs

expdp content=data_only

Tue, 2016-10-11 23:19
[oracle@oracle1 dpump]$ expdp atest/password directory=dpump dumpfile=test_tab1.dmp content=data_only tables=test_tab1 logfile=test_tab1.log

Export: Release 11.2.0.1.0 - Production on Wed Feb 11 10:58:23 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "ATEST"."SYS_EXPORT_TABLE_01":  atest/******** directory=dpump dumpfile=test_tab1.dmp content=data_only tables=test_tab1 logfile=test_tab1.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
. . exported "ATEST"."TEST_TAB1"                         5.937 KB      11 rows
Master table "ATEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ATEST.SYS_EXPORT_TABLE_01 is:
  /u01/app/oracle/dpump/test_tab1.dmp
Job "ATEST"."SYS_EXPORT_TABLE_01" successfully completed at 10:58:26

[oracle@oracle1 dpump]$ clear
[oracle@oracle1 dpump]$ impdp atest2/password directory=dpump dumpfile=test_tab1.dmp content=data_only logfile=test_tab1_imp.log TABLE_EXISTS_ACTION=truncate remap_schema=atest:atest2

Import: Release 11.2.0.1.0 - Production on Wed Feb 11 10:58:50 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "ATEST2"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ATEST2"."SYS_IMPORT_FULL_01":  atest2/******** directory=dpump dumpfile=test_tab1.dmp content=data_only logfile=test_tab1_imp.log TABLE_EXISTS_ACTION=truncate remap_schema=atest:atest2
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ATEST2"."TEST_TAB1"                        5.937 KB      11 rows
Job "ATEST2"."SYS_IMPORT_FULL_01" successfully completed at 10:58:52
Categories: DBA Blogs
