Fuad Arshad

This is just stuff I find on various technologies ranging from databases to cloud-related tech. This blog will have topics and content based on things I learn. I can also be found at http://www.twitter.com/fuadar

Upgrading Ansible Tower

Mon, 2019-01-14 10:54

ZDLRA System Activity Report

Mon, 2017-07-10 11:30
The Recovery Appliance, or ZDLRA, is a great way to ensure consistent backup and recovery of your Oracle Database, but as DBAs we often want to see what is happening behind the covers. As with every Oracle product, there is a GUI (Enterprise Manager) as well as a command-line based environment.
The ZDLRA development team just released a very nifty little script, which is available via

Zero Data Loss Recovery Appliance System Activity Script (Doc ID 2275176.1)

This script is meant to be used in conjunction with Enterprise Manager, offering a different way of looking at activity on the system.
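If you want to run it yourself, the general pattern is to download the script from the MOS note and execute it from SQL*Plus as the RASYS catalog owner on the Recovery Appliance database. A minimal sketch, where the connect string and the file name ra_activity.sql are placeholders (the actual file name comes with the MOS note download):

SQL> connect rasys@zdlra01
SQL> spool ra_activity.log
SQL> @ra_activity.sql
SQL> spool off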

The script is broken down into multiple sections, and the header is very important to read and understand:

--------
 ZDLRA Activity script: 09-Jun-2017
Oracle suggests Enterprise Manager as the proper tool for monitoring
 a Zero Data Loss Recovery Appliance.  However, this simple script
 provides a different perspective on activity on the system and
 can be another aid in understanding system activity.
 The intention is that this script is run daily and only provides
 a short history of events
-------------------------------

This is followed by the version of the ZDLRA software you are running:
VERSION       NAME
---------------------------------------------------------------------- ---------
22-05-2017 10:06:57  ZDLRA_12.1.1.1.8.201705_LINUX.X64_RELEASE       ZDLRA

In this case, it is release 12.1.1.1.8, which was released on the 22nd of May.

Then you will see the general state of the system. In a healthy environment there will be idling schedulers, and the oldest work will be displayed. Typically the oldest work should be a couple of hours or days old. If not, that might point to some discrepancy, and an SR should be opened to evaluate the situation.

STATE  SCHEDULERS  CURRENT_TIMER_ACTION  RESOURCE_WAIT_TASK  OLD_WORK
-----  ----------  --------------------  ------------------  -----------
ON     176         Idling                UNLIMITED           21-JUN-2017
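If you want to check this state outside the script, the catalog views owned by RASYS can be queried directly; a hedged example, assuming your software release exposes the RA_SERVER view described in the Recovery Appliance documentation:

SQL> select * from rasys.ra_server;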

This is followed by an examination of what is running on the system. On a regular system you will see both work and maintenance tasks.

TASK_TYPE      STATE      CURRENT_COUNT  LAST_EXECUTE_TIME     WORK_TYPE    MIN_CREATION
-------------  ---------  -------------  --------------------  -----------  ------------
PURGE_DUP      RUNNING    1              09-JUL-2017 10:46:47  Work         09-JUL-2017
CROSSCHECK_DB  TASK_WAIT  1                                    Maintenance  09-JUL-2017
VALIDATE       TASK_WAIT  1                                    Maintenance  09-JUL-2017

The next section displays the state of storage on the Recovery Appliance and can be used as a measure of how much space has been used on the appliance.

TOTAL_SPACE  USED_SPACE  FREESPACE   FREESPACE_GOAL
-----------  ----------  ----------  --------------
596048.371   321061.930  274986.051  5960.484


The next sections include the status of the replication server and the task history for the last day. This is particularly helpful for assessing how things are running.

There are also sections that show the state of each database and how many days of backups are available, both locally and replicated, as well as sections that show all the incidents in the system and any non-default configuration changes that were made.

While Enterprise Manager is still the preferred way to see, manage, and get alerted on the Recovery Appliance, this handy little script is a nice way to see the overall status of a Recovery Appliance really quickly.

Setting up Redo Transport with Standbys for ZDLRA with EM 13.2

Fri, 2017-05-05 10:00
Oracle ZDLRA, or Zero Data Loss Recovery Appliance, allows for transporting redo from the protected database and stores it securely inside the Recovery Appliance. This allows near-zero recovery point objectives to be met.
Setting up real-time redo transport involves setting up a wallet, setting the redo_transport_user parameter to the virtual private catalog user, and defining an archive log destination that points to the Recovery Appliance. This allows redo to be shipped to the appliance and stored for future restores and recoveries.
If the database environment does not have an associated standby, this procedure is very simple: Enterprise Manager can handle the setup, or it can be done easily via the command line as described in the documentation. Since we are changing redo_transport_user, this brings some interesting considerations for standby databases.
The default behavior for redo shipping to a standby database uses SYS as the user for transport and apply unless there is a value in redo_transport_user. The REDO_TRANSPORT_USER database initialization parameter can be used to select a different user for redo transport authentication by setting this parameter to the name of any user who has been granted the SYSOPER privilege. This user will need to be created in the protected database and needs to be exactly the same as the virtual private catalog user that has been created on the Recovery Appliance catalog. The procedure to set up a protected database that is part of a Data Guard configuration is detailed in the whitepaper here.
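For reference, on a protected database the whole configuration comes down to a handful of statements; a minimal sketch, where the destination number, the service string, and the VPC user RAVPC1 are illustrative placeholders, and the wallet holding the VPC user's credentials is assumed to already be in place:

SQL> alter system set redo_transport_user=RAVPC1 scope=both sid='*';
SQL> alter system set log_archive_dest_3='SERVICE="zdlra-scan:1521/zdlra" ASYNC NOAFFIRM VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=zdlra' scope=both sid='*';
SQL> alter system set log_archive_dest_state_3=enable scope=both sid='*';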
With Enterprise Manager 13.2 this procedure has now been automated and is accessible via the Data Guard administration menu.

In the Data Guard administration screen you will see a button to add the Recovery Appliance.


The Add Recovery Appliance screen will ask you for the Recovery Appliance to configure with this protected database and, since this is a standby configuration, will provide details on what will happen.

You will need to provide host credentials for the primary and standby databases. This will result in a job being submitted to perform the tasks.


You can then view the status of this procedure.


The end result will be the protected database as well as the standby database configured along with the Recovery Appliance.



The Enterprise Manager team is hard at work developing new features, and hopefully this new feature will allow for easier and faster configuration of protected databases that also have a standby associated with them.

Links for 2017-02-22 [del.icio.us]

Thu, 2017-02-23 02:00

Happy new year

Mon, 2017-01-02 16:04



It's the start of a new year, and as with all new years it's time to prepare and start learning afresh. Oracle as a database, as a technology, as a field is constantly being updated, from being a database to a platform to a cloud provider.
Learning is at the core of what we strive to do. As technology evolves, the opportunities to learn increase.

This year I had the pleasure of immersing myself in technologies that spanned on-premises and the cloud, and I hope to continue my learning curve.


#ThanksOTN OTN Appreciation Day: Recovery Appliance - Database Recovery on Steroids

Tue, 2016-10-11 15:36
Tim Hall of ORACLE-BASE.com came up with a brilliant idea to appreciate OTN for all the ways it helped shape the Oracle community. I have to say that I wholeheartedly agree, and here is my contribution for #ThanksOTN.

Recovery Appliance, or RA, or ZDLRA, is something I've been very passionate about since its release, and thus this very biased post on RA. Recovery Appliance is database backup and recovery on steroids. The ability to do full and incremental backups is something that every product boasts, so what's special about ZDLRA? It's the ability to sleep in peace; it's the ability to know my backups are good.
To quote this article from DBTA, which is about SQL Server and dates from 2009:
"To summarize, data deduplication is a great feature for backing up desktops, collaboration systems, web applications, and email systems. If I were a DBA or storage administrator, however, I'd skip deduplicating database files and backups and devote that expensive technology to the areas of my infrastructure where it can offer a strong ROI."


This notion really hasn't changed much, though de-duplication software has come a long way.
Why de-dup when you don't even send what you don't need? That's what the Recovery Appliance brings to the table. Send less data and recover as a whole: no more restoring L0s and then applying L1s and redo. Just ask to recover a virtual full, and the redo needed to get to that point will be sent. This makes the restore and recovery process automated and much faster than traditional backups.
This, coupled with automatic block checking and built-in validation, makes the RA a product I am personally proud to work with, and it truly makes my database recovery on steroids.

REDO_TRANSPORT_USER and Recovery Appliance (ZDLRA)

Tue, 2016-05-17 10:14

“REDO_TRANSPORT_USER” is an Oracle Database parameter that was introduced in release 11.1 to help transport redo from a primary to a standby using a user designated for log transport. The default configuration assumes the user “SYS” is performing the transport.
This distinction is very important, since the user “SYS” is available on every Oracle database, and as such most Data Guard environments created with default settings use “SYS” for log transport services.
The Zero Data Loss Recovery Appliance (ZDLRA) adds an interesting twist to this configuration. In order for real-time redo to work on a ZDLRA, “REDO_TRANSPORT_USER” needs to be set to the Virtual Private Catalog (VPC) user of the ZDLRA. For databases that are not participating in a Data Guard configuration this is not an issue, and a user does not need to be created on the protected database, i.e. the database being backed up to the ZDLRA. The important distinction comes into play if you already have a standby configured to receive redo: that process will break, since we have switched “REDO_TRANSPORT_USER” to a user that doesn't exist on the protected database. To avoid this issue if you already have a Data Guard configuration, you will need to create the VPC user in the primary database with the “create session” and “sysoper” privileges, and optionally “sysdg” (12c).
An example configuration is detailed below.
SQL> select * from v$pwfile_users;

USERNAME   SYSDB  SYSOP  SYSAS  SYSBA  SYSDG  SYSKM  CON_ID
---------  -----  -----  -----  -----  -----  -----  ------
SYS        TRUE   TRUE   FALSE  FALSE  FALSE  FALSE       0
SYSDG      FALSE  FALSE  FALSE  FALSE  TRUE   FALSE       0
SYSBACKUP  FALSE  FALSE  FALSE  TRUE   FALSE  FALSE       0
SYSKM      FALSE  FALSE  FALSE  FALSE  FALSE  TRUE        0

SQL> create user ravpc1 identified by ratest;
User created.

SQL> grant sysoper,create session to ravpc1;
Grant succeeded.

SQL> select * from v$pwfile_users;

USERNAME   SYSDB  SYSOP  SYSAS  SYSBA  SYSDG  SYSKM  CON_ID
---------  -----  -----  -----  -----  -----  -----  ------
SYS        TRUE   TRUE   FALSE  FALSE  FALSE  FALSE       0
SYSDG      FALSE  FALSE  FALSE  FALSE  TRUE   FALSE       0
SYSBACKUP  FALSE  FALSE  FALSE  TRUE   FALSE  FALSE       0
SYSKM      FALSE  FALSE  FALSE  FALSE  FALSE  TRUE        0
RAVPC1     FALSE  TRUE   FALSE  FALSE  FALSE  FALSE       0

SQL> spool off


Once you have ensured that the password file has the entries, copy the password file to the standby node(s), and then reset the destination state on the primary pointing to the standby by deferring and then re-enabling it:

SQL> alter system set log_archive_dest_state_X=defer scope=both sid='*';
SQL> alter system set log_archive_dest_state_X=enable scope=both sid='*';

This will ensure that you have redo transport working to both the Data Guard standby and the ZDLRA.
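A quick way to confirm that both destinations are healthy after re-enabling is to check the destination status; a sketch (destination numbers and names will vary):

SQL> select dest_id, status, error from v$archive_dest_status where status <> 'INACTIVE';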


References

Data Guard Standby Database log shipping failing reporting ORA-01031 and Error 1017 when using Redo Transport User (Doc ID 1542132.1)
MAA White Paper - Deploying a Recovery Appliance in a Data Guard environment
REDO_TRANSPORT_USER Reference
Redo Transport Services
Real-Time Redo for Recovery Appliance

Links for 2016-04-29 [del.icio.us]

Sat, 2016-04-30 02:00

Enterprise Manager 13c And Database Backup Cloud Service

Mon, 2016-03-21 10:35

The Oracle Database Backup Cloud Service allows for backup of an Oracle Database to the Oracle Cloud using RMAN. Enterprise Manager 13c provides a very easy way to configure the Oracle Database Backup Cloud Service. This post will walk you through the setup of the Oracle Database Backup Cloud Service as well as running backups from EM.


There is a new menu item to configure the Database Backup Cloud Service (DBCS) in the Backup & Recovery drop-down.


This will show you how to set up the Database Backup Cloud Service. If nothing was configured before, you will see the initial configuration screen.

Once you click on Configure Database Backup Cloud Service, you will be asked for the service (storage) and the identity domain that you want the backups to go to. This identity domain comes as part of the Database Backup Cloud Service, or as part of DBaaS, which can be purchased from Oracle Cloud.


Once the settings are saved, a popup will confirm that they have been saved.


After saving the settings, submit the configuration job. This will download the Oracle backup module to the hosts as well as configure the media management settings. The job will provide details and confirm all configuration is complete, and it will configure this on all nodes of a RAC, which can save a lot of time.

We have now completed the setup and can validate it by looking at the Configure Cloud Backup setup, which also has an option to test a cloud backup.

Let's ensure the settings are there. Checking in Backup Settings, the media management settings will show the location of the library, environment, and wallet. The Database Backup Cloud Service requires that all backups sent to it are encrypted.


You can also validate this by connecting to RMAN on the command line and running a "SHOW ALL".
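For reference, the relevant part of the "SHOW ALL" output is an SBT channel configuration along these lines; the library and OPC_PFILE paths below are illustrative and depend on where the configuration job installed the module:

RMAN> SHOW ALL;
...
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/u01/app/oracle/opc/libopc.so, ENV=(OPC_PFILE=/u01/app/oracle/opc/opcORCL.ora)';
...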

As you can see, we have confirmed that the media management setup is complete, as well as run a job to download the cloud backup module and configure it.
Now, as a final step, we will configure a backup and run an RMAN backup to the cloud. In the Backup and Recovery menu, schedule a backup. Fill out the pertinent settings and make sure you encrypt via a password, a wallet, or both. The backup that I scheduled was encrypted using a password.

On the second page, select the destination, which is the cloud in our case, and schedule it.


Validate that the settings are right and execute the job. You can monitor the job by clicking View Job. The new job interface in EM13c is really nice and allows you to see a graphical representation of execution time as well as a log of what is happening, side by side, like below.

Once the backup is completed you can see it not only through EM but also using RMAN on the command line.
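A sketch of that command-line check, together with how the same password-encrypted backup could be taken directly from the RMAN prompt (the password here is a placeholder):

RMAN> LIST BACKUP SUMMARY;
RMAN> SET ENCRYPTION ON IDENTIFIED BY MyBackupPwd ONLY;
RMAN> BACKUP DEVICE TYPE SBT DATABASE;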

There are a couple of things that I didn't show during the process. Parallelism during backups is important, as is compression.
Enterprise Manager 13c makes the already simple process of setting up backups to the Database Backup Cloud Service much easier.

Zero Data Loss Recovery Appliance - Basics

Mon, 2016-03-07 09:17

Oracle released the Zero Data Loss Recovery Appliance in 2014. The Recovery Appliance was designed to ensure efficient and consistent Oracle Database backups with a very key focus on recovery.
I am going to write a series of blogs, starting with this one, to discuss the fundamental architecture of the Recovery Appliance, the business case for it, and deployment and operational strategies around it.
So let's start with why an appliance. Oracle has had a very interesting strategy starting from way before the Sun acquisition. Exadata was a prime example of a database machine optimized for database workloads. The engineered systems family has since grown to include the smaller Oracle Database Appliance and, as the currently newest member of the family, the Zero Data Loss Recovery Appliance.

Now let's start with the basics. The Recovery Appliance, as the name suggests, is an appliance built to solve the data protection gaps that most customers face when trying to protect critical data, which most often resides in an Oracle Database. So why a recovery appliance, and why now? Over the years data storage has continued to grow, and so has the amount of data stored in databases; where once a couple of GBs of data was a big deal, today organizations are dealing with petabytes of database storage. Database backups are getting harder and harder to manage, and modern backup appliances focus on getting more out of the storage rather than providing a way to ensure recoverability; they don't have a good enough method to ensure that backups are valid. The Recovery Appliance is designed to solve these challenges and give customers an autopilot for their backups.
The name Recovery Appliance suggests how much emphasis was put on ensuring recoverability of the database, and hence controls were put in place to ensure everything is validated not just once but on a regular basis, with extensive reporting made available. Backups are a very important part of every enterprise, and the Recovery Appliance brings the ability to perform an incremental forever backup strategy. The incremental forever strategy, as the name suggests, provides for one full backup (level 0) followed by subsequent incremental (level 1) backups. This, in conjunction with protection policies that ensure a recovery window is maintained, provides the autopilot that ensures backups are successful with very little overhead on the machine taking the backup. This is done by offloading the de-duplication and compression activities to the Recovery Appliance.
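In RMAN terms, the incremental forever strategy boils down to a single level 0 backup when the database is first enrolled, followed by level 1 backups from then on; a minimal sketch, assuming the channel to the appliance is already configured:

RMAN> BACKUP DEVICE TYPE SBT INCREMENTAL LEVEL 0 DATABASE;  # one time, at enrollment
RMAN> BACKUP DEVICE TYPE SBT CUMULATIVE INCREMENTAL LEVEL 1 DATABASE;  # repeated from then on

The appliance then constructs virtual full backups from the incoming level 1 backups, which is what makes the strategy sustainable.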
So far I've used terminology like protection policies, de-duplication, and compression. While these terms are common in the backup space, too often people have a hard time making the connection. So let's start with a brief definition of each term.
Full Backups
When a complete backup of the database is taken, this is called a full backup; in a traditional environment this can be done daily or weekly, depending on the backup strategy. Traditional backup appliances rely on these fulls to provide de-duplication capabilities. Full backups require a lot of overhead, since all blocks have to be read from the I/O subsystem and processed by the database host.
Incremental Backups
Incremental backups, as the name suggests, are backups of only the data blocks that have changed since the previous backup. The Oracle Backup and Recovery User's Guide is the best place to understand incremental backups and how they can be employed in a backup strategy.
De-duplication
De-duplication is a technique to eliminate duplicate copies of repeating data. This technique is typically employed with flat files or text-based data, since repeating patterns are easier to find there. Incremental backups are a poor source for de-duplication, since there is not much repeating data in them, and the unique structure of the Oracle block makes it hard to get much de-duplication.
Compression
Compression is the act of shrinking data, and Oracle provides various methods of compressing data, both within the database and within the RMAN backup process itself.
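For example, compressing at the RMAN layer is as simple as the following (the algorithm can also be set persistently via CONFIGURE COMPRESSION ALGORITHM):

RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;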
In part 2 of this blog post I will talk about some of the terminology, like protection policies and the incremental forever strategy, as well as discuss the architecture of the Recovery Appliance.


Exadata 12c New Features RMOUG Slides

Mon, 2015-02-23 08:33
I've finally gotten around to posting my RMOUG slide deck on Slideshare. Hopefully this is helpful to folks looking at new features in Exadata.

Compliance and File Monitoring in EM12c

Mon, 2014-12-29 14:36
I was recently asked to help a customer set up file monitoring in Enterprise Manager, and I thought, since I haven't blogged in a while, this could be a good way to start back up again.
Enterprise Manager 12c provides a very nice compliance and file monitoring framework. There are many built-in frameworks included, such as for PCI DSS and STIG, but this how-to will focus only on a custom file monitoring framework.
Prior to setting up the compliance features, ensure that privilege delegation is set to sudo (or whatever privilege delegation provider you are using) and that credentials for real-time monitoring are set up for the hosts. All the prerequisites are explained here: http://docs.oracle.com/cd/E24628_01/em.121/e27046/install_realtime_ccc.htm#EMLCM12307
Also important in the above link is the explanation of how each OS interacts with these features.


Go to Enterprise → Compliance → Library

Create a New Compliance Standard



Name and Describe the Framework


You will see the framework created.


Now let's add some facets to monitor. In this example I selected a tnsnames.ora from my RDBMS home.


Below is a finished facet


Next, let's create a rule that uses that facet.

After selecting the right rule, let's add more color.

Let's add the facet that defines what file(s) will be monitored.

For this example I will select all aspects for testing, but ensure that you have sized your repository and understand all the consequences of each aspect.





After defining the monitoring actions, you have the option to filter and create monitoring rules based on specific events.
I will skip this for now
As we inch towards the end, we can authorize changes and each event manually, or incorporate a change management system that has a connector available in EM12c.

After we have completed this, we have an opportunity to review the settings and then make this rule production.
Now let's create a standard. We are creating a custom file monitoring standard with a Real-Time Monitoring (RTM) type standard, applicable to hosts.

We will add rules to the file monitor. In this case we will add the tnsnames rule we created to the standard. You can add standards as well as rules to a standard.

Next, let's associate targets with this standard.
You will be asked to confirm

Optionally, you can now add this to the compliance framework for one-stop monitoring.

Now that we have set everything up, let's test it. Here is the original tnsnames.ora.
Let's add another TNS entry.
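For illustration, the entry added looked something like this (the alias, host, and service name are hypothetical; any change to the monitored file will do):

TESTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = testhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = testdb.example.com))
  )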

Prior to the change, here is what the Compliance Results page looks like. As you can see, the evaluation was successful, and we are 100% compliant.



Now, if I go to Compliance → Real-time Observations, I can see that I didn't install the kernel module needed for granular control, so certain functionality cannot be used.

So I’m going to remove these from my rule for now.
Now I have made a whole bunch of changes, including even moving the file. It is all captured.

There are many changes here, and we can actually compare what changed.
If you select Unauthorized as the audited event for the change, the compliance score drops, and you can use it to see how many violations occur for a given rule.

In summary, EM12c provides a very robust framework for monitoring compliance standards, as well as custom-created frameworks, to ensure your auditors and IT managers are happy.


HeartBleed and Oracle

Fri, 2014-04-11 08:23
There are a lot of people asking about Heartbleed and how it has impacted the web.
Oracle has published MOS Note 1645479.1, which discusses all the products impacted and if and when fixes will be available.
The following blog post is also a good reference about the vulnerability.  https://blogs.oracle.com/security/entry/heartbleed_cve_2014_0160_vulnerability


