I've never had a tool I really liked that would extract a chunk of a large production database for testing purposes while respecting the database's foreign keys. This past week I finally got to write one: rdbms-subsetter.
rdbms-subsetter postgresql://user:passwd@host/source_db postgresql://user:passwd@host/excerpted_db 0.001
Getting it to respect referential integrity "upward" - guaranteeing every needed parent record would be included for each child row - took less than a day. Trying to get it to also guarantee referential integrity "downward" - including all child records for each parent record - was a quixotic idea that had me tilting at windmills for days. It's important, because parent records without child records are often useless or illogical. Yet trying to pull them all in led to an endlessly propagating process - percolation, in chemical engineering terms - that threatened to make every test database a complete (but extremely slow) clone of production. After all, if every row in parent table P1 demands rows in child tables C1, C2, and C3, and those child rows demand new rows in parent tables P2 and P3, which demand more rows in C1, C2, and C3, which demand more rows in their parent tables... I felt like I was trying to cut a little sweater out of a big sweater without snipping any yarns.
So I can't guarantee child records - instead, the final process prioritizes creating records that will fill out the empty child slots in existing parent records. But there will almost inevitably be some child slots left open when the program is done.
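The "upward" guarantee can be sketched in a few lines of plain Python. This is a toy model, not the actual rdbms-subsetter code: the table dicts, the `fks` map, and the `subset` function below are all hypothetical, but they show the idea of sampling a fraction of each table and then transitively pulling in every parent row that a sampled child references.

```python
# Toy sketch of "upward" referential closure (hypothetical data model,
# not rdbms-subsetter's actual API): sample a fraction of each table's
# rows, then drag in every parent row the sampled rows point at.
import random

tables = {
    "dept": {1: {"name": "sales"}, 2: {"name": "ops"}},
    "emp":  {10: {"dept_id": 1}, 11: {"dept_id": 2}, 12: {"dept_id": 1}},
}
# child table -> {fk column: parent table}
fks = {"emp": {"dept_id": "dept"}}

def subset(fraction, seed=0):
    random.seed(seed)
    picked = {t: set() for t in tables}
    # Initial random sample of every table.
    for t, rows in tables.items():
        k = max(1, int(len(rows) * fraction))
        picked[t].update(random.sample(sorted(rows), k))
    # Upward pass: every picked child row drags in its parent row,
    # transitively, until no new parents appear.
    queue = [(t, pk) for t in picked for pk in picked[t]]
    while queue:
        t, pk = queue.pop()
        for col, parent in fks.get(t, {}).items():
            ppk = tables[t][pk][col]
            if ppk not in picked[parent]:
                picked[parent].add(ppk)
                queue.append((parent, ppk))
    return picked

sample = subset(0.5)
# Every sampled emp row's dept_id is guaranteed to be in the dept subset:
assert all(tables["emp"][pk]["dept_id"] in sample["dept"] for pk in sample["emp"])
```

The "downward" direction has no such terminating pass, which is exactly the percolation problem described above.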
I've been using it against one multi-GB, highly interconnected production data warehouse, so it's had some testing, but your bug reports are welcome.
Like virtually everything else I do, this project depends utterly on SQLAlchemy.
I developed this for use at 18F, and my choice of a workplace where everything defaults to open was wonderfully validated when I asked about the procedure for releasing my 18F work to PyPI. The procedure is - and I quote - "Just go for it."
In the spirit of Movember, the database bloggers are chipping in with their fair share of contributions - not only with a mo, but also with blog posts. This Log Buffer Edition rounds them all up.
Will the REAL Snap Clone functionality please stand up?
Oracle Linux images for Docker released.
In-depth look into Oracle API Catalog (OAC) 12c.
Patch Set Update: Hyperion Strategic Finance 126.96.36.199.504.
Rollback to Savepoint Does Not Release Locks.
Overcoming the OPENQUERY Record Limit for AD.
Configuring Critical SQL Server Alerts.
What is Biml? – Level 1
Database Configuration Management for SQL Server
Free eBook: SQL Server Backup and Restore.
Set up a Memory Quota for SQL Server Memory Optimized Databases.
In C (and C++) you can specify that a struct member should take a specific number of bits of storage by declaring it as “uint32_t foo:4;” rather than just “uint32_t foo;”. In this example, the former uses 4 bits while the latter uses 32 bits. This can be useful for packing many bit fields together.
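The same idea can be sketched from Python using ctypes bit fields, which mirror the C declaration (the struct and field names here are made up for illustration):

```python
# Mirror a C bit-field struct from Python via ctypes. The layout below is
# a hypothetical example, roughly equivalent to:
#   struct { uint32_t foo:4; uint32_t bar:4; uint32_t rest:24; };
import ctypes

class Packed(ctypes.Structure):
    _fields_ = [
        ("foo",  ctypes.c_uint32, 4),   # 4 bits: holds values 0..15
        ("bar",  ctypes.c_uint32, 4),
        ("rest", ctypes.c_uint32, 24),
    ]

p = Packed()
p.foo = 15                       # the largest value a 4-bit field can hold
print(ctypes.sizeof(Packed))     # all three fields share one 32-bit unit: 4 bytes
```

All 32 bits of the three fields fit in a single storage unit, so the whole struct occupies the same 4 bytes a bare `uint32_t` would.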
Oracle AVDF post-installation configuration.
Everything about MySQL Users and Logins You Didn’t Know and Were Afraid to Ask.
Setting up a MySQL Enterprise Monitor 3 Test Environment.
While playing with MySQL 5.7.5 on POWER8, I came across a rather interesting bug (74775 – and this is not the only one… I think I have a decent amount of auditing and patching to do now) which made me want to write a bit on memory barriers and the volatile keyword.
In Oracle 12c, a new database auditing foundation has been introduced. Oracle Unified Auditing changes the fundamental auditing functionality of the database. In previous releases of Oracle, there were separate audit trails for each individual component. Unified Auditing consolidates all auditing into a single repository and view. This provides a two-fold simplification: audit data can now be found in a single location, and all audit data is in a single format. Oracle 12c Unified Auditing supports –
- Standard database auditing
- SYS operations auditing (AUDIT_SYS_OPERATIONS)
- Fine Grained Audit (FGA)
- Data Pump
- Oracle RMAN
- Oracle Label Security (OLS)
- Database Vault (DV)
- Real Application Security (RAS)
- SQL*Loader Direct Load
Unified Auditing comes standard with Oracle Enterprise Edition; no additional license is required. It is installed by default, but not fully enabled by default. There are two modes of operation to allow for a transition from pre-12c auditing –
- Mixed Mode – default 12c option. All pre-12c log and audit functionality and configurations work as before. New Unified Auditing functionality is also available. Log data is available in both the traditional locations as well as a new view SYS.UNIFIED_AUDIT_TRAIL. Also, log data continues to be written in clear text when Syslog is used.
- Full Mode or PURE mode – enabled only by stopping the database and relinking the Oracle kernel. Once enabled, pre-12c log and audit configurations are ignored, and audit data is saved using the Oracle SecureFiles, which is a proprietary file format. Because of this, Syslog is not supported. All audit data can be found in the view SYS.UNIFIED_AUDIT_TRAIL.
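A quick way to tell which of the two modes a given database is running in is to query V$OPTION (a documented check); it reports TRUE only once the kernel has been relinked for pure mode:

```sql
-- FALSE = mixed mode (the 12c default); TRUE = pure/full Unified Auditing
SELECT value FROM v$option WHERE parameter = 'Unified Auditing';
```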
Figure 1 – Auditing Pre-Oracle 12c
Figure 2 – Oracle 12c Unified Auditing – Mixed Mode
Figure 3 – Oracle 12c Unified Auditing – Pure Mode
Figure 4 – Oracle 12c Unified Audit
If you have questions, please contact us at mailto:firstname.lastname@example.org
Reference
More information on Unified Auditing can be found here:
- Oracle 12c Unified Auditing, Integrigy Whitepaper
- 12c Unified Auditing, Oracle Database Security Guide 12c Release 1 (12.1) http://docs.oracle.com/database/121/DBSEG/auditing.htm#DBSEG1023
- Predefined Unified Audit Policies, Oracle Database Security Guide 12c Release 1 (12.1) http://docs.oracle.com/database/121/DBSEG/audit_config.htm#DBSEG356
Over the last two years my organisation has been working with multiple partners, helping them create partner integrations and showing them how to use the variety of SOAP APIs available for Sales Cloud integrators. Based on this work I've been working with development with the aim of simplifying some of the API calls which require multiple calls to achieve a single objective. For example, to create a Customer Account you often need to create the Location first, then the contacts, and then the customer account. In Sales Cloud R9 you will have a new subset of APIs which will simplify this.
So that you all have a head-start in learning the API, I've worked with our documentation people and we've just released a new whitepaper/doc onto Oracle Support explaining the new API in lovely, glorious detail. It also includes some sample code for each of the operations you might use, and some hints and tips!
Enjoy and feel free to post feedback
You can download the documentation from Oracle Support; the document is called "Using Simplified SOAP WebServices", its Doc ID is 1938666.1, and this is a direct link to the document.
We share our skills to maximize your revenue!
Additional new material WebLogic & ADF Community, from the WebLogic Partner Community EMEA.
Secure Oracle MAF applications with Oracle Mobile Security Suite, from The Oracle Mobile Platform Blog.
From Geertjan's Blog: Integrated Cheat Sheets for NetBeans IDE 8.0.1
JavaOne Replay: 'Java EE 8 Overview', from The Aquarium.
EM Express 12c Database Administration Page FAQ, from Oracle DB/EM Support.
From Data Integration: ODI 12c - Spark SQL and Hive?
From DaaS (Data as a Service): Introducing the Global Data Guide!
Patch Set Update: EPM Architect 188.8.131.52.501, from Business Analytics - Proactive Support.
New video on YouTube:
Oracle Planning and Budgeting Cloud Simplified Interface
From That Jeff Smith: Setting Up Oracle SQL Developer on a Mac.
Keep This Python Cheat Sheet on Hand When Learning to Code, from LifeHacker.
Announcing the release of Oracle Solaris Studio 12.4, from Solaris and Systems Information for ISVs.
From Darryl Gove's blog: Performance Made Easy.
New releases announced at the MySQL on Windows blog:
MySQL Connector/NET 6.6.7 has been released.
MySQL Connector/NET 6.7.6 has been released
MySQL Connector/NET 6.8.4 has been released
MySQL Connector/NET 6.9.5 has been released
From Oracle E-Business Suite Technology:
New Critical Patches Added to EBS 12.2.3 and R12.2.4 Release Notes
E-Business Suite 12.2.4 VM Virtual Appliances Now Available
From Oracle E-Business Suite Support Blog:
Pricing Based on Secondary UOM
Webcast: Overview of Intercompany Transactions
R12: Resolve Supplier Bank Account Issues When Creating Payment For Customer Refunds
Have you seen the new Interactive Troubleshooting (IT) Flows for Procurement?
Webcast: Respecting Ship Set Constraints Rapid Planning
Subledger Accounting for Payroll
Webcast: Topics in Inventory Convergence and Process Manufacturing
Service Contracts APIs are Finally Here!
I found a broken link to an Oracle help document in one of my posts and when I went to the Oracle 12c database documentation to find the new URL to put in my post I found that Oracle had totally revamped their online manuals.
Here is a link to the new high-level Oracle help site: url
I’ve only looked at it for a few minutes and can’t say whether I like it or not. You can still download the manuals in PDF format so that remains familiar. It looks like the new site integrates documentation across most or all of Oracle’s products in a similar format and that’s pretty cool.
Anyway, I just saw this and thought I would pass it along.
Join us for another Oracle Customer Reference Forum on Tuesday, November 18, 2014, at 9:00 a.m. Pacific Time / 11:00 a.m. Central Time.
The leadership team at Ovation Brands—CFO Keith Kravcik and CIO Patrick Benson—will talk about why they chose to implement Oracle ERP Cloud and Oracle Planning & Budgeting Cloud to operate an integrated, state-of-the-art finance system able to support Ovation’s ambitious reinvention strategy.
Ovation Brands currently operates 334 restaurants in 35 states, comprising 324 steak-buffet restaurants and 10 Tahoe Joe’s Famous Steakhouse restaurants. The restaurants are principally operated under the Old Country Buffet®, HomeTown® Buffet and Ryan’s® brands. Ovation employs approximately 17,000 team members and serves approximately 100 million customers annually. Corporate headquarters are based in Greer, SC, with a Corporate Support Center located in Eagan, MN. For more information, visit www.OvationBrands.com.
Invite your customers and prospects. Register now to attend the live Forum on Tuesday, November 18 at 9:00 a.m. Pacific Time / 11:00 a.m. Central Time and learn more directly from Ovation Brands.
The world has changed to one that’s always-on, always-engaged, requiring organizations to rapidly become “digital businesses.” In order to thrive and survive in this new economy, having the right digital experience and engagement strategy - and the speed of execution to match - is crucial.
But where do you start? How do you accelerate this transformation?
Attend this roundtable to hear directly from leading industry analysts from Forrester Research, Inc., Primitive Logic, client companies, and solution experts as they outline best-practice strategies to seize the full potential of a digital experience and engagement platform. Gain insights on how your business can deliver exceptional, engaging digital experiences and drive the next wave of revenue growth, service excellence and business efficiency.
We look forward to your participation at the Solution Roundtable.
Register now or call 1.800.820.5592 ext. 12873.
Register Now December 4, 2014
10:30 a.m. - 11:45 a.m. Hyatt Regency Orange County
11999 Harbor Blvd Garden Grove, CA 92840 Featuring:
Vice President and Principal Analyst, Forrester Research, Inc.
Here you can download the complete sample application - adfbpm11gr4.zip. This application implements a method based on the BPM API, where a list of outcomes for the currently selected Task ID is fetched from the BPM engine:
Each outcome is represented by ActionType. I'm constructing a list of outcomes to be used on the ADF UI. There is an ADF UI iterator component on the fragment; this component generates dynamic buttons based on the constructed set of outcomes. The outcome name is used to set the button name, and the outcome itself is used as an attribute value for the generic action listener method:
The generic action listener method is responsible for parsing the outcome name, initialising a payload if needed, and executing the BPM API to submit the outcome for further task processing:
We can check how this works. In the sample application there is a human task AssignEmployee with a SUBMIT outcome:
The task action button is generated accordingly - there is only one Submit action button for the selected task:
The next human task, ApproveEmployee, is set with two outcomes - APPROVE and REJECT:
Based on this set of outcomes, two buttons are present now - Approve and Reject:
There are currently two types of Advisor Webcasts:
- Product Specific Webcasts, which share best practices, troubleshooting guidance, and release information;
- Live Essentials Webcasts, designed to help you better utilize Oracle's support tools and procedures.
November Featured Webcasts by Product Area:
- Database: Oracle Database 12.1 Support Update for Linux on System z - November 20 - Enroll
- Database: Oracle 12c: New Database Initialization Parameters - November 26 - Enroll
- E-Business Suite: Topics in Inventory Convergence and Process Manufacturing - November 12 - Enroll
- E-Business Suite: Oracle Receivables Posting & Reconciliation Process in R12 - November 13 - Enroll
- E-Business Suite: Respecting Ship Set Constraints Rapid Planning - November 13 - Enroll
- E-Business Suite: Overview of Intercompany Transactions - November 18 - Enroll
- E-Business Suite: Empowering Users with Oracle EBS Endeca Extensions - November 20 - Enroll
- Engineered Systems: Oracle Exadata Hybrid Columnar Compression (Mandarin only) - November 20 - Enroll
- JD Edwards EnterpriseOne: Introduction and Demo on Multi Branch Plant MRP Planning (R3483) - November 11 - Enroll
- JD Edwards EnterpriseOne: 9.1 Enhancement - Inventory to G/L Reconciliation Process - November 12 - Enroll
- JD Edwards EnterpriseOne: Installation and Setup of the Web Development Client - November 13 - Enroll
- JD Edwards EnterpriseOne: Using JD Edwards EnterpriseOne Equipment Billing - November 19 - Enroll
- JD Edwards EnterpriseOne: 2014 1099 Year End Refresher Webcast - November 20 - Enroll
- JD Edwards World: Brazil Localization - Ficha Conteúdo de Importação (FCI) (Portuguese) - November 18 - Enroll
- Middleware: WLS - First Steps to a Smooth Support Inquiry + 3 (Japanese only) - November 19 - Enroll
- Middleware: WLS - Communication Mechanism Between the Admin Server and Managed Servers (Mandarin only) - November 26 - Enroll
- Oracle Business Intelligence: Using OBIEE with Big Data - November 25 - Enroll
- PeopleSoft Enterprise: Financial Aid Regulatory 2015-2016 Release 1 (9.0 Bundle #35) - November 12 - Enroll
- PeopleSoft Enterprise: PeopleSoft 1099 Update for 2014 - Get Your Copy B’s Out on Time! - November 13 - Enroll
- PeopleSoft Enterprise: Payroll for North America - Preparing for Year-End Processing and Annual Tax Reporting - November 19 - Enroll
Next to that, there is also the option to purchase a subscription (initially for 3 years, after which it can be renewed annually) allowing you to download updates for OUM.
OUM aims at supporting the entire Enterprise IT lifecycle, including the Cloud.
I wrote a while ago about converting Oracle’s superb OBIEE SampleApp from a VirtualBox image into an EC2-hosted instance. I’m pleased to announce that Oracle have agreed for us to make the image (AMI) on Amazon available publicly. This means that anyone who wants to run their own SampleApp v406 server on Amazon’s EC2 cloud service can do so.
Before getting to the juicy stuff there are some important points to note about access to the AMI, which you are implicitly bound by if you use it:
- In accessing it you’re bound by the same terms and conditions that govern the original SampleApp
- SampleApp is only ever for use in your own development/testing/prototyping/demonstrating with OBIEE. It must not be used as the basis for any kind of Productionisation.
- Neither Oracle nor Rittman Mead provide any support for SampleApp or the AMI, nor any warranty against issues caused through their use.
- Once launched, the server will be accessible to the public, and it's your responsibility to secure it as such.
- Create yourself an AWS account, if you haven’t already. You’ll need your credit card for this. Read more about getting started with AWS here.
- Request access to the AMI (below)
- Launch the AMI on your AWS account
- Everything starts up automagically. After 15-20 minutes, enjoy your fully functioning SampleApp v406 instance, running in the cloud!
You can get an estimate of the cost involved using the Amazon Calculator.
As a rough guide, as of November 2014 an “m3.large” instance costs around $4 a day – but it’s your responsibility to check pricing and commitments.
Be aware that once a server is created you’ll incur costs on it right through until you “terminate” it. You can “stop” it (in effect, power it off) which reduces the running costs but you’ll still pay for the ‘disk’ (EBS volume) that holds it. The benefit of this though is that you can then power it back up and it’ll be as you left it (just with a different IP).
You can track your AWS usage through the AWS page here.
Security
- Access to the instance’s command line is through SSH as the oracle user using SSH keys only (provided by you when you launch the server) – no password access
- You cannot ssh to the server as root; instead connect as oracle and use sudo as required.
- The ssh key does not get set up until the very end of the first boot sequence, which can be 20 minutes. Be patient!
- All the OBIEE/WebLogic usernames and passwords are per the stock SampleApp v406 image, so you are well advised to change them. Otherwise, if someone finds your instance running, they'll be able to access it.
- There is no firewall (iptables) running on the server. Since this is a public server you’d be wise to make use of Amazon’s Security Group functionality (in effect, a firewall at the virtual hardware level) to block access on all ports except those necessary.
For example, you could block all traffic except 7780, and then enable access on port 22 (SSH) and 7001 (Admin Server) just when you need to access it for admin.
- You first need to get access to the AMI, through the form below. You also need an active AWS account.
- Launch the server:
- From the AWS AMI page locate the SampleApp AMI using the details provided when you request access through the form below. Make sure you are on the Ireland/eu-west-1 region. Click Launch.
- Select an Instance Type. An “m3.large” size is a good starting point (this site is useful to see the spec of all instances).
- Click through the Configure Instance Details, Add Storage, and Tag Instance screens without making changes unless you need to.
- On the Security Group page select either a dedicated security group if you have already configured one, or create a new one.
A security group is a firewall that controls traffic to the server regardless of any software firewall configured or not on the instance. By default only port 22 (SSH) is open, so you’ll need to open at least 7780 for analytics, and 7001 too if you want to access WLS/EM as well
Note that you can amend a security group’s rules once the instance is created, but you cannot change which security group it is bound to. For ad-hoc purposes I’d always use a dedicated security group per instance so that you can change rules just for your server without impacting others on your account.
- Click on Review and Launch, check what you’ve specified, and then click Launch. You’ll now need to either specify an existing SSH key pair or generate a new one. It’s vital that you get this bit right, otherwise you’ll not be able to access the server. If you generate a new key pair, make sure you download it (it’ll be a .pem file).
- Click Launch Instances
You’ll get a hyperlinked Instance ID; click on that and it’ll take you to the Instances page filtered for your new server.
Shortly you’ll see the server’s public IP address shown.
- OBIEE is configured to start automagically at boot time along with the database. This means that in theory you don’t need to actually access the server directly. It does take 15-20 minutes on first boot to all fire up though, so be patient.
- The managed server is listening on port 7780, and the admin server on 7001. If your server IP is 184.108.40.206 the URLs would be:
- Analytics: http://184.108.40.206:7780/analytics
- WLS: http://184.108.40.206:7001/console
- EM: http://184.108.40.206:7001/em
The server is a stock SampleApp v406 image, with a few extras:
- obiee and dbora services configured and set to run at bootup. Control obiee using:
sudo service obiee status
sudo service obiee stop
sudo service obiee start
sudo service obiee restart
- screen installed with a .screenrc setup
To get access to the AMI, please complete this short form and we will send you the AMI details by email.
By completing the form and requesting access to the AMI, you are acknowledging that you have read and understood the terms and conditions set out by Oracle here.
In one of my last blogs, "Oracle Audit Vault and Database Firewall (AVDF) 12.1 - installation on VirtualBox", I explained how to install AVDF on VirtualBox. Since some of you asked for a blog on how to configure AVDF, I decided to write this posting on AVDF post-installation configuration. This one only concerns the post-installation phase; a third blog will be dedicated to practical cases concerning the configuration of Database Firewall Policies.
Specifying the Audit Vault Server Certificate and IP Address
You must associate each Database Firewall with an Audit Vault Server by specifying the server's certificate and IP address, so that the Audit Vault Server can manage the firewall. If you are using a resilient pair of Audit Vault Servers for high availability, you must associate the firewall to both servers.
1. Log in to the Audit Vault Server as an administrator, and then click the Settings tab.
2. In the Security menu, click Certificate. The server's certificate is displayed.
3. Copy the certificate.
4. Log in to the Database Firewall administration console.
5. In the System menu, click Audit Vault Server.
6. Enter the IP Address of the Audit Vault Server.
7. Paste the Audit Vault Server's Certificate in the next field.
8. Click Apply.
Registering Oracle Secured Target
Ensure That Auditing Is Enabled in the Oracle Secured Target
Database
oracle@vmtest12c:/home/oracle/ [DUMMY] SOUK
******** dbi services Ltd. ********
STATUS : OPEN
DB_UNIQUE_NAME : SOUK
OPEN_MODE : READ WRITE
LOG_MODE : NOARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : NO
VERSION : 126.96.36.199.0
oracle@vmtest12c:/home/oracle/ [SOUK] sqlplus "/as sysdba"

SQL*Plus: Release 188.8.131.52.0 Production on Sun Sep 15 22:35:49 2013
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 184.108.40.206.0 - 64bit Production

SQL>
SQL> SHOW PARAMETER AUDIT_TRAIL

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------------------
audit_trail                          string      DB
If the output of the SHOW PARAMETER command is NONE, or if it is an auditing value that you want to change, then you can change the setting as follows. For example, if you want to change it to XML and you are using a server parameter file, you would enter the following:
SQL> ALTER SYSTEM SET AUDIT_TRAIL=XML SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP
Registering Hosts in the Audit Vault Server
1. Log in to the Audit Vault Server console as an administrator.
2. Click the Hosts tab. A list of the registered hosts, if present, appears in the Hosts page. To control the view of this list see "Working With Lists of Objects in the UI".
3. Click Register.
4. Enter the Host Name and Host IP address.
5. Click Save.
Deploying and Activating the Audit Vault Agent on Secured Target Hosts
1. Log in to the Audit Vault Server console as an administrator.
2. Click the Hosts tab, and then from the Hosts menu, click Agent.
3. Click “Download Agent” and save the agent.jar file to a location of your choice.
4. Using an OS user account, copy the agent.jar file to the secured target's host computer.
5. On the host machine, set JAVA_HOME to the installation directory of the jdk1.6 (or higher version), and make sure the java executable corresponds to this JAVA_HOME setting.
6. Start a command prompt with Run as Administrator. In the directory where you placed the agent.jar file, extract it by running:
java -jar agent.jar -d Agent_Home
Request agent Activation
To request activation of the Audit Vault Agent:
1. On the secured target host computer, go to the following directory:
2. Agent_Home is the directory created in step 7 above. Run the following command:
In this step, you approve the agent activation request in the Audit Vault Server, then start the agent on the secured target host machine.
To activate and start the agent:
1. Log in to the Audit Vault Server console as an administrator.
2. Click the Hosts tab.
3. Select the host you want to activate, and then click Activate.
This will generate a new activation key under the Agent Activation Key column. You can only activate a host if you have completed the procedures in Step 1: Deploy the Audit Vault Agent on the Host Machine. Otherwise the Agent Activation Status for that host will be No Request.
4. Change directory as follows:
Agent_Home is the directory created in step 7 above.
5. On the secured target host machine, run the following command and provide the activation key from Step 3:
./agentctl start -k key
Note: the -k argument is not needed after the initial agentctl start command.
To stop or start the Audit Vault Agent after the initial activation and start, run one of the following commands from the Agent_Home/bin directory on the secured target host machine:
./agentctl stop
./agentctl start
Changing the Logging Level for the Audit Vault Agent
The logging level you set affects the amount of information written to the log files. You may need to take this into account for disk space limitations.
The following logging levels are listed in order of the amount of information written to log files, with debug providing the most information:
- error - Writes only error messages
- warn - (Default) Writes warning and error messages
- info - Writes informational, warning, and error messages
- debug - Writes detailed messages for debugging purposes
To change the logging level for an Audit Vault Agent:
1. Ensure that you are logged into AVCLI on the Audit Vault Server.
2. Run the ALTER HOST command. The syntax is as follows:
ALTER HOST host_name SET LOGLEVEL=av.agent:log_level
In this specification:
- host_name: The name of the host where the Audit Vault Agent is deployed.
- log_level: Enter a value of info, warn, debug, or error.
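For example, to turn on debug logging for the agent on a host registered as sales_db_host (a hypothetical host name), you would run the following in AVCLI:

```sql
ALTER HOST sales_db_host SET LOGLEVEL=av.agent:debug
```

Remember to drop it back to warn afterwards, since debug writes the most log data.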
1. If you will collect audit data from a secured target, perform stored procedure auditing (SPA) or entitlements auditing, or enable database interrogation, create a user account on the secured target with the appropriate privileges to allow Oracle AVDF to access the required data.
Setup scripts: Scripts are available to configure user account privileges for these secured target types:
- "Oracle Database Setup Scripts"
- "Sybase ASE Setup Scripts"
- "Microsoft SQL Server Setup Scripts"
- "IBM DB2 for LUW Setup Scripts"
- "MySQL Setup Scripts"
- "Sybase SQL Anywhere Setup Scripts"
Linux secured targets: Assign the Oracle AVDF user to the log_group parameter in the Linux /etc/audit/auditd.conf configuration file. This user must have execute permission on the folder that contains the audit.log file (default folder is /var/log/audit).
Other types of secured targets: You must create a user that has the appropriate privileges to access the audit trail required. For example, for a Windows secured target, this user must have administrative permissions in order to read the security log.
Note: Oracle AVDF does not accept user names with quotation marks. For example, "JSmith" would not be a valid user name for an Audit Vault and Database Firewall user account on secured targets.
2. Log in to the Audit Vault Server console as an administrator.
3. Click the Secured Targets tab. The Secured Targets page lists the configured secured targets to which you have access. You can sort or filter the list of targets. See "Working With Lists of Objects in the UI".
4. Click Register, and in the Register Secured Target page, enter a name and description for the new target.
5. In the Secured Target Location field, enter the connect string for the secured target. See "Secured Target Locations (Connect Strings)" for the connect string format for a specific secured target type. For example, for Oracle Database, the string might look like the following:
6. In the Secured Target Type field, select the secured target type, for example, Oracle Database.
7. In the User Name, Password, and Re-enter Password fields, enter the credentials for the secured target user account you created in Step 1.
8. If you will monitor this secured target with a Database Firewall, in the Add Secured Target Addresses area, for each available connection of this database enter the following information, and then click Add.
- IP Address (or Host Name)
- Port Number
- Service Name (Oracle Database only)
9. If required, enter values for Attribute Name and Attribute Value at the bottom of the page, and click Add. Collection attributes may be required by the Audit Vault Agent depending on the secured target type. See "Collection Attributes" to look up requirements for a specific secured target type.
10. If you will monitor this secured target with a Database Firewall, you can increase the processing resource for this secured target by adding the following Collection Attribute:
- Attribute Name: MAXIMUM_ENFORCEMENT_POINT_THREADS
- Attribute Value: A number between 1 - 16 (default is 1)
This defines the maximum number of Database Firewall processes (1 - 16) that may be used for the enforcement point associated with this secured target. You should consider defining this if the number of secured targets you are monitoring is less than the number of processing cores available on the system running the Database Firewall. Setting a value when it is not appropriate wastes resources.
11. Click Save.
Configuring an Audit Trail in the Audit Vault Server
In order to start collecting audit data, you must configure an audit trail for each secured target in the Audit Vault Server, and then start the audit trail collection manually. Before configuring an audit trail for any secured target, you must:
- Add the secured target in the Audit Vault Server. See "Registering or Removing Secured Targets in the Audit Vault Server" for details.
- Register the secured target host machine and deploy and activate the agent on that machine. See "Registering Hosts".
This procedure assumes that the Audit Vault Agent is installed on the same computer as the secured target.
To configure an audit trail for a secured target:
1. Log in to the Audit Vault Server console as an administrator.
2. Click the Secured Targets tab.
3. Under Monitoring, click Audit Trails. The Audit Trails page appears, listing the configured audit trails and their status.
4. In the Audit Trails page, click Add.
5. From the Collection Host drop-down list, select the host computer of the secured target.
6. From the Secured Target Name drop-down list, select the secured target's name.
7. From the Audit Trail Type drop-down list, select one of the following:
- EVENT LOG
- TRANSACTION LOG
See Table B-13 for details on which type(s) of audit trails can be collected for a specific secured target type, and "Data Collected for Each Audit Trail Type" for descriptions of data collected.
8. In the Trail Location field, enter the location of the audit trail on the secured target computer, for example, sys.aud$. The trail location depends on the type of secured target. See "Audit Trail Locations" for supported trail locations. Note: If you selected DIRECTORY for Audit Trail Type, the Trail Location must be a directory mask.
9. If you have deployed plug-ins for this type of secured target, select the plug-in in the Collection Plug-in drop-down list. For more information on plug-ins, see "About Agent Plug-ins".
10. Click Save.
Starting and Stopping Audit Trails in the Audit Vault Server
To start or stop audit trail collection for a secured target:
1. Log in to the Audit Vault Server console as an administrator.
2. Click the Secured Targets tab.
3. Click Audit Trails.
I do hope this blog will help you deploy AVDF. Do not hesitate to post comments if you have any questions.
select to_char( sum( power(100,rownum-1)*deptno ),
                'FM99G99G99G99G99',
                'NLS_NUMERIC_CHARACTERS=,;' ) deptlist
from dept;

DEPTLIST
---------------
40;30;20;10
I also wrote about distinct listagg. The same applies for sum distinct.
select to_char( sum(power(1e3,d-1)*deptno),
                'FM999G999G999', 'NLS_NUMERIC_CHARACTERS=,;' ) deptsum,
       to_char( sum(distinct power(1e2,d-1)*deptno),
                'FM99G99G99', 'NLS_NUMERIC_CHARACTERS=,;' ) deptsumdist,
       to_char( sum(power(1e1,d-1)),
                'FM9G9G9', 'NLS_NUMERIC_CHARACTERS=,;' ) deptcount,
       to_char( sum(power(1e4,c-1)*comm),
                'FM9999G9999G9999G9999G9999', 'NLS_NUMERIC_CHARACTERS=,;' ) commlist
from (
  select comm, deptno,
         dense_rank() over (order by deptno) d,
         dense_rank() over (order by comm) c
  from emp);

DEPTSUM      DSUMDIST COUNT COMMLIST
------------ -------- ----- -------------------
180;100;030  30;20;10 6;5;3 1400;0500;0300;0000
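The digit-packing idea behind these queries is easy to verify outside the database. A minimal sketch in Python, assuming the classic SCOTT deptno values (10, 20, 30, 40): each value gets its own power-of-100 "slot", so a plain sum concatenates the list, and peeling off two digits at a time plays the role of the 'FM99G99G99G99' format mask with ';' as the group separator.

```python
# Pack each value into its own power-of-100 slot, then sum.
deptnos = [10, 20, 30, 40]  # sample SCOTT dept numbers (assumed)

packed = sum(100 ** i * d for i, d in enumerate(deptnos))
print(packed)  # 40302010

# Unpack two digits at a time - the analogue of the format mask.
groups = []
n = packed
while n:
    groups.append(f"{n % 100:02d}")
    n //= 100
print(";".join(reversed(groups)))  # 40;30;20;10
```

This also makes clear why the slot width (100, 1e3, 1e4 ...) must exceed the largest value being packed, exactly as the format masks above grow with the data.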
When you look into V$RECOVERY_AREA_USAGE, you see a strange row at the bottom:
SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         10.18                         0              73          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0
Curious what that could be? You will see values other than zero on a Logical Standby Database:
SQL> connect sys/oracle@logst as sysdba
Connected.
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
LOGICAL STANDBY

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         14.93                         0               9          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                  2.03                         0              26          0
AUXILIARY DATAFILE COPY                  0                         0               0          0
In contrast to a Physical Standby Database, this one writes not only into standby logs but also into online logs while in the standby role. That leads to two different kinds of archive logs:
When DML (like insert and update) is done on the primary 1), that leads to redo entries in the online logs 2), which are simultaneously shipped to the standby and written there into standby logs 2) as well. The online logs on the primary and the standby logs on the standby will eventually be archived 3). So far that is the same for both physical and logical standby. But now the difference: logical standby databases do SQL Apply 4) by log mining the standby or the archive logs that came from the primary. That generates similar DML on the standby, which in turn leads LGWR there to write redo into online logs 5) that will eventually get archived 6) as well.
A logical standby could do recovery only with its own archive logs (if there was a backup taken before) but not with the foreign archive logs. Therefore, those foreign archive logs can and do get deleted automatically. V$ARCHIVED_LOG and V$FOREIGN_ARCHIVED_LOG can be queried to monitor the two different kinds of logs.
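To watch the two populations side by side, a quick sketch of such a monitoring query on the logical standby (a sketch only; the counts will of course differ on your system):

```sql
-- Local archive logs (usable for recovery) vs. foreign ones
-- (shipped from the primary, deleted automatically after apply)
select 'LOCAL' kind, count(*) logs from v$archived_log
union all
select 'FOREIGN', count(*) from v$foreign_archived_log;
```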
That was one topic of the course Oracle Database 12c: Data Guard Administration that I’m delivering as an LVC this week, by the way. Hope you find it useful :-)
Tagged: Data Guard, High Availability
The AIOUG meet “SANGAM – Meeting of Minds” is the largest independent Oracle event in India, organized annually in November. This year, the 6th annual conference, Sangam14 (7th, 8th and 9th November 2014), was held at Hotel Crowne Plaza Bengaluru Electronics City, India.
I had the honour to present papers on
- Histograms : Pre-12c and now
- Adaptive Query Optimization
Both the papers were well received by the audience.
On the first day, a full day seminar on “Optimizer Master Class” by Tom Kyte was simply great. Hats off to Tom who conducted the session through the day with relentless energy, answering the queries during breaks without taking any break himself.
The pick of the second day was Maria Colgan’s 2 hour session on “What You Need To Know About Oracle Database In-Memory Option”. The session was brilliant, to the point and packed with knowledge about the new feature.
Aman Sharma’s session on 12c High Availability New features was very well conducted and quite informative.
On the third day there was a one-hour session by Dr. Rajdeep Manwani on “Time to Reinvent Yourself – Through Learning, Leading, and Failing”. The session was truly amazing and left the audience introspecting.
On the whole, it was a learning experience with the added advantage of networking with Oracle technologists from core Oracle technology as well as Oracle Applications. Thanks to all the members of organizing committee whose selfless dedication and efforts made the event so successful. Thanks to all the speakers for sharing their knowledge.
Looking forward to SANGAM 15….
Last week I spoke on a panel at the Association of Public and Land-grant Universities (APLU) annual conference. Below are the slides and abridged notes on the talk.
It is useful to look across many of the technology-driven trends affecting higher education and ask what that tells us about the faculty of the future. Distance education (DE) of course is not new; the first DE course, teaching shorthand, was offered from London in the mid-1800s. These distance, or often correspondence, courses have expanded over time, but with the rise of the Internet, online education (today’s version of DE) has been accelerating over the past 20 years to become quite common in our higher education system.
For the first time, IPEDS has been collecting data on DE, starting with Fall 2012 data. We finally have some real data to show us what is happening state-by-state and by different measures. We’re talking numbers from 20 to 40+% of students taking at least one online course at public 4-year institutions. This is no longer just a fringe condition for our students – it’s hitting the mainstream.
We’re now in an era where online courses are becoming a standard part of our students’ educational experience. The student demographics and experience are changing. Much of this is driven by working adults, people coming back into college to get a degree, and what used to be called non-traditional students. What we know, of course, is that non-traditional students are now in the majority – we need new terminology.
The numbers we’re discussing with distance education really understate the change. There is no longer a simple dualism of traditional vs. online education. We’re seeing an emerging landscape of educational delivery models. What does this emerging landscape of educational delivery models look like? I have categorized the models not just in terms of modality—ranging from face-to-face to fully online—but also in terms of the method of course design. These two dimensions allow a richer understanding of the new landscape of educational delivery models. Within this landscape, the following models have emerged: ad hoc online courses and programs, fully online programs, School-as-a-Service or Online Service Providers, competency-based education, blended/hybrid courses and the flipped classroom, and MOOCs.
The vertical axis of course design gets at the core assumption that underlies much of the higher education system – the one-to-one relationship between a faculty member and a course. With many of the new models, we’re getting into multi-disciplinary faculty team designs and even team-based course designs that include faculty, subject matter experts, instructional designers, and multimedia experts. These models raise a lot of questions over ownership of content and the ability or permission to duplicate course sections.
These new models change the assumptions of who owns the course, and it leads to different processes for designing, delivering, and updating courses–processes that just don’t exist in traditional education. The implications of this approach are significant. These differences create a barrier that very few institutions can cross.
It is culturally difficult to cross the barrier into team-based course design, and yet this is what many of the new technology-enabled models require.
There is another case of seeing the Course as a Product. Previously we had three separate domains: content (typically provided by publishers), platforms (typically provided by LMS vendors), and course and curriculum design (typically provided by faculty and academic departments). What we’re seeing more recently is the breakdown, or merging, of these domains, with various products and services overlapping. Digital content includes both content and platform. Courseware, however, takes this to the next level and organizes the content and delivery around learning outcomes. In other words, Courseware actually overlaps into the domain of course and curriculum design.
From an organizational change perspective, however, we are just now starting to see how digital education is affecting the mainstream of higher education. We’re not just dealing with niche programs but also having to grapple with how these changes are affecting our institutions as a whole.
Another way of viewing this situation is that we had been used to people experimenting with digital education as a group quietly playing in the corner.
But these people are contained no more and are loose in the house, often causing chaos but also having fun.
These moves raise many questions that need to be addressed at a policy and faculty governance level.
- How broadly are we applying these initiatives? There are big questions in figuring out which pilot programs to start, and whether and when to expand the new models beyond an isolated program.
- Who owns the course when a team works on the design from start to finish?
- Who needs to give permission to take a master course and duplicate into multiple shells, or course sections, taught by others?
- How should faculty be credited for team-based course design and how should professional development opportunities adjust?
The late family therapist Virginia Satir created a model that can describe much of the changes arising from technology-based innovations. The model shows how social systems or cultures react to a transformative event through various stages (see Steve Smith’s post for more information).
The issue for our discussion is that a foreign element – the change or innovation – is the key event that triggers the move away from the late status quo. This change typically leads to resistance, and eventually to a period of chaos. During these two phases, the performance of the social system fluctuates to a large degree and actually is often worse than during the status quo phase, as the social system wrestles with how to integrate the change in a manner that produces benefits. The second key event is the transforming idea, when people determine how to integrate the innovation into the core of the social system. This integration phase leads to real performance improvements as well as less fluctuation. As the innovation reaches a critical mass, a new status quo develops.
It is not a given that the innovation actually takes hold; there are cases where the social system does not benefit from the innovation.
Some of the implications for faculty during these times of change:
- With all of the changes, it’s not just that change will be difficult, but also that performance will fluctuate wildly, and often our outcomes will get worse as the system adapts to an innovation.
- The foreign element that dismantles the status quo is not necessarily the basis of technology adoption that gets adopted. The transforming idea is typically related to the foreign element, but it is not equivalent. Faculty ideally will have the time and opportunity to help “find” the transforming idea.
- It would be a mistake to add accountability measures prematurely, when the system has not had a chance to figure out how to successfully improve outcomes.
Many of these digital education models also raise the question of whether faculty members need to be on campus, and if not, what support structures should be in place to help these distance faculty. What about professional development opportunities? Beyond that, how do you include distance faculty in governance processes?
Other changes, such as competency-based education, can move beyond seat time as a core design element. But how does this change faculty compensation and faculty workload?
We also need to revisit faculty age assumptions. What I’m seeing lately is more and more evidence that the usual assumption is incorrect – older faculty in general are not more resistant to change than younger faculty – and this could have implications for ed tech initiatives struggling to get faculty buy-in.
In a recent post here at 20MM, I pointed out an interesting finding from a recent survey on the use of Open Educational Resources (OER) by the Babson Survey Research Group.
It has been hypothesized that it is the youngest faculty that are the most digitally aware, and have had the most exposure to and comfort in work with digital resources. Older faculty are sometimes assumed to be less willing to adopt the newest technology or digital resources. However, when the level of OER awareness is examined by age group, it is the oldest faculty (aged 55+) that have the greatest degree of awareness, while the youngest age group (under 35) trail behind. The youngest faculty do show the greatest proportion claiming to be “very aware” (6.7%), but have lower proportions reporting that they are “aware” or “somewhat aware.”
Combine this finding with one from another recent survey by Gallup, sponsored and reported by Inside Higher Education.
The doubt extends across age groups and most academic disciplines. Tenured faculty members may be the most critical of online courses, with an outright majority (52 percent) saying online courses produce results inferior to in-person courses, but that does not necessarily mean opposition rises steadily with age. Faculty respondents younger than 40, for example, are more critical of online courses (38 percent) than are those between the ages of 50 and 59 (34 percent).
These findings challenge the predominant assumption about older faculty being more resistant to change, but I would not consider it proof of the reverse. For now, I think the safest assumption is to stop assuming that age is a determining factor for ed tech and pedagogical changes from faculty members. What are the implications?
- I have heard informal comments at schools about instituting change by waiting it out – letting the resistant older faculty retire over time and allowing innovative younger faculty to change the culture. This approach and assumption could be a mistake.
- Everett Rogers has found that opinion leaders play a crucial role in the change process. There could be key advantages in actively reaching out to older faculty who might be established opinion leaders to include them directly in change initiatives.
- We should not assume that older faculty would not want additional support and professional development. These ‘senior’ faculty members may need additional opportunities to learn new technologies, but you might be surprised to find they are more receptive to experimentation and participation in change initiatives.
I would not presume to be able to answer these questions for you, but I think it is important to highlight how technology changes will have faculty support and management implications that go well beyond niche programs and could change the faculty of the future. These innovations are having a broader effect.
Q (audience). Another dimension is that we’re seeing more need for interaction, seeking greater impact with students. We need more meaningful interactions between faculty and students. How do these changes apply to interaction?
A. One of the most encouraging findings in the Inside Higher Ed faculty survey mentioned above is that the biggest marker of quality in online learning (and hopefully f2f learning) is mentorship. The quality of an online course or program depends on the design and implementation. There are a lot of bad online courses with poor engagement. But at the same time there are many well-designed online courses with more interaction between faculty and student than is even possible in traditional face-to-face courses. For example, online tools can increase the ability to reach out to introverts and bring them into group discussions. Well-designed learning analytics can act as a teacher’s eyes and ears to see more directly how different students are doing in the class. Moving forward, this is one of the biggest opportunities to enhance interaction. You raise a good point, though – it’s a challenge and cannot just be solved automatically.
If faculty or an institution fall back on simply placing a traditional course design online, there will be problems. Some of the best-designed courses, however, go beyond the official LMS tools and use social media, blogs, and various interactive tools to enhance creativity and interaction. Long and short – it’s not a matter of whether a course or program is put online; it’s a matter of how the course is designed, the faculty role in actively creating opportunities for interaction, and adequate support for students and faculty.
The post APLU Panel: Effects of digital education trends on teaching faculty appeared first on e-Literate.