Skip navigation.

Feed aggregator

If you are in Cleveland, don't miss our January 23, 2015 NEOOUG meeting!

Grumpy old DBA - Fri, 2015-01-16 14:09
Please join us next Friday for our first regular meeting of 2015. Mark Rabne, our resident Oracle technical geek, will take us through the major 2014 Oracle technology and application announcements. It's essentially a recap of the major items from Oracle OpenWorld 2014, plus some additional announcements made since then.

It's the usual deal at the Rockside Road Oracle office: free lunch at noon and networking opportunities. The meeting starts at 1 pm.

Our March meeting will be Big Data related (and we have a great announcement coming up about a workshop for our GLOC 2015 conference in May).
Here is the info on the Jan 23, 2015 meeting.

I hope to see you there!
Categories: DBA Blogs

ERROR: The following required ports are in use: 6801 : WLS OAEA Application Port

Vikram Das - Fri, 2015-01-16 13:55
Anil pinged me today when his adop phase=fs_clone failed with this error message:

-----------------------------
ERROR: The following required ports are in use:
-----------------------------
6801 : WLS OAEA Application Port
Corrective Action: Free the listed ports and retry the adop operation.

Completed execution : ADOPValidations.java

====================================
Inside _validateETCHosts()...
====================================

This is a bug mentioned in the appendix of article: Integrating Oracle E-Business Suite Release 12.2 with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Doc ID 1576425.1)
Bug 19817016: The following errors are encountered when running fs_clone after completing AccessGate and OAM integration and after completing a patch cycle:

Checking  WLS OAEA Application Port on aolesc11:  Port Value = 6801
RC-50204: Error: - WLS OAEA Application Port in use: Port Value = 6801

-----------------------------
ERROR: The following required ports are in use:
-----------------------------
6801 : WLS OAEA Application Port
Corrective Action: Free the listed ports and retry the adop operation.

Workaround:
Stop the oaea managed server on the run file system before performing the fs_clone operation, immediately after the accessgate deployment.

Solution:
This issue will be addressed through Bug 19817016.
If you read the bug:
Bug 19817016 : RUNNING ADOP FS_CLONE FAILS DUE TO PORT CONFLICT BETWEEN RUN AND PATCH EDITION

Bug attributes:

  • Type: B - Defect
  • Severity: 2 - Severe Loss of Service
  • Status: 11 - Code/Hardware Bug (Response/Resolution)
  • Product Version: 12.2.4
  • Fixed in Product Version: (not set)
  • Platform: 226 - Linux x86-64
  • Platform Version: ORACLE LINUX 5
  • Database Version: 11.2.0.3
  • Affects Platforms: Generic
  • Created: 14-Oct-2014; Updated: 02-Dec-2014
  • Base Bug: N/A
  • Product Source: Oracle
  • Related Products: Line: Oracle E-Business Suite; Family: Applications Technology; Area: Technology Components; Product: 1745 - Oracle Applications Technology Stack
Hdr: 19817016 11.2.0.3 FSOP 12.2.4 PRODID-1745 PORTID-226
Abstract: RUNNING ADOP FS_CLONE FAILS DUE TO PORT CONFLICT BETWEEN RUN AND PATCH EDITION

*** 10/14/14 11:58 am ***
Service Request (SR) Number:
----------------------------


Problem Statement:
------------------
Running fs_clone after completing EBS and OAM integration and after
completing a patch cycle results in fs_clone failing with the following
errors:

Checking  WLS OAEA Application Port on aolesc11:  Port Value = 6801
RC-50204: Error: - WLS OAEA Application Port in use: Port Value = 6801

-----------------------------
ERROR: The following required ports are in use:
-----------------------------
6801 : WLS OAEA Application Port
Corrective Action: Free the listed ports and retry the adop operation.

Detailed Results of Problem Analysis:
-------------------------------------
The problem is due to the newly added managed server port being the same for
both the run and patch edition.  Going back to the sequence of steps and
tracking the port assignment, it showed the following:

- deploy accessgate on patch
Creates managed server - oaea_server1:6801
This is the default port and doing this to the patch edition...

fs2 - run -> 6801 port
fs1 - patch -> 6801 port

- complete OAM registration
- close patching cycle
- cutover
- after cutover, SSO is working

fs1 - run -> 6801 port
fs2 - patch -> 6801 port

- fs_clone -> fails due to both run(fs1) and patch(fs2) referencing the same
port 6801

Configuration and Version Details:
----------------------------------
OAM - 11.1.2.2.0
WG - 11.1.2.2.0
EAG - 1.2.3
WT - 11.1.1.6.0

EBS 12.2.4 w/ AD/TXK delta 5

Steps To Reproduce:
-------------------
As part of the EBS integration w/ OAM, we add a managed server for use as the
EBS AccessGate (EAG) to the existing WLS in EBS.  There is an option to do
this to both run edition, as well as the patch edition during an active patch
cycle.  In this case the latter was done.  Here is a summary of the steps
used:

1. Start patch cycle
2. Integrated OID and EBS
3. Cutover
4. Confirmed OID provisioning is working
5. Start patch cycle
6. Apply pre-req EBS patches for OAM
7. Proceed w/ OAM integration on patch file system
8. Cutover
9. Confirmed SSO/OAM is working
10. Run fs_clone -> this is where the issue appears


Additional Information:
-----------------------
The workaround here is to stop the oaea_server1 managed server operating in
the run edition on port 6801, and then re-running fs_clone.  Once this is
done, fs_clone completes and the patch edition now operates on port 6802 for
the same managed server.

For A Severity 1 Bug: Justification and 24x7 Contact Details:
-------------------------------------------------------------



Until a patch is made available, you need to shut down the oaea managed server and rerun fs_clone. So much for keeping all services online and the promise of no outage during fs_clone.

Categories: APPS Blogs

January 21: Swedbank HCM Cloud Reference Forum

Linda Fishman Hoyle - Fri, 2015-01-16 10:57

Join us for an Oracle HCM Cloud Customer Reference Forum on Wednesday, January 21, 2015, at 9:00 a.m. PT / 12:00 p.m. ET. You will hear Fredrik Rexhammar, Business Information Officer (BIO) in Group HR at Swedbank, discuss the company’s initiative to replace its in-house HR system with a cloud-based HR solution that included Core HR, Compensation, Talent Review and Performance & Goal Management. The goal was to better manage its people and support its complex compensation model.

Fredrik will talk about Swedbank’s selection process for new HR software, its implementation experience with Oracle HCM Cloud, and the expectations and benefits of its new modern HR system.

Swedbank is a modern bank firmly rooted in Swedish savings bank history. It is an inclusive bank with approximately 20,000 employees, 8 million private customers, and more than 600,000 corporate and organizational customers.

You can register now to attend the live Forum on Wednesday, January 21, 2015 at 9:00 a.m. PT / 12:00 p.m. ET and learn more from Swedbank directly.

The Database Protection Series– Common Threats and Vulnerabilities- Part 2

Chris Foot - Fri, 2015-01-16 10:35

This is the third article of a series that focuses on database security. In my introduction, I provide an overview of the database protection process and what is to be discussed in future installments. In last month’s article, we began with a review of the various database vulnerabilities and threat vectors we need to address. In this article, we’ll finish our discussion of the most common threats and vulnerabilities. In the next installment of this series, we’ll take a look at the database vulnerability analysis process. We’ll begin by learning how to perform an initial database vulnerability assessment. In addition, we’ll discuss the importance of performing assessments on a regular basis to ensure that no new security vulnerabilities are introduced into our environment.

Unsecured Non-Database Files

It’s fairly obvious that, as DBAs, our focus will be on securing our sensitive database data stores. However, during the course of normal processing, the database often interacts with flat files and other objects that may contain sensitive data that needs to be secured. For our review, we’ll classify the data as we have always done – as input data that the database ingests, or output data that the database generates.

Databases can receive data from a host of different mechanisms:

  • The database can retrieve data directly from other databases or be sent data from those systems. Database links in Oracle and linked servers in Microsoft SQL Server are often implemented to share data. If your sensitive database can be accessed using these features, you will need to take the additional steps required to secure those access mechanisms. Both Oracle and Microsoft have made improvements to the security of external database links, but the level of protection depends on how they are implemented. There will be times when this will require you to secure multiple database targets. It broadens the scope of the security activities you will be required to perform, but the sensitive database data store will be vulnerable until you do. (A quick way to inventory such links is shown after this list.)
  • Input files that are used by the database product’s load or import utility. DBAs can be pretty creative about using the database’s inherent toolsets to ingest data into their databases or transfer it to other systems. You will need to identify the data they contain and secure these files accordingly.
  • ETL products that extract, transform and load data into other data stores. ETL products are able to access data from a variety of sources, transform it into a common format and move it to the target destination. Each ETL product uses different strategies to collect and process the data. Identify what work files are used, how the product is secured and the sensitivity of the data that is being accessed as well as sent to other systems.
  • Middleware products that transfer data between disparate systems. Like ETL products, you will identify the sensitivity of the input and output, work files produced and how the product is secured.
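To inventory the database links mentioned in the first bullet, a simple dictionary query is a reasonable starting point. A minimal Oracle sketch (remember that your sensitive database may also be the target of links defined in other databases):

-- List the database links defined in this database, so each one
-- can be reviewed for necessity and for the privileges it carries.
select owner, db_link, username, host, created
from   dba_db_links
order  by owner, db_link;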

Databases also have the ability to produce various forms of output:

    • Application report files that are either stored on disk or sent directly to printers. An in-depth review of the application report output the database generates will need to be performed. If the data being reported on contains sensitive data elements, you will need to determine if the printers are in secure locations, the personnel that have access to them and if the reports are stored on disk, how the storage is secured.
    • Flat file output generated by the database. Besides the application reports we just discussed, there are numerous methods technicians use to generate flat file output from the database data store. Oracle external tables, export files, custom-coded output files generated by developers and DBAs during debugging sessions, and system trace execution all have the capability to expose data. Everything from the spool command in SQL*Plus to the PL/SQL utl_file package needs to be evaluated. A best practice is to provide a secure set of folders or directories in the operating system running the database and to not allow non-secure destinations to be utilized (a sketch of such a locked-down directory follows this list).
    • Database product and third-party database backup files. All leading database products provide the functionality to encrypt database backup files as do most third-party offerings. An analysis is required to determine how the data is encrypted, at what point in the process is it encrypted and how is the encryption mechanism secured.
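As a concrete illustration of the locked-down output destination recommended above, here is a minimal Oracle sketch. The directory name, OS path and grantee are hypothetical; the OS path itself should be readable only by the database software owner:

-- One secured directory object for all flat-file output.
create or replace directory secure_out as '/u01/app/secure_out';
-- Grant access only to the accounts that genuinely need it.
grant read, write on directory secure_out to batch_user;

Spool destinations, utl_file writes and external-table files can then be confined to this directory object, and the grants against it are easy to audit.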
Unsecured Data Transmissions

One of the more challenging tasks will be to identify the mechanisms used to transmit database data throughout the organization. You need to determine what’s being transmitted over the network wire as well as wirelessly. One of the constraints I have in this series is that I can’t get into the details that would allow you to secure your connections to the target database; that’s far beyond the scope and intent of this series of articles, which is meant to be a general overview of database protection best practices. All major database manufacturers provide a wealth of documentation on how to secure the communication mechanisms and encrypt data transfers, as well as secure the operating system the database runs on. If you are serious about protecting data transmissions, a thorough review of vendor documentation is essential. In addition, you’ll need to become quite good friends with your network engineers, as their assistance and expertise will be required.
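To give one concrete example of those vendor mechanisms, Oracle's native network encryption is controlled by a few sqlnet.ora parameters on the server. A minimal sketch, assuming you have verified the supported algorithms and any licensing implications for your version:

# Require encrypted, integrity-checked client connections.
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)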

Access Tools

Databases can be accessed using a variety of tools. That’s the benefit of using a database; you can interact with it using everything from Excel to a sophisticated enterprise-wide program suite. You will need to work with end-users, application developers and your security team to determine what happens to that data after it is retrieved from the database. For example, if a business user accesses sensitive data using Excel, where do they store the spreadsheets? The solution is to inter-weave the proper security procedures, constraints and end-point permissions to safeguard the data.

Application Hacks – SQL Injection and Buffer Overflows

SQL injection occurs when an attacker sends commands to the database by attaching them to web form input. The intent is to grant themselves privileges or to access the data directly. In the past, hackers were required to manually attach the malicious code to the statement; there are now hacking toolkits available that automate the process. SQL injection attempts to confuse the database so it is unable to distinguish between code and data.

Here are a couple of very rudimentary examples of SQL injection (as processed by the database):

SELECT name, address, SSN FROM employees WHERE lastname='FOOT' OR 'x'='x'

The program wants to return the name, address and social security number for a specific employee. The appended OR 'x'='x' always evaluates to true and allows the hacker to return every employee’s information.

SELECT name, address FROM employees WHERE lastname='FOOT'; SELECT * FROM employees;

Most databases allow the use of delimiters to string statements together. In this case, instead of selecting just the name and address, the SQL statement injected at the end dumps the entire contents of the table.

Statements that use parameters as input (as opposed to dynamic statements that splice the input values into the SQL text at execution time), as well as stored procedures containing the SQL code, prevent hackers from attaching malicious code to the statements. For example, in the OR 'x'='x' example used above, a SQL statement using a parameter as input (lastname = @lname) would treat the 'x'='x' value literally and simply fail to find a matching employee.
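Here is a minimal PL/SQL sketch of that difference, using the hypothetical employees table from the examples above:

declare
  l_input varchar2(100) := q'[FOOT' OR 'x'='x]';  -- hostile input
  l_count number;
begin
  -- Unsafe (shown commented out): the input is spliced into the SQL text,
  -- so the OR clause becomes part of the statement and matches every row.
  -- execute immediate 'select count(*) from employees where lastname = '''
  --                   || l_input || '''' into l_count;

  -- Safe: the input is passed as a bind variable, so the database treats
  -- the entire value as a literal that is never parsed as SQL.
  execute immediate
    'select count(*) from employees where lastname = :lname'
    into l_count
    using l_input;
end;
/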

A buffer overflow, also called a buffer overrun, occurs when the data being input to the buffer overflows into adjacent memory; the volume of input exceeds the buffer size. This is a fairly complex hack, requiring a strong knowledge of the programming language behind the buffer. The ease of performing a buffer overflow attack depends on the application language used, how the software is protected and how the developers write the code that processes data. By carefully crafting input to a web application, the attacker is able to execute the code contained in the overflow: the hacker issues commands that overwrite the internal program structures and then executes the additional code. The most common goals of this hack are to crash the program, corrupt the data or have the code stored in the overflow execute malicious instructions to access data or grant authorities. You’ll quickly find listings on the web of languages that are vulnerable to buffer overflows; some are far more vulnerable than others.

I’ll be devoting an article to ongoing database security strategies. One of the key steps of that process will be to educate developers, DBAs, network engineers and OS administrators on how security best practices can be utilized to harden the application ecosystem. Although DBAs may feel that preventing SQL injection, buffer overflows and other application attacks are the responsibility of the development teams, the DBA must take an active role in their protection.

Privilege Abuse

Privilege abuse can be broken down into the following two categories:

  • Intentional Abuse – An example of an intentional abuse of privileges would be a database administrator, senior level application developer or business user accessing data they shouldn’t.
  • Non-Intentional Abuse – The user accesses sensitive data in error, and the data is exposed unintentionally: data stored in an unsecured directory, on a laptop that is subsequently stolen, or on a USB drive, for example. The list of potential vulnerabilities is pretty much endless.

Disgruntled employees, especially disgruntled ex-employees, and those with just a general criminal inclination are common offenders. To safeguard sensitive data stores, the organization can ensure that background and credit checks are performed on new employees, only the privileges necessary for the employee to perform their work are granted and security credentials are immediately revoked upon termination for any reason. Once again, we will focus more on this topic in upcoming articles of this series.

Audit Trails (or lack thereof)

Auditing is not an alerting mechanism. Auditing is activated, the data is collected and reports are generated that allow the various activities performed in the database to be analyzed for the collected time period.

Identifying a data breach after the fact is not database protection; it is database reporting. To protect the databases we are tasked with safeguarding, the optimal solution is to alert in real time, or to alert on and stop the unwarranted data accesses from occurring. We’ll discuss the various real-time breach protection products during our discussion on security monitoring products.

You will need to be very scientific when selecting the level of auditing to perform. Too much will lead to an excessive use of finite system resources – auditing can have a significant impact on the system and database – while too little creates the potential of missing critical security events. An in-depth analysis of who and what is to be audited is an absolute requirement.

Auditing just the objects containing sensitive data elements and the users with high levels of privileges is a good starting point. Leading database vendors like Oracle, Microsoft and IBM all have advanced auditing features that reduce auditing’s impact on the system by transferring it to other components. In addition, most vendors offer add-on products that improve auditing’s capabilities at an additional price.
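As an illustration of that starting point, here is a minimal Oracle 12c unified-auditing sketch; the hr.employees table and the privileged account dba_smith are hypothetical:

-- Audit all access to one sensitive table.
create audit policy sensitive_emp_pol
  actions select on hr.employees, insert on hr.employees,
          update on hr.employees, delete on hr.employees;
audit policy sensitive_emp_pol;

-- Audit everything one highly privileged account does.
create audit policy priv_user_pol actions all;
audit policy priv_user_pol by dba_smith;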

Auditing plays a critical role in database security, especially to those organizations that don’t have a real-time breach protection solution. Properly populated audit trails allow administrators to identify fraudulent activities, and the audit reports are often requirements for the various industry regulations including SOX, HIPAA and PCI.

Poor Security Strategies, Controls and Education

The two critical components that play a significant role in the database protection process are education and awareness; the awareness that your systems are vulnerable to breaches and not putting your head in the sand thinking that your systems aren’t potential targets. Pay a quick visit to the various websites that record data breaches. Although you will see information stating that organizations storing massive numbers of credit cards, like large retailers, are the most popular targets, you will also find that no organization is immune. Breaches occur daily, and all organizations are targets.

According to the Symantec 2014 Breach Investigations Report, companies with fewer than 250 employees accounted for 31% of all reported attacks. Visa reports an even more alarming statistic: 85% of all Visa card breaches occur at the small to medium-sized business level. The National Cyber Security Alliance SMB report states that 60% of small businesses close their doors within 6 months of a data breach.

When sensitive data is breached for any reason, it can threaten the survivability of your organization. The financial impact of the breach is not the only issue that affects companies that are victims of unauthorized data access. Loss of customer goodwill, bad press and legal penalties (lawsuits, fines, etc.) must also be considered.

After you realize the importance of protecting your sensitive database data stores, you need to transfer that awareness to your entire organization. DBAs can’t protect their environments on their own. All IT groups must become actively involved. Management buy-in is crucial. Expenditures on products and personnel may need to be made to improve the level of protection required to safeguard sensitive data assets. The organization has to commit the resources necessary to generate a well thought out enterprise-wide security strategy that requires that the appropriate level of controls be in place and audited regularly. If you don’t, I’ll be reading about your shop in the next data breach newsletter.

Learning how to secure your environments is like learning anything else. You will need to commit time to learning various security best practices. At an enterprise level, industry regulatory requirements like SOX, HIPAA and PCI DSS provide a laundry list of protective controls. Download the compliance control objectives. It will give your organization an excellent starting point. In RDX’s case, we decided to become PCI DSS and HIPAA compliant. PCI DSS contains a little over 300 separate security objectives and information about how those objectives are to be audited to demonstrate proof of compliance.

In the next installment of this series, we’ll take a look at the database vulnerability analysis process.

Thanks for reading.

The post The Database Protection Series– Common Threats and Vulnerabilities- Part 2 appeared first on Remote DBA Experts.

Chick-fil-A joins the payment card breach club [VIDEO]

Chris Foot - Fri, 2015-01-16 09:26

Transcript

Hi, welcome to RDX. Given the number of payment card breaches that have occurred over the past couple of years, it’s no surprise that a fast food joint recently joined the list of companies that have been affected.

According to eSecurity Planet, Chick-Fil-A recently noted that a few of its restaurants have experienced unusual credit and debit card activity. Additional reports suggest that Chick-Fil-A is the link to approximately 9,000 instances of payment card loss. It’s possible that the perpetrators managed to steal payment card numbers from Chick-Fil-A’s databases, but analysts are still investigating.

First, it may be appropriate for Chick-Fil-A as well as other retailers to use tokenization, which will prevent hackers from accessing payment data. In addition, setting up a database security monitoring solution will allow specialists to receive alerts the minute a server records suspicious activity.

Thanks for watching!

The post Chick-fil-A joins the payment card breach club [VIDEO] appeared first on Remote DBA Experts.

Did you forget to run root.sh?

Laurent Schneider - Fri, 2015-01-16 09:08

Not easy to detect, and depending on the product (agent/database), it may have only limited side effects.

Like external jobs not running, or operating system statistics not being collected.

But it is not always easy to diagnose.

For instance, if you patch from OMS 12cR2 to 12cR3 and you run the root.sh only in 12cR2, there are very few statistics missing (one is OS_STORAGE_ENTITY).

Running the root.sh doesn’t generate a log file or an entry in the inventory.

To check if it was executed, check what it is supposed to do. It is a bit different in each version. One thing it always does is change the ownership to root and set the setuid bit on a few binaries. For the database, this is done in sub-scripts called rootadd.sh (10g) or rootadd_rdbms.sh (11g/12c).


eval ls -l $(find $ORACLE_HOME -name "rootadd*sh" -exec awk '$1=="$CHOWN"&&$2=="root"{print $3}' {} \;|sort -u)

-rwsr-x--- root dba .../product/11.2.0/db_4/bin/extjob
-rwsr-x--- root dba .../product/11.2.0/db_4/bin/jssu
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmb
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmhs
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmo
-rwsr-x--- root dba .../product/11.2.0/db_4/bin/oradism
-rw-r----- root dba ...11.2.0/db_4/rdbms/admin/externaljob.ora

If the ownership is root, you definitely did run the root.sh.

On the 12c agent, there is a FULL_BINARY_LIST variable that points to the list of root binaries in sbin:


eval $(grep FULL_BINARY_LIST= $AGENT_HOME/root.sh)
cd $AGENT_HOME/../../sbin
ls -l $FULL_BINARY_LIST

-rws--x--- root dba nmb
-rws--x--- root dba nmhs
-rws--x--- root dba nmo

If all the files exist and belong to root, it looks like you did run the root.sh.

Spatial space

Jonathan Lewis - Fri, 2015-01-16 07:00

One thing you (ought to) learn very early on in an Oracle career is that there are always cases you haven’t previously considered. It’s a feature that is frequently the downfall of “I found it on the internet” SQL.  Here’s one (heavily paraphrased) example that appeared on the OTN database forum a few days ago:

select table_name,round((blocks*8),2)||'kb' "size" from user_tables where table_name = 'MYTABLE';

select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size" from user_tables where table_name = 'MYTABLE';

The result from the first query is 704 kb, the result from the second is 25.4 kb … fragmentation, rebuild, CTAS etc. etc.

The two queries are perfectly reasonable approximations (for an 8KB block size, with pctfree of zero) for the allocated space and actual data size for a basic heap table – and since the two values here don’t come close to matching it’s perfectly reasonable to consider doing something like a rebuild or shrink space to reclaim space and (perhaps) to improve performance.

In this case it doesn’t look as if the space reclaimed is likely to be huge (less than 1MB), on the other hand it’s probably not going to take much time to rebuild such a tiny table; it doesn’t seem likely that the rebuild could make a significant difference to performance (though apparently it did), but the act of rebuilding might cause execution plans to change for the better because new statistics might appear as the rebuild took place. The figures came from a test system, though, so maybe the table on the production system was much larger and the impact would be greater.

Being cautious about wasting time and introducing risk, I made a few comments about the question – and learned that one of the columns was of type SDO_GEOMETRY. This makes a big difference to what to do next, because dbms_stats.gather_table_stats() doesn’t process such columns correctly, which results in a massive under-estimate for the avg_row_len (which is basically the sum of avg_col_len for the table). Here’s an example (run on 12c, based on some code taken from the 10gR2 manuals):


drop table cola_markets purge;

CREATE TABLE cola_markets (
  mkt_id NUMBER,
  name VARCHAR2(32),
  shape SDO_GEOMETRY);

INSERT INTO cola_markets VALUES(
  1,
  'cola_a',
  SDO_GEOMETRY(
    2003,  -- two-dimensional polygon
    NULL,
    NULL,
    SDO_ELEM_INFO_ARRAY(1,1003,3), -- one rectangle (1003 = exterior)
    SDO_ORDINATE_ARRAY(1,1, 5,7) -- only 2 points needed to
          -- define rectangle (lower left and upper right) with
          -- Cartesian-coordinate data
  )
);

insert into cola_markets select * from cola_markets;
/
/
/
/
/
/
/
/
/

execute dbms_stats.gather_table_stats(user,'cola_markets')
select
	avg_row_len, num_rows, blocks,
	round(avg_row_len * num_rows / 7200,0) expected_blocks
from user_tables where table_name = 'COLA_MARKETS';

analyze table cola_markets compute statistics;
select
	avg_row_len, num_rows, blocks,
	round(avg_row_len * num_rows / 7200,0) expected_blocks
from user_tables where table_name = 'COLA_MARKETS';

If you care to count the number of times I execute the “insert as select” it’s 10, so the table ends up with 2^10 = 1024 rows. The 7,200 in the calculated column converts bytes to approximate blocks on the assumption of 8KB blocks and pctfree = 10. Here are the results following the two different methods for generating object statistics:


PL/SQL procedure successfully completed.

AVG_ROW_LEN   NUM_ROWS     BLOCKS EXPECTED_BLOCKS
----------- ---------- ---------- ---------------
         14       1024        124               2

Table analyzed.

AVG_ROW_LEN   NUM_ROWS     BLOCKS EXPECTED_BLOCKS
----------- ---------- ---------- ---------------
        109       1024        124              16

Where does the difference in Expected_blocks come from? (The Blocks figure is 124 because I’ve used 1MB uniform extents – 128 blocks – under ASSM, which means 4 space management blocks at the start of the first extent.)

Here are the column lengths after the call to dbms_stats: as you can see the avg_row_len is the sum of avg_col_len.


select column_name, data_type, avg_col_len
from   user_tab_cols
where  table_name = 'COLA_MARKETS'
order by
        column_id
;

COLUMN_NAME          DATA_TYPE                AVG_COL_LEN
-------------------- ------------------------ -----------
MKT_ID               NUMBER                             3
NAME                 VARCHAR2                           7
SYS_NC00010$         SDO_ORDINATE_ARRAY
SHAPE                SDO_GEOMETRY
SYS_NC00008$         NUMBER                             0
SYS_NC00004$         NUMBER                             4
SYS_NC00005$         NUMBER                             0
SYS_NC00006$         NUMBER                             0
SYS_NC00007$         NUMBER                             0
SYS_NC00009$         SDO_ELEM_INFO_ARRAY

The figures from the analyze command are only slightly different, but fortunately the analyze command uses the row directory pointers to calculate the actual row allocation, so picks up information about the impact of inline varrays, LOBs, etc. that the dbms_stats call might not be able to handle.


COLUMN_NAME          DATA_TYPE                AVG_COL_LEN
-------------------- ------------------------ -----------
MKT_ID               NUMBER                             2
NAME                 VARCHAR2                           6
SYS_NC00010$         SDO_ORDINATE_ARRAY
SHAPE                SDO_GEOMETRY
SYS_NC00008$         NUMBER                             1
SYS_NC00004$         NUMBER                             3
SYS_NC00005$         NUMBER                             1
SYS_NC00006$         NUMBER                             1
SYS_NC00007$         NUMBER                             1
SYS_NC00009$         SDO_ELEM_INFO_ARRAY

As a basic reminder – whenever you do anything slightly non-trivial (e.g. something you couldn’t have done in v5, say) then remember that all those dinky little script things you find on the Internet might not actually cover your particular case.


Oracle Audit Vault and Compliance Reporting

The Oracle Audit Vault has seeded reports for the following compliance and legislative requirements – no additional license is required.

  • Payment Card Industry (PCI)
  • Sarbanes-Oxley Act (SOX)
  • Gramm-Leach-Bliley Act (GLBA)
  • Health Insurance Portability and Accountability Act (HIPAA)
  • United Kingdom Data Protection Act (DPA)

For each compliance statute, the following table lists the included reports –

Compliance Report – Description

  • Activity Overview – Digest of all captured audit events for a specified period of time
  • All Activity – Details of all captured audit events for a specified period of time
  • Audit Settings Changes – Details of observed user activity targeting audit settings for a specified period of time
  • Created Stored Procedures – Stored procedures created within a specified period of time
  • Data Access – Details of audited read access to data for a specified period of time
  • Data Modification – Details of audited data modifications for a specified period of time
  • Database Schema Changes – Details of audited DDL activity for a specified period of time
  • Deleted Stored Procedures – Stored procedures deleted within a specified period of time
  • Entitlements Changes – Details of audited entitlement related activity for a specified period of time
  • Failed Logins – Details of audited failed user logins for a specified period of time
  • New Stored Procedures – Latest state of stored procedures created within a specified period of time
  • Secured Target Startup and Shutdown – Details of observed startup and shutdown events for a specified period of time
  • Stored Procedure Activity Overview – Digest of all audited operations on stored procedures for a specified period of time
  • Stored Procedure Modification History – Details of audited stored procedure modifications for a specified period of time
  • User Login and Logout – Details of audited successful user logins and logouts for a specified period of time

If you have questions, please contact us at info@integrigy.com.

Tags: Auditing, Compliance, Sarbanes-Oxley (SOX), PCI, HIPAA, Oracle Audit Vault
Categories: APPS Blogs, Security Blogs

The info in OTHER_XML of view DBA_HIST_SQL_PLAN

Marco Gralike - Fri, 2015-01-16 04:04
I had some time to spend, killing time, and thought about something that was “on…

Reading MAF iOS Simulator Logging Output

Andrejus Baranovski - Fri, 2015-01-16 03:34
It can be very handy to know how and where to read MAF logging output from the iOS Simulator. It is not that obvious where to find the logging output on a Mac OS system. All log output is written into the application.log file; this file is located inside a temporary application directory. I will explain how to locate this directory and how to open the application.log file. You can read more about MAF testing and logging here - 18.5 Using and Configuring Logging.

The sample mobile application - ADFMobileLogginApp.zip - is pretty basic and contains a System.out.println call to write a message into the application.log file:


The message is written to the log from an Action Listener method, invoked from the Save button. The application.log file is accessible from the following location (you can access it from the console) -

Users/youruser/Library/Application Support/iPhone Simulator/7.1/Applications/com.company.ADFMobileLoggingApp/Documents/logs.

As you can see, the application.log file is stored under the logs directory in the documents folder. You must navigate to the application's temporary folder inside the iOS Simulator to access it:


The application.log file will be available there:


The log file can be opened directly from the console by executing the command open -a TextEdit application.log:


The message from saveAction() is printed in the log:


Enjoy MAF coding on Mac OS!

Oracle Ace Associate

Oracle in Action - Thu, 2015-01-15 23:44


It gives me immense pleasure to share with you the news that I am an “Oracle ACE Associate”.

Thanks to the “Oracle ACE Program” for selecting me to receive the Oracle ACE Associate award.

My heart is full of gratitude for Sir Murali Vallath who nominated me for this.

Thanks to AIOUG for giving me an opportunity to speak during SANGAM 14 and for publishing my white paper, “Histograms – Pre-12c and Now”, in the Oracle Connect issue of Dec 2014.

I want to  thank  my husband  for encouraging me, and readers of my blog for their time, comments and suggestions.

Thank you so much!




The post Oracle Ace Associate appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Log Buffer #406, A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2015-01-15 20:32

This Log Buffer Edition covers blog posts from various bloggers of Oracle, SQL Server and MySQL.

Oracle:

Sync tables: generate MERGE using Unique constraint.

What Hardware and Software Do YOU Want Oracle to Build?

There were a number of new features introduced in Ops Center 12.2.2. One of the shiny ones is an expansion of the backup and recovery capabilities to include Proxy Controllers.

Want to Be a Better Leader? Answer One Question.

Managing a remote Oracle Database instance with “Geographic Edition”.

SQL Server:

Learn how you can use SQLCop to prevent your developers from writing stored procedures that are named sp_ something.

Data Cleaning in SQL 2012 with Data Quality Services.

Stairway to PowerPivot and DAX – Level 9: Function / Iterator Function Pairs: The DAX MAX() and MAXX() Functions.

Options to Improve SQL Server Bulk Load Performance.

Dynamically Create Tables Based on an Access Table

MySQL:

Stored Procedures: critiques and defences.

JSON UDF functions 0.3.3 have been released.

Business Scalability, Operational Efficiency and Competitive Edge with MariaDB MaxScale 1.0 GA.

MySQL 5.7 labs and the HTTP Plugin – inserting, updating and deleting records in MySQL via HTTP.

Hyper-threading – how does it double CPU throughput?

Categories: DBA Blogs

Jumpstart Patch Application and Installs with PeopleTools Templates

PeopleSoft Technology Blog - Thu, 2015-01-15 18:36

PeopleTools OVM templates provide a great way for customers running current levels of PeopleTools to keep up to date with their PeopleTools middle tiers.  The templates provide a simplified method of rolling out new patches, allowing Oracle to bundle, patch, test and deliver all the required middle tier components such as Java, WebLogic, Tuxedo and the Oracle Linux operating system. Using the pre-built templates eliminates the possibility of downloading the wrong version, and/or missing a critical patch of one of the required components. Templates also significantly reduce the amount of time needed to locate, download, patch and test the individual elements of a PeopleTools environment. For customers that need to customize, the PeopleTools templates are a great starting point to create baseline custom templates.

PeopleSoft has produced middle tier templates with PeopleTools patches since PT 8.53.03. These templates can be found on My Oracle Support (MOS), under Patches and Updates, in the PeopleSoft Enterprise PeopleTools product category.

Until now, all templates produced were available for download.  Older templates include components that may have become stale over time. To ensure that only current environments are available, we now plan to make each patch level template available for no more than six months. This will allow customers enough time to download and use a template before environment components become out-of-date. The use of the patched templates allows customers to adopt PeopleTools maintenance faster and reduce the risk of encountering known bugs and security issues.

Should older patches be required, they will still be available via the traditional patching methods, in accordance with our documented Patch Policy.

Secure Enterprise Search (SES) for PeopleSoft templates are also made available on MOS. SES templates are intended to be a ‘black box’ or ‘appliance’ installation. As such, only the most current level of an SES template is available. This appliance is compatible with all PeopleTools levels used with PeopleSoft 9.2 applications.

As new SES templates come out, they REPLACE the previous version found on MOS.

For information on the PeopleTools middle tier templates for Oracle Linux and Exalogic, SES templates and PeopleSoft update images, please see Oracle’s PeopleSoft Virtualization Products.

Additional Information:
Oracle VM Templates for PeopleSoft White Paper
PeopleSoft Virtual Machine Templates for Exalogic Red Paper

Master Detail - Detail in APEX with FOEX

Dimitri Gielis - Thu, 2015-01-15 17:30
In the previous post I talked about Master-Detail and noted that it isn't that easy today to do Master-Detail-Detail declaratively in APEX 4.2 (and 5.0).
Below is a screenshot of FOEX (a framework built on top of APEX) and a Master-Detail-Detail built in it.
You can see the live example at https://www.apexrnd.be/ords/f?p=FOEX_FDOCS:MASTER_DETAIL_DETAIL:0::NO:::

Here's a screenshot of the page behind the scenes:


At first it might seem complex, but it isn't. In FOEX you can put regions in different places on the screen (center, east, west pane, etc.), so many of the regions exist to control those areas.
The most important regions are "List of Customers", "List of Orders" and "Order Items"; those are the regions you see in the first screenshot. The other region, "Manage Order Items", is a Modal Dialog that comes up when you want to add an order item.


My goal is not to explain FOEX in great detail here (you can read about it on their website), but basically they extended APEX with a custom theme, many (many!) plugins and a builder add-on (you can see the "Create FOEX Region" button in the screenshot), so it's really like you are working natively in APEX. Here's a screenshot of what you get when you hit the button to create a FOEX region:


So almost natively you can build your Master-Detail-Detail, through their wizards.

I thought to mention this solution here as well, as although my first choice is to make simple and clean web applications, if you do want a lot of information on your screen (like in a master-detail-detail), and you like ExtJs (which is used behind the scenes), FOEX is probably one of the best choices you have.

APEX R&D is a partner of FOEX, so if you need some more info, don't hesitate to contact us.

Categories: Development

Oracle Priority Support Infogram for 15-JAN-2015

Oracle Infogram - Thu, 2015-01-15 16:14

Oracle Support
Malware sites offering Oracle 'patches', from Proactive Support - Data Integration.
RDBMS
New MOS Notes on Database Upgrades for 12c with or without Oracle Multitenant, from Upgrade your Database – NOW!
And from the same source: Upcoming Upgrade Workshops Jan/Feb 2015
Data Modeling
From that JEFF SMITH: Drawing Foreign Keys & Relationships in SQL Developer Data Modeler
NoSQL
From the Oracle NoSQL Database blog: Big Data SQL for Oracle NoSQL Database (part 1)
Security
From the Identity Management blog:  The Future of User Authentication
OEM
Enterprise Manager Ops Center - The Power of Groups, from Oracle Enterprise Manager blog.
Java
From Joseph D. Darcy's Oracle Weblog: More concise try-with-resources statements in JDK 9
SOA
From SOA & BPM Partner Community Blog: 2 Minute Tech Tip: Industrial SOA
and
A Dirty Dozen Questions on Oracle SOA 12c You Need Answered, but Feared to Ask….
BPM
Repository Creation Utility, MDS and Schema Versions, from the Solving Business Challenges with Oracle's BPM Suite blog.
Fusion
Release 9: The Activity Redesign, from the Fusion Applications Developer Relations blog.
EBS
From Oracle E-Business Suite Technology:
Reminder: EBS 12.0 Extended Support Ends January 31, 2015
EBS 12.1.3 Certified with Microsoft Windows Server 2012 R2
From Oracle E-Business Suite Support Blog:
Webcast: EAM/VCP Integration Features and Demonstration
Easy Assistance for Troubleshooting Issues
Webcast: Handling Coupon in Advanced Pricing
Webcast: Implement & Understand The Oracle Receivables Manage Accounting Exception (Sweeping) Program
Check out the New and Improved R12 Approval Analyzer Diagnostic Script!
…And Finally
The Best Tools for Finding Information When Google Isn't Enough, from LifeHacker.
Cool Hacks to Combat Winter, from DailyFinance.

OAUG BIP SIG ... we're getting the band back together

Tim Dexter - Thu, 2015-01-15 14:28

 Today's post comes to you from Brent at STR Software. If you could help out, it would be greatly appreciated, read on ...

First off, if you are not familiar with the term SIG, it stands for Special Interest Group. OAUG facilitates a number of SIGs to bring together users that share common interests or industries concerning certain Oracle products.

Unfortunately, the BI Publisher SIG has been offline for a number of years and has not been given the attention it needs to be a useful resource for members of OAUG. Well... I'm getting the band back together and I need your help!

The SIG itself was formed to specifically focus on BI Publisher embedded in Oracle EBS, Peoplesoft and JD Edwards. I have put together a survey that is being emailed out to previous members of the SIG to get thoughts on how the SIG can be of service. That list is pretty old and YOU may not be on it, so if you are interested in participating in the SIG (or even if you are not), have a look at the link below and let me know your thoughts. Our first official meeting will be at Collaborate 15 in Las Vegas, hope to see you there!

Take the survey -> here!

Categories: BI & Warehousing

Webcast: Delivering Next-Gen Digital Experiences

WebCenter Team - Thu, 2015-01-15 12:06

Becoming a digital business is imperative for organizations to deliver the next wave of revenue growth, service excellence and business efficiency. And the stakes are high -- 94% of customers discontinue communications because of irrelevant messages and experiences.

Join this webcast for an in-depth look at technologies that enable IT leaders to connect digital customer experiences to business outcomes. Learn how to:
  • Deliver omni-channel experiences that are seamless, tailored and innovative across Paid, Owned and Earned media
  • Convert unknown audiences to known and involved customers
  • Extend reach and orchestrate engagement across all channels and devices
  • Move Marketing from siloed technologies to a single Digital Experience Platform that also connects Marketing to the entire organization
Register now for this webcast.

Live Webcast: February 12, 2014, 10 a.m. PT / 1 p.m. ET

Featured Speaker: Chris Preston, Sr. Director, Customer Strategies, Oracle

Master Detail (-Detail) in APEX

Dimitri Gielis - Thu, 2015-01-15 09:17
In the last posts we used the following tables to show (Report) and edit (Form) the data:

Another way to work with this data is using a Master-Detail report/form.
In the master we have our customer info and in the detail the products he's interested in.

Oracle APEX provides a wizard to create a Master-Detail Form; you just follow the wizard to set it up:


By default the wizard will only show related tables, which is most logical - if you don't see your table, you probably don't have the FK relation on the tables.

You have some options for the layout - edit the records directly on the detail screen or in a separate form. Which one to pick? It depends... for small detail tables I would go for a Tabular Form, but for larger or more complex ones I would go with a separate Form. Tabular Forms are not the most flexible feature in APEX, but for now they are the only declarative option you have to edit multiple records at the same time. Alternatives are to use Modal dialogs, code your own custom solution, or use a solution from somebody else. FOEX, for example, has a nice solution which I'll cover in the next post.

Tabular Forms were improved only a little in APEX 5.0, but Modal Dialogs are native in APEX 5.0. Tabular Forms will be enhanced further in APEX 5.1, which can then do master-detail-detail; APEX 5.1 will also come with another solution - a new "Multi-Row Edit" region type - which could work well in this case.

You find the Master Detail result online at https://www.apexrnd.be/ords/f?p=DGIELIS_BLOG:MASTER_DETAIL


What if our tables were a bit more complex and we needed Master-Detail-Detail today?
We would need to create our own custom "tabular forms" - basically a report where we use the apex_item API... but that is for another post. A minimal sketch of the idea follows.
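Just to give a flavor of that approach, here is a minimal sketch of such a report query built with the apex_item API (the classic emp and dept sample tables are assumed; each p_idx value maps a column to one of the APEX global arrays you would process on submit):

select apex_item.hidden(1, empno)
    || apex_item.text(2, ename, 20, 30) as employee,
       apex_item.text(3, sal, 10, 20)   as salary,
       apex_item.select_list_from_query(
         4, deptno, 'select dname, deptno from dept') as department
from   emp
order  by empno;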
Categories: Development

Security Big Data - Part 7 - a summary

Steve Jones - Thu, 2015-01-15 09:00
Over six parts I've gone through a bit of a journey on what Big Data Security is all about:

  • Securing Big Data is about layers
  • Use the power of Big Data to secure Big Data
  • How maths and machine learning helps
  • Why it's how you alert that matters
  • Why Information Security is part of Information Governance
  • Classifying Risk and the importance of Meta-Data

The fundamental point here is that
Categories: Fusion Middleware