
Feed aggregator

21st Century DBMS success and failure

Curt Monash - Mon, 2014-07-14 00:37

As part of my series on the keys to and likelihood of success, I outlined some examples from the DBMS industry. The list turned out too long for a single post, so I split it up by millennia. The part on 20th Century DBMS success and failure went up Friday; in this one I’ll cover more recent events, organized in line with the original overview post. Categories addressed will include analytic RDBMS (including data warehouse appliances), NoSQL/non-SQL short-request DBMS, MySQL, PostgreSQL, NewSQL and Hadoop.

DBMS rarely have trouble with the criterion “Is there an identifiable buying process?” If an enterprise is doing application development projects, a DBMS is generally chosen for each one. And so the organization will generally have a process in place for buying DBMS, or accepting them for free. Central IT, departments, and — at least in the case of free open source stuff — developers all commonly have the capacity for DBMS acquisition.

In particular, at many enterprises either departments have the ability to buy their own analytic technology, or else IT will willingly buy and administer things for a single department. This dynamic fueled much of the early rise of analytic RDBMS.

Buyer inertia is a greater concern.

  • A significant minority of enterprises are highly committed to their enterprise DBMS standards.
  • Another significant minority aren’t quite as committed, but set pretty high bars for new DBMS products to cross nonetheless.
  • FUD (Fear, Uncertainty and Doubt) about new DBMS is often justifiable, about stability and consistent performance alike.

A particularly complex version of this dynamic has played out in the market for analytic RDBMS/appliances.

  • First the newer products (from Netezza onwards) were sold to organizations who knew they wanted great performance or price/performance.
  • Then it became more about selling “business value” to organizations who needed more convincing about the benefits of great price/performance.
  • Then the behemoth vendors became more competitive, as Teradata introduced lower-price models, Oracle introduced Exadata, Sybase got more aggressive with Sybase IQ, IBM bought Netezza, EMC bought Greenplum, HP bought Vertica and so on. It is now hard for a non-behemoth analytic RDBMS vendor to make headway at large enterprise accounts.
  • Meanwhile, Hadoop has emerged as a serious competitor for at least some analytic data management, especially but not only at internet companies.

Otherwise I’d say: 

  • At large enterprises, their internet operations perhaps excepted:
    • Short-request/general-purpose SQL alternatives to the behemoths — e.g. MySQL, PostgreSQL, NewSQL — have had tremendous difficulty getting established. The last big success was the rise of Microsoft SQL Server in the 1990s. That’s why I haven’t mentioned the term mid-range DBMS in years.
    • NoSQL/non-SQL has penetrated large enterprises mainly for a few specific use cases, for example the lists I posted for MongoDB or graph databases.
  • Internet-only companies have few inertia issues when it comes to database managers. They’ll consider anything they regard as being in their price ballpark (which is however often restricted to open source). I think part of the reason is that as quickly as they rewrite their applications, DBMS are vastly less “strategic” to them than they are to most larger enterprises.
  • The internet operations of large companies — especially large retailers — in many cases behave like internet-only companies, but in many other cases behave like the rest of the enterprise.

The major reasons for DBMS categories to get established in the first place are:

  • Performance and/or scalability (many examples).
  • Developer features (for example dynamic schema).
  • License/maintenance cost (for example several open source categories).
  • Ease of installation and administration (for example open source again, and also data warehouse appliances).

Those same characteristics are major bases for competition among members of a new category, although as noted above behemoth-loyalty can also come into play.

Cool-vs.-weird tradeoffs are somewhat secondary among SQL DBMS.

  • There’s not much of a “cool” factor, because new products aren’t that different in what they do vs. older ones.
  • There’s not a terrible “weird” factor either, but of course any smaller offering faces FUD, and also …
  • … appliances are anti-strategic for many buyers, especially ones who demand a smooth path to the cloud.

They’re huge, however, in the non-SQL world. Most non-SQL data managers have a major “weird” factor. Fortunately, NoSQL and Hadoop both have huge “cool” cred to offset it. XML/XQuery unfortunately did not.

Finally, in most DBMS categories there are massive issues with product completeness, more in the area of maturity than that of whole product. The biggest whole product issues are concentrated on the matter of interoperating with other software — business intelligence tools, packaged applications (if relevant to the category), etc. Most notably, the handful of DBMS that are certified to run SAP share a huge market that other DBMS can’t touch. But BI tools are less of a differentiator — I yawn when vendors tell me they are certified for/partnered with MicroStrategy, Tableau, Pentaho and Jaspersoft, and I’m surprised at any product that isn’t.

DBMS maturity has a lot of aspects, but the toughest challenges are concentrated in two main areas:

  • Reliability, especially but not only in short-request use cases.
  • Performance across a great variety of use cases. I observe frequently that performance in best-case scenarios, performance in the lab and performance in real-world environments are much further apart than vendors like to think.

In particular:

  • Maturity demands seem to be much higher for SQL DBMS than for NoSQL.
    • I think this is one of several reasons NoSQL has been much more successful than NewSQL.
    • It’s why I think MarkLogic’s “Enterprise NoSQL” positioning is a mistake.
  • As for MySQL:
    • MySQL wasn’t close to reliable enough for enterprises to trust it until InnoDB became the default storage engine.
    • MySQL 5 point releases have added major features, or decent performance for major features. I’ll confess to having lost track of what’s been fixed and what’s still missing.
    • In saying all that I’m holding MySQL to a much higher maturity standard than I’m holding NoSQL — because that’s what I think enterprise customers do.
  • PostgreSQL “should” be doing a lot better than it is. I have an extremely low opinion of its promoters, and not just for personal reasons. (That said, the personal reasons don’t just apply to EnterpriseDB anymore. I’ve also run out of patience waiting for Josh Berkus to retract untruths he posted about me years ago.)
  • SAP HANA checks boxes for performance (In-memory rah rah rah!!) and whole product (Runs SAP!!). That puts it well ahead of most other newish SQL DBMS, purely analytic ones perhaps excepted.
  • Any other new short-request SQL DBMS that sounds like it has traction is also memory-centric.
  • Analytic RDBMS are in most respects held to lower maturity standards than DBMS used for write-intensive workloads. Even so, products in the category are still frequently tripped up by considerations of concurrent performance and mixed workload management.

Related links

There have been 1,470 previous posts in the 9-year history of this blog, many of which could serve as background material for this one. A couple that seem particularly germane and didn’t already get linked above are:

Categories: Other

Oracle encrypted table data found unencrypted in SGA

ContractOracle - Sun, 2014-07-13 21:29
When data needs to be kept private, or companies are worried about data leakage, they often choose to store that data in encrypted columns using Oracle Transparent Data Encryption. 

I wanted to see if that data was stored in the SGA in an unencrypted format.  I ran the following test from sqlplus.

CDB$ROOT@ORCL> create table credit_card_number(card_number char(16) encrypt);

Table created.

CDB$ROOT@ORCL> insert into credit_card_number values ('4321432143214321');

1 row created.

CDB$ROOT@ORCL> update credit_card_number set card_number = '5432543254325432' where card_number = '4321432143214321';

1 row updated.

CDB$ROOT@ORCL> VARIABLE cardnumber char(16);
CDB$ROOT@ORCL> EXEC :cardnumber := '6543654365436543';

PL/SQL procedure successfully completed.

CDB$ROOT@ORCL> update credit_card_number set card_number = :cardnumber where card_number = '5432543254325432';

1 row updated.

CDB$ROOT@ORCL> commit;
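
A quick sanity check before searching memory is to confirm from the data dictionary that the column really is defined as encrypted; a query along these lines should do it (output omitted):

CDB$ROOT@ORCL> select table_name, column_name, encryption_alg, salt
  2  from user_encrypted_columns;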

Now we search SGA for the data that should be encrypted to keep it private.  


[oracle@localhost shared_memory]$ ./sga_search 4321432143214321
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 4321432143214321
/dev/shm/ora_orcl_38895617_30 found string at 459100
4321432143214321
/dev/shm/ora_orcl_38895617_30 found string at 3244704
4321432143214321
/dev/shm/ora_orcl_38895617_29 found string at 2529984
4321432143214321
[oracle@localhost shared_memory]$ ./sga_search 5432543254325432
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 5432543254325432
/dev/shm/ora_orcl_38895617_30 found string at 459061
5432543254325432
/dev/shm/ora_orcl_38895617_30 found string at 4106466
5432543254325432
/dev/shm/ora_orcl_38895617_29 found string at 2075064
5432543254325432
/dev/shm/ora_orcl_38895617_29 found string at 2528552
5432543254325432
/dev/shm/ora_orcl_38895617_28 found string at 1549533
5432543254325432
[oracle@localhost shared_memory]$ ./sga_search 6543654365436543
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 6543654365436543
/dev/shm/ora_orcl_38895617_29 found string at 3801400
6543654365436543

The output shows that all 3 of the card_number values used in the demonstration can be found in SGA, sometimes in multiple locations.  Flushing the buffer cache did not clear the data from SGA, but flushing the shared pool did.  Further analysis is needed to determine exactly where in the shared pool the unencrypted data is being stored - whether it is in SQL statements, SQL variables, or interim values kept by the encryption process.  Further testing is also needed to see if it is possible to avoid potential data leakage by using bind variables or wrapping SQL in PL/SQL.  In the meantime ... be aware that data you believe to be encrypted may actually be stored in memory in clear text, visible to anyone with privileges to connect to the SGA.
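
One follow-up test along the lines suggested above (a sketch only; the variable name and generated value are illustrative, not from the original test) is to update the encrypted column through a PL/SQL variable, building the new value at runtime so it never appears as a literal in the statement text, and then re-run sga_search for that value:

DECLARE
  l_card char(16);
BEGIN
  -- Build a 16-digit test value at runtime so it is not present as a
  -- literal in the shared-pool copy of this block's source text.
  l_card := translate('abcdabcdabcdabcd', 'abcd', '7654');
  -- The PL/SQL variable is passed to the SQL engine as a bind variable.
  update credit_card_number
     set card_number = l_card
   where card_number = '6543654365436543';
  commit;
END;
/

Afterwards, search the SGA for the generated value with sga_search as above.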

Oracle TDE FAQ  :- http://www.oracle.com/technetwork/database/security/tde-faq-093689.html
States that "With TDE column encryption, encrypted data remains encrypted inside the SGA, but with TDE tablespace encryption, data is already decrypted in the SGA, which provides 100% transparency."
Categories: DBA Blogs

Partial Join Evaluation in Oracle 12c

Yann Neuhaus - Sun, 2014-07-13 21:21

Do you think that it's better to write semi-join SQL statements with IN(), EXISTS(), or to do a JOIN? Usually, the optimizer will evaluate the cost and do the transformation for you. And in this area, one more transformation has been introduced in 12c which is the Partial Join Evaluation (PJE).

First, let's have a look at the 11g behaviour. For that example, I use the SCOTT schema, but I hire a lot more employees in department 40:

 

SQL> alter table EMP modify empno number(10);
Table altered.
SQL> insert into EMP(empno,deptno) select rownum+10000,40 from EMP,(select * from dual connect by level <= 1000);

 

Why department 40? I'll explain it below, but I'll let you think about it first. In the default SCOTT schema, there is a department 40 in the DEPT table, but it has no employees in EMP. And the new transformation is not useful in that case.

 

11g behaviour

Now, I'm running the following query to check all the departments that have at least one employee:

I can write it with IN:

 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID  6y71msam9w32r, child number 0
-------------------------------------
select distinct deptno,dname from dept 
 where deptno in ( select deptno from emp)

Plan hash value: 1754319153

------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows | Buffers |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      4 |      15 |
|*  1 |  HASH JOIN SEMI    |      |      1 |      4 |      4 |      15 |
|   2 |   TABLE ACCESS FULL| DEPT |      1 |      4 |      4 |       7 |
|   3 |   TABLE ACCESS FULL| EMP  |      1 |  15068 |    388 |       8 |
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("DEPTNO"="DEPTNO")

 

or with EXISTS:

 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID  cbpa3zjtzfzrn, child number 0
-------------------------------------
select distinct deptno,dname from dept 
 where exists ( select 1 from emp where emp.deptno=dept.deptno)

Plan hash value: 1754319153

------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows | Buffers |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      4 |      15 |
|*  1 |  HASH JOIN SEMI    |      |      1 |      4 |      4 |      15 |
|   2 |   TABLE ACCESS FULL| DEPT |      1 |      4 |      4 |       7 |
|   3 |   TABLE ACCESS FULL| EMP  |      1 |  15068 |    388 |       8 |
------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("DEPTNO"="DEPTNO")

 

Both are good. We didn't have to read the whole EMP table. I have 15000 rows in my table, I do a full scan on it, but look at the A-Rows: only 388 rows were actually read.

The HASH JOIN first read the DEPT table in order to build the hash table. So it already knows that we cannot have more than 4 distinct departments.

Then we do the join to EMP just to check which of those departments have an employee. But we can stop as soon as we find the 4 departments. This is the reason why we have read only 388 rows here. And this is exactly what a Semi Join is: we don't need all the matching rows, we return at most one row per matching pair.

Ok. What if we write the join ourselves?

 

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID  2xjj9jybqja87, child number 1
-------------------------------------
select distinct deptno,dname from dept join emp using(deptno)

Plan hash value: 2962452962

-------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows | Buffers |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      4 |     129 |
|   1 |  HASH UNIQUE        |      |      1 |  15068 |      4 |     129 |
|*  2 |   HASH JOIN         |      |      1 |  15068 |  14014 |     129 |
|   3 |    TABLE ACCESS FULL| DEPT |      1 |      4 |      4 |       7 |
|   4 |    TABLE ACCESS FULL| EMP  |      1 |  15068 |  14014 |     122 |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")

 

Bad luck. We have to read all the rows. More rows and more buffers.

 

12c behaviour

Let's do the same in 12.1.0.1:

 

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID  2xjj9jybqja87, child number 0
-------------------------------------
select distinct deptno,dname from dept join emp using(deptno)

Plan hash value: 1629510749

-------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows | Buffers |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      4 |      14 |
|   1 |  HASH UNIQUE        |      |      1 |      4 |      4 |      14 |
|*  2 |   HASH JOIN SEMI    |      |      1 |      4 |      4 |      14 |
|   3 |    TABLE ACCESS FULL| DEPT |      1 |      4 |      4 |       7 |
|   4 |    TABLE ACCESS FULL| EMP  |      1 |  15068 |    388 |       7 |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")

 

Same plan, but fewer rows have been read. If we check the outlines, we see the new feature:

 

      PARTIAL_JOIN(@"SEL$58A6D7F6" "EMP"@"SEL$1")

 

And here is what we see in the optimizer trace:

 

OPTIMIZER STATISTICS AND COMPUTATIONS
PJE: Checking validity of partial join eval on query block SEL$58A6D7F6 (#1)
PJE: Passed validity of partial join eval by query block SEL$58A6D7F6 (#1)
PJE: Partial join eval conversion for query block SEL$58A6D7F6 (#1).
PJE: Table marked for partial join eval: EMP[EMP]#1

 

The hints that control the feature are PARTIAL_JOIN and NO_PARTIAL_JOIN, and it is enabled by the _optimizer_partial_join_eval parameter, which appeared in 12c.

But of course, the optimization is useful only when we have all the values at the beginning of the table. This is why I added at least one employee in department 40. If there are some rows in DEPT that have no matching row in EMP, then Oracle cannot know the result before reaching the end of the table.
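
To experiment with the transformation, the hints above can be applied at statement level, or the feature can be switched off for a session. Here is a sketch (the short single-argument hint syntax and the session-level use of the hidden parameter are assumptions to verify on a test system):

select /*+ NO_PARTIAL_JOIN(emp) */ distinct deptno,dname
from dept join emp using(deptno);

alter session set "_optimizer_partial_join_eval" = false;

select distinct deptno,dname from dept join emp using(deptno);

Both should fall back to the 11g plan (plain HASH JOIN plus HASH UNIQUE, reading all the EMP rows).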

Oracle encryption wallet password found in SGA

ContractOracle - Sun, 2014-07-13 20:51
If companies are worried about data privacy or leakage, they are often recommended to encrypt sensitive data inside Oracle databases to stop DBAs from accessing it, and implement "separation of duties" so that only the application or data owner has the encryption keys or wallet password.  One method to encrypt data is to use Oracle Transparent Database Encryption which stores keys in the Oracle wallet protected by a wallet password.  Best practice dictates using a very long wallet password to avoid rainbow tables and brute force attacks, and keep the key and password secret.

I wrote a simple program to search for data in Oracle shared memory segments, and it was able to find the Oracle wallet password, which means anyone who can connect to the shared memory can get the wallet password and access the encrypted data.  The following demonstrates this :-

First open and close the wallet using the password :-


CDB$ROOT@ORCL> alter system set encryption wallet open identified by "verylongverysecretwalletpassword1";

System altered.

CDB$ROOT@ORCL> alter system set wallet close identified by "verylongverysecretwalletpassword1";

System altered.


Now search for the wallet password in SGA :-
[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring

Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1
The search found the password in SGA, so it should be possible to analyse the memory structure that currently stores the known password, and create another program to directly extract passwords on unknown systems.  It may also be possible to find the password by selecting from v$ or x$ tables.  I have not done that analysis, so I don't know how difficult it would be, but if the password is stored, it will be possible to extract it - and even if it is mixed up with a lot of other SQL text and variables, it would be very simple to just try opening the wallet using every string stored in SGA.
The password is still in SGA after flushing the buffer cache.
CDB$ROOT@ORCL> alter system flush buffer_cache;
System altered.

[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring

Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1

After flushing the shared pool the password is no longer available.  
CDB$ROOT@ORCL> alter system flush shared_pool;
System altered.

[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring

Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
[oracle@localhost shared_memory]$
As this password really should be secret, Oracle should not store it in the SGA.  More research is needed to confirm whether the password can be hidden by using bind variables, obfuscation, or wrapping it in PL/SQL.
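
Given the observation above that flushing the shared pool removed the password, one heavy-handed interim mitigation (a sketch, not a recommendation - it throws away all cached cursors and will hurt performance) would be to flush the shared pool immediately after any wallet operation:

alter system set encryption wallet open identified by "verylongverysecretwalletpassword1";
alter system flush shared_pool;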
Categories: DBA Blogs

Master Data Services installation for SQL Server 2012

Yann Neuhaus - Sun, 2014-07-13 20:32

This posting is a tutorial for installing Master Data Services on your Windows Server 2012. Microsoft SQL Server Master Data Services (MDS) is a Master Data Management product from Microsoft, code-named Bulldog. It is the rebranding of the Stratature MDM product, titled +EDM and acquired in June 2007 by Microsoft. It was first integrated into Microsoft SQL Server 2008 R2 as an additional installer, but since SQL Server 2012, Master Data Services has been integrated as a feature within the SQL Server installer.

 

Introduction

Master Data Services is part of the Enterprise Information Management (EIM) technologies provided by Microsoft for managing information in an enterprise.

EIM technologies include:

  • Integration Services
  • Master Data Services
  • Data Quality Services

 

Components

Master Data Services includes the following main components:

  • MDS Configuration Manager tool: used to configure Master Data Services
  • MDS Data Manager Web Application: used essentially to perform administrative tasks
  • MDS Web Service: used to extend or develop custom solutions
  • MDS Add-in for Excel: used to manage data, create new entities or attributes …

 

SQL Server Editions & Versions

Master Data Services can be installed only with the following SQL Server versions and editions:

  • SQL Server 2008 R2: Datacenter or Enterprise editions
  • SQL Server 2012 or SQL Server 2014: Enterprise or Business Intelligence editions

 

Master Data Services prerequisites in SQL Server 2012

First, Master Data Services relies on a web application named Master Data Manager, used for example to perform administrative tasks. This web application is hosted by Internet Information Services (IIS), so IIS is a necessary prerequisite.

Furthermore, to be able to display the content from the web application, you need Internet Explorer 7 or later (Internet Explorer 6 is not supported) with Silverlight 5.

Moreover, if you plan to use Excel with Master Data Services, you also need to install the Visual Studio 2010 Tools for Office Runtime, plus the Master Data Services Add-in for Microsoft Excel.

Finally, and often forgotten, PowerShell 2.0 is required for Master Data Services.

Let’s summarize the requirements for Master Data Services:

  • Internet Information Services (IIS)
  • Internet Explorer 7 or later
  • Silverlight 5
  •  PowerShell 2.0
  • Visual Studio 2010 Tools for Office Runtime and the Master Data Services Add-in for Microsoft Excel (only if you plan to use Excel with Master Data Services).

 

Configuration at the Windows Server level

In the Server Manager, you have to activate the Web Server (IIS) server role to be able to host the Master Data Manager Web Application, as well as the .NET Framework 3.5 feature.

For the Server Roles, you have to select:

  • Web Server (IIS)

For the Server Features, you have to select:

- .NET Framework 3.5 Features
  - .NET Framework 3.5
  - HTTP Activation
- .NET Framework 4.5 features
  - .NET Framework 4.5
  - ASP.NET 4.5
  - WCF Services
    - HTTP Activation
    - TCP Port Sharing

 

MDS_features_selection_2.png

 

For the IIS features selection, you have to select:

- Web Server
  - Common HTTP Features
    - Default Document
    - Directory Browsing
    - HTTP Errors
    - Static Content
  - Health and Diagnostics
    - HTTP Logging
    - Request Monitor
  - Performance
    - Static Content Compression
  - Security
    - Request Filtering
    - Windows Authentication
  - Application Development
    - .NET Extensibility
    - .NET Extensibility 4.5
    - ASP.NET 3.5
    - ASP.NET 4.5
    - ISAPI Extensions
    - ISAPI Filters
  - Management Tools
    - IIS Management Console

 

MDS_features_selection.png

MDS_features_selection_3.png 

 

Installation of SQL Server 2012

Master Data Services stores its data in a SQL Server database, so you need the SQL Server Database Engine installed.

Of course, the Database Engine can be installed on a different Windows Server. In that case, the Windows Server with Master Data Services installed acts as a front-end server.

Then, in order to personalize the roles of your Master Data Services, you also need to install Management Tools.

At the features installation step, you have to select:

  • Database Engine Services
  • Management Tools
  • Master Data Services

 

Conclusion

At this point, Master Data Services should be installed with all the needed prerequisites.


MDS_confirm_prerequisites.png

 

However, Master Data Services cannot be used without configuring it. Three main steps need to be performed through the MDS Configuration Manager:

  • First, you have to create an MDS database
  • Then, you have to create an MDS web application hosted in IIS
  • Finally, you have to link the MDS database with the MDS web application

Deferrable RI – 2

Jonathan Lewis - Sun, 2014-07-13 12:46

A question came up on Oracle-L recently about possible locking anomalies with deferrable referential integrity constraints.

An update by primary key is taking a long time; the update sets several columns, one of which is the child end of a referential integrity constraint. A check on v$active_session_history shows lots of waits for “enq: TX – row lock contention” in mode 4 (share), and many of these waits also identify the current object as the index that has been created to avoid the “foreign key locking” problem on this constraint (though many of the waits show the current_obj# as -1). A possible key feature of the issue is that the foreign key constraint is defined as “deferrable initially deferred”. The question is: could such a constraint result in TX/4 waits?

My initial thought was that if the constraint was deferrable this was unlikely; there would have to be other features coming into play.

Of course, when the foreign key is NOT deferrable it’s easy to set up cases where a TX/4 appears: for example, you insert a new parent value without issuing a commit, then I insert a new matching child; at that point my session will wait for your session to commit or rollback. If you commit, my insert succeeds; if you rollback, my session raises an error (ORA-02291: integrity constraint (schema_name.constraint_name) violated – parent key not found). But if the foreign key check is deferred, the non-existence (or potential existence) of the parent shouldn’t matter until commit time. If the constraint is deferrable, then, the first guess would be that you could get away with things like this so long as you fixed up the data in time for the commit.
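
For reference, that easy case looks like this (a sketch, assuming a parent/child pair like the one created in the script below but with the foreign key declared without the deferrable clause; the values are illustrative):

-- Session 1: new parent, not yet committed
insert into parent values (5,'Brown');

-- Session 2: matching child; with a non-deferrable foreign key this insert
-- waits on "enq: TX - row lock contention" (mode 4) until session 1 commits
-- (the insert succeeds) or rolls back (ORA-02291 is raised).
insert into child values (5,1,'Billy');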

I was wrong. Here’s a little example:


create table parent (
	id	number(4),
	name	varchar2(10),
	constraint par_pk primary key (id)
)
;

create table child(
	id_p	number(4)
		constraint chi_fk_par
		references parent
		deferrable initially deferred,
	id	number(4),
	name	varchar2(10),
	constraint chi_pk primary key (id_p, id)
)
;

insert into parent values (1,'Smith');
insert into parent values (2,'Jones');

insert into child values(1,1,'Simon');
insert into child values(1,2,'Sally');

insert into child values(2,1,'Jack');
insert into child values(2,2,'Jill');

commit;

begin
	dbms_stats.gather_table_stats(user,'parent');
	dbms_stats.gather_table_stats(user,'child');
end;
/

pause Press return

update child set id_p = 3 where id_p = 2 and id = 2;

If you don’t do anything in another session during the pause, the update will succeed – but it will fail on the subsequent commit unless you insert parent 3 before committing. But if you take advantage of the pause to use another session to insert parent 3 first, the update will then hang waiting for the parent insert to commit or rollback – and what happens next may surprise you. Basically the deferrability doesn’t protect you from the side effects of conflicting transactions.
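
Spelled out, the conflicting sequence is (a sketch; the parent name is illustrative):

-- Session 2, during the pause: new parent, not yet committed
insert into parent values (3,'Brown');

-- Session 1 then runs the final update of the script:
update child set id_p = 3 where id_p = 2 and id = 2;
-- ... and hangs on "enq: TX - row lock contention" (mode 4) until session 2
-- commits or rolls back, even though the constraint check itself is deferred.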

The variations on what can happen next (insert the parent elsewhere, commit or rollback) are interesting and left as an exercise.

I was slightly surprised to find that I had had a conversation about this sort of thing some time ago, triggered by a comment to an earlier post. If you want to read a more thorough investigation of the things that can happen and how deferrable RI works then there’s a good article at this URL.

 


RAC Commands : 1 -- Viewing Configuration

Hemant K Chitale - Sun, 2014-07-13 05:58
In 11gR2

Viewing the configuration of a RAC database

[root@node1 ~]# su - oracle
-sh-3.2$ srvctl config database -d RACDB
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/rdbms/11.2.0
Oracle user: oracle
Spfile: +DATA1/RACDB/spfileRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RACSP
Database instances:
Disk Groups: DATA1,FRA,DATA2
Mount point paths:
Services: MY_RAC_SVC
Type: RAC
Database is policy managed
-sh-3.2$

So, we see that :
a) The database name is RACDB
b) It is a Policy Managed database (not Administrator Managed)
c) It is dependent on 3 ASM Disk Groups DATA1, DATA2, FRA
d) There is one service called MY_RAC_SVC configured
e) The database is in the  RACSP server pool
f) The database is configured to be Auto-started when Grid Infrastructure starts


Viewing the configuration of a RAC service

-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$

So, we see that :
a) The service name is MY_RAC_SVC
b) The UNIFORM cardinality means that it is to run on all active nodes in the server pool
c) The server-side connection load balancing goal is LONG (for long running sessions)


Viewing the configuration of Server Pools

-sh-3.2$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Candidate server names:
Server pool name: RACSP
Importance: 0, Min: 0, Max: 2
Candidate server names:
-sh-3.2$

So we see that :
a) The RACSP server pool is the only created (named) server pool
b) This server pool has a max of 2 nodes
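
As a complementary check from inside the database (a sketch, assuming the instances are up and you are connected to the database), the same placement and service information can be confirmed with SQL:

select inst_id, instance_name, host_name from gv$instance;

select name from gv$active_services where name = 'MY_RAC_SVC';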

Categories: DBA Blogs

A response to Bloomberg article on UCLA student fees

Michael Feldstein - Sat, 2014-07-12 13:56

Megan McArdle has an article that was published in Bloomberg this week about the growth of student fees. The setup of the article was based on a new “$4 student fee to pay for better concerts”.

To solve this problem, UCLA is introducing a $4 student fee to pay for better concerts. That illuminates a budgeting issue in higher education — and indeed among human beings more generally.

That $4 is not a large fee. Even the poorest student can probably afford it. On the other hand, collectively, UCLA’s student fees are significant: more than $3,500, or about a quarter of the mandatory cost of attending UCLA for a year.

Those fees are made up of many items, each trivial individually. Only collectively do they become a major source of costs for students and their families and potentially a barrier to college access for students who don’t have an extra $3,500 lying around.

I’m sympathetic to the argument that college often costs too much and that institutions can play revenue games to avoid the appearance of raising tuition. I also think that Megan is one of the better national journalists on the topic of the higher education finances.

UCLA Fees

However, this article is somewhat sloppy in a way that harms the overall message. I would like to clarify the student fees data to help show the broader point.

Let’s look at the actual data from UCLA’s web site. I assume that Megan is basing this analysis on in-state undergraduate full-time students. The data is listed per quarter, and UCLA has three quarters in a full academic year. I have summarized it below, summing the three quarters into yearly data, and you can:

  • Hover over each measure to see the fee description from UCLA’s fee description page;
  • Click on each category that I added to see the component fees;
  • Sort either column; and
  • Choose which rows to keep or exclude.

UCLA Fees for In-State Undergrads (Total $3,749.97)

Some Clarifications Needed
  • The total of non-tuition fees is $3,750 per year, not $3,500; however, Megan is right that this represents “about a quarter of the mandatory cost of attending UCLA for a year” ($3,750 out of $14,970).
  • The largest single fee is the UC health insurance fee (UC-SHIP), which is more than half of the total non-tuition fees. This fact (noted by Michael Berman on Twitter) should have been pointed out, given the significant percentage of the total.
  • With the UC-SHIP at $1,938 and the student services fee at $972, I hardly consider these as “trivial individually”.
Broader Point on Budgeting

The article’s broader point is that using extraneous fees to create additional revenue leads to a flawed budgeting process.

As I’ve written before, this is a common phenomenon that you see among people who have gotten themselves into financial trouble — or, for that matter, people who are doing OK but complain that they don’t know where the money goes and can’t save for the big-ticket items they want. They consider each purchase individually, rather than in the context of a global budget, which means that they don’t make trade-offs. Instead of asking themselves “Is this what I want to spend my limited funds on, or would I rather have something else?” they ask “Can I afford this purchase on my income?” And the answer is often “Yes, I can.” The problem is that you can’t afford that purchase and the other 15 things that you can also, one by one, afford to buy on your income. This is how individual financial disasters occur, and it is also one way that college tuition is becoming a financial disaster for many families.

This point is very important. Look at the Wooden Center fee, described here (or by hovering over chart):

Covers repayment of the construction bond plus the ongoing maintenance and utilities costs for the John Wooden Recreation Center. It was approved by student referendum. The fee is increased periodically based on the Consumer Price Index.

To take Megan’s point, this fee “was approved by student referendum”, which means that UCLA has moved budgeting responsibility away from a holistic approach to saying “the students voted on it”. This makes no financial sense, nor does it make sense to shift bond repayment and maintenance and utilities cost onto student fees.

While this article had some sloppy reporting in terms of accurately describing the student fees, it does highlight an important aspect of the budget problems in higher education and how the default method is to shift the costs to students.

The post A response to Bloomberg article on UCLA student fees appeared first on e-Literate.

Downloading VirtualBox VM “Oracle Enterprise Manager 12cR4″

Marco Gralike - Sat, 2014-07-12 11:10
Strangely enough, those cool VirtualBox VM downloads are nowadays a bit scattered across different Oracle places on http://otn.oracle.com and elsewhere. So in all that...

Read More

ADF 12c (12.1.3) Line Chart Overview Feature

Andrejus Baranovski - Sat, 2014-07-12 10:51
ADF 12c (12.1.3) is shipped with completely rewritten DVT components - there are no graphs anymore, they are called charts now. But there is much more to it than just a name change. Previous DVT components still run fine, but the JDeveloper wizards no longer support them. You should check the ADF 12c (12.1.3) developer guide for more details; in this post I will focus on the line chart overview feature. Keep in mind that the new DVT chart components do not work well with the Google Chrome v.35 browser (supposed to be fixed in Google Chrome v.36) - check the JDeveloper 12c (12.1.3) release notes.

The sample application - ADF12DvtApp.zip - is based on Employees data and displays a line chart of employee salary by job. Two additional lines are displayed for the maximum and minimum job salaries:


The line chart is configured with zooming and overview support. The user can change the overview window and zoom into a specific area:


This helps a lot when analysing charts with a large number of data points on the X axis. The user can zoom into peaks and analyse a data range:


One important hint about the new DVT charts - components should stretch automatically. Keep in mind that the parent component (surrounding the DVT chart) should be stretchable. As you can see, I have set type = 'stretch' for the panel box surrounding the line chart:


Previous DVT graphs had special binding elements in the Page Definition; new DVT charts use regular table bindings - nothing extra:


The line chart in the sample application is configured with zooming and scrolling (different modes are available - live, or on demand with delay):


The overview feature is quite simple to enable - it is enough to add the DVT overview tag to the line chart, and it works:

R12.2 :Modulus Check Validations for Bank Accounts

OracleApps Epicenter - Sat, 2014-07-12 08:45
The existing bank account number validations for domestic banks only check the length of the bank account number. These validations are performed during the entry and update of bank accounts. With R12.2, the account number validations for the United Kingdom are enhanced to include a modulus check alongside the length checks. Modulus checking is the process of [...]
Categories: APPS Blogs

ORA-09925: Unable to create audit trail file

Oracle in Action - Sat, 2014-07-12 03:33


I received this error message when I started my virtual machine and tried to log on to my database as sysdba to start up the instance.
[oracle@node1 ~]$ sqlplus / as sysdba

ERROR:
ORA-09925: Unable to create audit trail file
Linux Error: 30: Read-only file system
Additional information: 9925
ORA-09925: Unable to create audit trail file
Linux Error: 30: Read-only file system
Additional information: 9925
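
For context: ORA-09925 means the instance could not create an OS audit file, and the accompanying Linux error shows the real culprit here was a read-only filesystem. Once the database is reachable again, it is worth confirming which directory the audit files go to (a sketch):

SQL> -- the directory the instance writes its audit trail files to
SQL> show parameter audit_file_dest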

- I rebooted my machine and got the following messages, which pointed to errors encountered during the filesystem check and instructed me to run fsck manually.

[root@node1 ~]# init 6

Checking filesystems

/: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
*** An error occurred during the filesystem check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
Give root password for maintenance
(or type Control-D to continue):

– I entered the root password to initiate the filesystem check. As a result, I was prompted multiple times to allow fixing of various filesystem errors.

(Repair filesystem) 1 # fsck
Fix(y)?

- After all the errors had been fixed, filesystem check was restarted

Restarting e2fsck from the beginning...

/: ***** FILE SYSTEM WAS MODIFIED *****
/: ***** REBOOT LINUX *****

- After the filesystem had finally been checked as clean, I exited so that the reboot could continue.

(Repair filesystem) 2 # exit

– After the reboot, I could successfully connect to my database as sysdba .

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Jul 12 09:21:52 2014

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL>

I hope this post was useful.

Your comments and suggestions are always welcome.


The post ORA-09925: Unable to create audit trail file appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Swimming Progress

Tim Hall - Sat, 2014-07-12 02:59

While I was at BGOUG I went for swim each morning before the conference. That got me to thinking, perhaps I should start swimming again…

It’s been 4 weeks since I got back from the conference and I’ve been swimming every morning. It was a bit of a struggle at first. I think it took me 2-3 days to work up to a mile (1600M – about 9M short of a real mile). Since then I’ve been doing a mile each day and it’s going pretty well.

I’m pretty much an upper body swimmer at the moment. I kick my legs just enough to keep them from sinking, but don’t really generate any forward thrust with them. At this point I’m concentrating on my upper body form. When I think about it, my form is pretty good. When I get distracted, like when I am having to pass people, it breaks down a little. I guess you could say I am in a state of “conscious competence”. Over the next few weeks this should set in a bit and I can start working on some other stuff. It’s pointless to care too much about speed at this point because if my form breaks down I end up having a faster arm turnover, but use more effort and actually swim slower. The mantra is form, form, form!

Breathing is surprisingly good. I spent years as a left side breather (every 4th stroke). During my last bout of swimming (2003-2008) I forced myself to switch to bilateral breathing, but still felt the left side was more natural. Having had a 6 year break, I’ve come back and both sides feel about the same. If anything, I would say my right side technique is slightly better than my left. Occasionally I will throw in a length of left-only or right-only (every 4th stroke) breathing for the hell of it, but at the moment every 3rd stroke is about the best option for me. As I get fitter I will start playing with things like every 5th stroke and lengths of no breathing just to add a bit of variety.

Turns are generally going pretty well. Most of the time I’m fine. About 1 in 20 I judge the distance wrong and end up having a really flimsy push off. I’m sure my judgement will improve over time.

At this point I’m taking about 33 minutes to complete a mile. The world record for 1500M short course (25M pool) is 14:10. My first goal is to get my 1600M time down to double the 1500M world record. Taking 5 minutes off my time seems like quite a big challenge, but I’m sure as I bring my legs into play and my technique improves my speed will increase significantly.

As I get more into the swing of things I will probably incorporate a bit of interval training, like a sprint length, followed by 2-3 at a more sedate pace. That should improve my fitness quite a lot and hopefully improve my speed.

For a bit of fun I’ve added a couple of lengths of butterfly after I finish my main swim. I used to be quite good at butterfly, but at the moment I’m guessing the life guards think I’m having a fit. It would be nice to be able to bang out a few lengths of that and not feel like I was dying. :)

I don’t do breaststroke any more, as it’s not good for my hips. Doing backstroke in a pool with other people in the lane sucks, so I can’t be bothered with that. Maybe on days when the pool is quieter I will work on it a bit, but for now the main focus is crawl.

Cheers

Tim…

PS. I reserve the right to get bored, give up and eat cake instead at any time… :)


Finished first pass through Alapati 12c OCP upgrade book

Bobby Durrett's DBA Blog - Fri, 2014-07-11 17:29

I just finished reading Sam Alapati’s 12c OCP upgrade book for the first time and I really like it because it covered content that I hadn’t discovered through my study of the Oracle manuals.  Also, it did a good job explaining some things that Oracle’s manuals left unclear.

After reading each chapter I took the end-of-chapter test and got between 60% and 75% of the questions right.  Next I plan to take the computer-based test that came on the CD with the book, which covers both parts of the upgrade exam.

I did find minor errors throughout the book, but I still found it very useful, especially after having already studied the same topics on my own without a study guide like this one to direct me.  The author’s insights into the test and the material it covers add value because they guide me to the areas that I need to focus on.

- Bobby

Categories: DBA Blogs

OTN Latin America Tour, 2014

Hans Forbrich - Fri, 2014-07-11 17:12
The dates, and the speakers, for the Latin America Tour have been announced.

http://www.oracle.com/technetwork/es/community/user-groups/otn-latinoamerica-tour-2014-2213115-esa.html


Categories: DBA Blogs

EMC XtremIO – The Full-Featured All-Flash Array. Interested In Oracle Performance? See The Whitepaper.

Kevin Closson - Fri, 2014-07-11 16:32

NOTE: There’s a link to the full article at the end of this post.

I recently submitted a manuscript to the EMC XtremIO Business Unit covering some compelling lab results from testing I concluded earlier this year. I hope you’ll find the paper interesting.

There is a link to the full paper at the bottom of this blog post. I’ve pasted the executive summary here:

Executive Summary

Physical I/O patterns generated by Oracle Database workloads are well understood. The predictable nature of these I/O characteristics has historically enabled platform vendors to implement widely varying I/O acceleration technologies including prefetching, coalescing transfers, tiering, caching and even I/O elimination. However, the key presumption central to all of these acceleration technologies is that there is an identifiable active data set. While it is true that Oracle Database workloads generally settle on an active data set, the active data set for a workload is seldom static—it tends to move based on easily understood factors such as data aging or business workflow (e.g., “month-end processing”) and even the data source itself. Identifying the current active data set and keeping up with movement of the active data set is complex and time consuming due to variability in workloads, workload types, and number of workloads. Storage administrators constantly chase the performance hotspots caused by the active dataset.

All-Flash Arrays (AFAs) can completely eliminate the need to identify the active dataset because of the ability of flash to service any part of a larger data set equally. But not all AFAs are created equal.

Even though numerous AFAs have come to market, obtaining the best performance required by databases is challenging. The challenge isn’t just limited to performance. Modern storage arrays offer a wide variety of features such as deduplication, snapshots, clones, thin provisioning, and replication. These features are built on top of the underlying disk management engine, and are based on the same rules and limitations favoring sequential I/O. Simply substituting flash for hard drives won’t break these features, but neither will it enhance them.

EMC has developed a new class of enterprise data storage system, the XtremIO flash array, which is based entirely on flash media. XtremIO’s approach was not simply to substitute flash in an existing storage controller design or software stack, but rather to engineer an entirely new array from the ground up to unlock flash’s full performance potential and deliver array-based capabilities that are unprecedented in the context of current storage systems.

This paper will help the reader understand Oracle Database performance bottlenecks and how XtremIO AFAs can help address such bottlenecks with its unique capability to deal with constant variance in the IO profile and load levels. We demonstrate that it takes a highly flash-optimized architecture to ensure the best Oracle Database user experience. Please read more:  Link to full paper from emc.com.


Filed under: All Flash Array, Flash Storage for Databases, oracle, Oracle I/O Performance, Oracle performance, Oracle Performnce Monitoring, Oracle SAN Topics, Oracle Storage Related Problems

Best of OTN - Week of July 6th

OTN TechBlog - Fri, 2014-07-11 11:13

Virtual Technology Summit - Content is now OnDemand!

In this four track virtual event attendees had the opportunity to learn firsthand from Oracle ACEs, Java Champions, and Oracle product experts, as they shared their insight and expertise on Java, systems, database and middleware. A replay of the sessions is now available for your viewing.

Architect Community

In addition to interviews with tech experts and community leaders, the OTN ArchBeat YouTube Channel also features technical videos, most pulled from various OTN online technical events. The following are the three most popular of those tech videos for the past seven days.

Debugging and Logging for Oracle ADF Applications
We're only human. Regardless of how much work Oracle ADF does for us, or how powerful the JDeveloper IDE is, the inescapable truth is that as developers we will still make mistakes and introduce bugs into our ADF applications. In this video Oracle ADF Product Manager Chris Muir explores the sophisticated debugging tooling JDeveloper provides.

Developer Preview: Oracle WebLogic 12.1.3
Oracle WebLogic 12.1.3 includes some exciting developer-centric enhancements. In this video Steve Button focuses on some of the more interesting updates around Java EE 7 features and examines how they will affect your development process.

Best Practices in Oracle ADF Development
In this video Frank Nimphius presents a brown-bag of ideas, hints and best practices that will help you to build better ADF applications.

Friday Funny
"I always wanted to be somebody, but now I realize I should have been more specific." - Lily Tomlin

Java Community 

Codename One & Java Code Geeks are giving away free JavaOne Tickets (worth $3,300)! Read More!

@Java RT @JDeveloper: Running Oracle ADF application High availability (HA)

Tech Article: Leap Motion and JavaFX

Database Community

OTN DBA/DEV Watercooler Blog - Database Application Development VM--Get It Now

Oracle DB Dev FaceBook Posts -

Systems Community

New Tech Article - Playing with ZFS Shadow Migration

New - Hangout: Which Virtualization Should I Use for What? with Brian Bream


Oracle-PeopleSoft is pleased to announce the general availability of PeopleTools 8.54

PeopleSoft Technology Blog - Fri, 2014-07-11 10:51
PeopleTools is proud to announce the release of PeopleTools 8.54.  This is a landmark release for PeopleSoft, one that offers remarkable advances to our applications and our customers.  We are particularly excited about the new PeopleSoft Fluid User Experience.  With this, our applications will offer a UI that is simple and intuitive, yet highly productive and that can be used on different devices from laptops to tablets and smart phones. 
We’ve also made important improvements in reporting and analytics, life-cycle management, security, integration technology, platforms and infrastructure, and accessibility.

To get the details about everything this wonderful new release has to offer, visit these sites:

  • Release Notes
  • Release Value Proposition
  • Cumulative Feature Overview Tool
  • Installation Guides
  • Certification Table
  • Browser Compatibility Guide
  • Licensing Notes

Today, PeopleTools 8.54 is Generally Available for new installations.  Customers that want to upgrade to 8.54 from earlier releases will be able to upgrade in the near future when the 02 patch is available.
Many of our customers have shown interest in Fluid and have asked us the best way to get productive quickly.  Our answer is to use the working examples they will find in the upcoming PSFT 9.2 application images.

E-Business Suite Applications Technology Group (ATG) - WebCast

Chris Warticki - Fri, 2014-07-11 10:20

Thursday July 17, 2014 at 18:00 UK / 10:00 PST / 11:00 MST / 13:00 EST

EBS REPORTS & PRINTING TROUBLESHOOTING

  • Analyzer: E-Business Reports & Printing
  • E-Business Reports Analysis
  • Recommended Reports Patching
  • Reports Profile Options
  • E-Business Printing Analysis
  • Recommended Printing Patching
  • Printer Profile Options
  • Best Practices

Details & Registration : Note 1681612.1

If you have any question about the schedules or if you have a suggestion for an Advisor Webcast to be planned in future, please send an E-Mail to Ruediger Ziegler.