
Feed aggregator

Oracle Audit Vault - Custom Reports and BI Publisher

Custom reports can be created in Oracle Audit Vault using Oracle BI Publisher. The BI Publisher desktop tool is an add-on to Microsoft Word and can be used to modify existing reports or create new ones.

For example, to create a report that meets specific corporate or internal audit needs, start by downloading a similar standard Oracle Audit Vault report (Auditor -> Reports -> Custom Reports -> Uploaded Reports). Click the icon to download both the template and the report definition, then load both files into BI Publisher.

Once complete, upload the report definition to the same location (Auditor -> Reports -> Custom Reports -> Uploaded Reports).

If you have questions, please contact us at info@integrigy.com.

Tags: Auditing, Oracle Audit Vault
Categories: APPS Blogs, Security Blogs

Je suis Charlie

Frank van Bortel - Fri, 2015-01-09 03:34
Je suis Charlie. Of course, I too am Charlie. Frank

Histograms Tidbits

Pakistan's First Oracle Blog - Fri, 2015-01-09 03:00
Make sure histograms exist on columns with uneven data distributions to ensure that the optimizer makes the best choice between indexes and table scans.

For range scans on data that is not uniformly distributed, the optimizer's decisions will be improved by the presence of a histogram.

Histograms increase the accuracy of the optimizer’s cost calculations but increase the overhead of statistics collections. It’s usually worth creating histograms for columns where you believe the data will have an irregular distribution, and where the column is involved in WHERE or JOIN expressions.
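To see which columns already have histograms, the data dictionary can be queried directly. Here is a minimal sketch, assuming a table named SALES in the current schema (substitute your own table name):

-- list histogram information for the columns of one table
SELECT column_name,
       num_distinct,
       histogram,      -- NONE, FREQUENCY, HEIGHT BALANCED, HYBRID, ...
       num_buckets
FROM   user_tab_col_statistics
WHERE  table_name = 'SALES'
ORDER  BY column_name;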

CREATING HISTOGRAMS WITH METHOD_OPT

The METHOD_OPT option controls how column-level statistics, in particular histograms, are created. The default value is 'FOR ALL COLUMNS SIZE AUTO', which enables Oracle to choose the columns that will have a histogram collected and to set the appropriate histogram bucket size.
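As an illustration, here is a hedged sketch of gathering statistics with an explicit METHOD_OPT, assuming a SALES table with a skewed STATUS column (both names are placeholders):

BEGIN
  -- AUTO sizing for most columns, but request up to 254 buckets on the skewed column
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'SALES',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS STATUS SIZE 254');
END;
/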
Categories: DBA Blogs

The reasons behind nation-state hackers

Chris Foot - Fri, 2015-01-09 01:15

There are the archetypal teenage hackers who advance their reputations by accessing restricted networks just for the thrill of it, and then there are cyberespionage masters who target the databases of nationwide financial enterprises and public entities. 

When one thinks of the latter, it's easy to imagine a character out of a modern spy movie. However, it's difficult to identify the exact reasons why a nation would use hackers to conduct covert cyber-operations on another country, or large businesses operating within a state of interest. 

Why nations infiltrate large banks 
According to BankInfoSecurity contributor Eric Chabrow, an attack on a major financial institution is usually conducted by a nation-state that is looking to obtain intelligence for the purpose of protecting or improving its economy. Bankers, analysts and economists working in the finance industry all have insight into how certain agreements, global shifts and other factors will affect the condition of national markets. 

Surprisingly enough, hackers contracted by a nation-state to infiltrate an organization such as JPMorgan Chase are likely not interested in stealing money or personally identifiable information. Philip Casesa, director of IT service operations at IT security education and certification company (ISC)2, agrees with this viewpoint. 

"A government-sponsored actor doesn't have the same goals as a crime organization – the objective is much bigger than that," said Casesa, as quoted by Chabrow. "It isn't stealing dollars – it's manipulating world politics by shifting the economic balance of power." 

Goals are elusive 
One of the reasons many people resort to speculating about the intentions of hackers acting on behalf of nation-states is that, sometimes, speculation is all that can be done. In practice, only organizations such as the U.S. National Security Agency and the Federal Bureau of Investigation can identify the concrete intentions behind specific attacks. 

Yet there are times when journalists can discern a clear pattern. Dark Reading noted that there have been a number of cases in which intellectual property owned by a person or organization within the U.S. was stolen by Chinese operatives. Think of the impact the automobile had on the 20th-century economy. If China could gain intelligence regarding a new invention that could impact the global market in such a way, it would establish itself as an economic superpower. 

All things considered, this particular topic deserves extensive coverage – the kind often found in a college dissertation. While a blog can provide a glance, a book can provide understanding. 

The post The reasons behind nation-state hackers appeared first on Remote DBA Experts.

APEX and Scalable Vector Icons (Icon Fonts)

Dimitri Gielis - Thu, 2015-01-08 17:30
For a couple of years now, web designers and developers haven't been using image icons anymore; instead we have moved to scalable vector icons.

Before, you had to create different images for every format and colour you wanted. Then, to gain performance, we created one big image containing all those smaller images (called a sprite).
Next, with some CSS, we showed just the relevant part of the bigger image. A hassle...

In fact, you can clearly see the evolution of icon images in APEX too. If you go to your images folder you will see many .gif files, all different icons:


In later releases (up to APEX 4.2), APEX moved to sprites (see /images/themes/theme_25/images/), for example the Theme 25 sprite you see here.


The scalable vector icons (or icon fonts) solve the issues the image icons had. With vector icons you can define the color, size, background etc. all with CSS. It makes building responsive applications so much easier: the icons stay fresh and crisp regardless of the size of the device, and they add next to no overhead. This is exactly what APEX 5.0 will bring to the table: nice scalable vector icons, handcrafted and made pixel perfect by the Oracle team.

Image from Font Awesome 
In fact, you don't have to wait until APEX 5.0 is out; you can add icon fonts to your own APEX application today.

There are many icon fonts out there, but here are some I like:

  • Font Awesome - One of the most popular ones and probably included in APEX 5.0 too (next to Oracle's own library)
  • Glyphicons - This library is for example used by Bootstrap
  • Foundation Icon Fonts 3 - Used by the Foundation framework
  • NounProject - I saw the founder at the TEDx conference and was very intrigued by it - they build a visual language of icons anyone can understand
  • IcoMoon - This is more than an icon library, you can actually create your own library with their app. When my wife creates fonts in Illustrator, IcoMoon transforms them into a font.
  • Fontello - You can build your own library based on other libraries on the net (the site shows many other popular font libraries I didn't put above), so if you can't find your icon here, you probably want to build it yourself :)

In the next post I'll show you how to integrate one of those in your APEX app.
Categories: Development

Rittman Mead BI Forum 2015 Call for Papers Now Open – Closes on Jan 18th 2015

Rittman Mead Consulting - Thu, 2015-01-08 16:05

The Call for Papers for the Rittman Mead BI Forum 2015 is currently open, with abstract submissions open to January 18th 2015. As in previous years the BI Forum will run over consecutive weeks in Brighton, UK and Atlanta, GA, with the provisional dates and venues as below:

  • Brighton, UK : Hotel Seattle, Brighton, UK : May 6th – 8th 2015
  • Atlanta, GA : Renaissance Atlanta Midtown Hotel, Atlanta, USA : May 13th-15th 2015

Now on it’s seventh year, the Rittman Mead BI Forum is the only conference dedicated entirely to Oracle Business Intelligence, Oracle Business Analytics and the technologies and processes that support it – data warehousing, data analysis, data visualisation, big data and OLAP analysis. We’re looking for session around tips & techniques, project case-studies and success stories, and sessions where you’ve taken Oracle’s BI products and used them in new and innovative ways. Each year we select around eight-to-ten speakers for each event along with keynote speakers and a masterclass session, with speaker choices driven by attendee votes at the end of January, and editorial input from myself, Jon Mead and Charles Elliott and Jordan Meyer.


Last year we had a big focus on cloud, and a masterclass and several sessions on bringing Hadoop and big data to the world of OBIEE. This year we’re interested in project stories and experiences around cloud and Hadoop, and we’re keen to hear about any Oracle BI Apps 11g implementations or migrations from the earlier 7.9.x releases. Getting back to basics we’re always interested in sessions around OBIEE, Essbase and data warehouse data modelling, and we’d particularly like to encourage session abstracts on data visualization, BI project methodologies and the incorporation of unstructured, semi-structured and external (public) data sources into your BI dashboards. For an idea of the types of presentations that have been selected in the past, check out the BI Forum 2014, 2013 and 2012 homepages, or feel free to get in touch via email at mark.rittman@rittmanmead.com

The Call for Papers entry form is here, and we're looking for speakers for Brighton, Atlanta, or both venues if you can speak at both. All sessions this year will be 45 minutes long, and we'll be publishing submissions and inviting potential attendees to vote on their favourite sessions towards the end of January. Other than that – have a think about abstract ideas now, and make sure you get them in by January 18th 2015 – just over a week from now!

Categories: BI & Warehousing

Oracle GoldenGate Processes – Part 4 – Replicat

DBASolved - Thu, 2015-01-08 15:00

The replicat process is the apply process within the Oracle GoldenGate environment.  The replicat is responsible for reading the remote trail files and applying the data found there in chronological order.  This ensures that the data is applied in the same order it was captured.  

Until recently there was only one version of a replicat: the classic replicat.  As of Oracle GoldenGate 12.1.2.0, there are three distinct replicat modes:

  • Classic Mode
  • Coordinated Mode
  • Integrated Mode

Each of these modes provides some benefit depending on the database being applied to.  Oracle is pushing everyone toward the integrated approach; however, integrated mode requires database version 11.2.0.4 at a minimum.  

Configuring the replicat process is similar to configuring the extract and data pump processes.

Adding a Replicat:

From GGSCI (classic):

$ cd $OGG_HOME
$ ./ggsci
GGSCI> add replicat REP, exttrail ./dirdat/rt

Note:  The add command assumes that you already have a checkpoint table configured in the target environment.

Edit Replicat parameter file:

From GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> edit params REP

From Command Line:

$ cd $OGG_HOME
$ cd ./dirprm
$ vi REP.prm

Example of Replicat Parameter file:

REPLICAT REP
SETENV (ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_3")
SETENV (ORACLE_SID="orcl")
USERID ggate, PASSWORD ggate
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/REP.dsc, append, megabytes 500
WILDCARDRESOLVE IMMEDIATE
MAP SCOTT.*, TARGET SCOTT.*;

Start the Replicat process:

How you start the replicat will be similar yet slightly different depending on whether you are starting it for the first time.

To start the Replicat after an initial load:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> start replicat REP, atcsn [ current_scn ]

Note: The current_scn needs to be obtained from the source database prior to doing the initial load of data to the target.  This ensures the consistency of the data and provides the point from which the replicat starts applying data.
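On an Oracle source, the SCN is commonly captured with a query such as the following, run just before the initial load is taken (a minimal sketch; either form works on recent versions):

-- capture the current SCN on the source database
SELECT current_scn FROM v$database;

-- equivalent alternative
SELECT dbms_flashback.get_system_change_number AS current_scn FROM dual;

The returned value is what gets supplied to the ATCSN clause when starting the replicat.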

To start Replicat normally:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> start replicat REP

Stop the Replicat process:

Stop replicat normally:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> stop replicat REP

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Oracle Priority Support Infogram for 08-JAN-2015

Oracle Infogram - Thu, 2015-01-08 14:08

RDBMS
Adaptive Query Optimization in Oracle Database 12c, from The ORACLE-BASE Blog.
Performance Problems with Dynamic Statistics in Oracle 12c, from Pythian.
From Robert G. Freeman on Oracle: Oracle Multitenant - Should you move to a single CDB/PDB architecture?
SQL Developer
From that JEFF SMITH: SQL Developer 4.1: Search & Replace Enhancements
Exalogic
From Oracle Exalogic: E-Business Suite 12.2.4 VM Templates for Exalogic
MySQL
From Oracle's MySQL Blog: Learn MySQL Performance Tuning From the Experts.
APEX
From Dimitri Gielis Blog (Oracle Application Express - APEX): Generating sample data for your APEX application.
ADF and SOAP
Oracle Mobile Suite Service Bus REST and ADF BC SOAP, from Andrejus Baranovskis Blog.
Java
Validating Oracle Java Cloud Service HA, from Java.net.
NetBeans
From Geertjan's Blog: Classpath for Python Projects in NetBeans IDE.
Business Analytics
From Business Analytics - Proactive Support:
Patch Set Update: 11.1.2.3.507 for Oracle Hyperion Reporting and Analysis for Interactive Reporting Release 11.1.2.3.000
Business Analytics Monthly Index - December 2014
WebCenter
Oracle Documents Cloud Service R2 – Now Available!, from the Oracle WebCenter Blog.
Security
From Integrigy: Oracle Audit Vault Reports
EBS
From Oracle E-Business Suite Technology:
Of Particular Importance:
Java Auto-Update Upgrades JRE 7 to 8 in January 2015

Database 12.1.0.2 Certified with E-Business Suite 12.2
Oracle VM Templates For EBS 12.2.4 For Exalogic Now Available
Enhance E-Business Suite Security By Switching to TLS
2014: A Year in Review for Oracle E-Business Suite
From Oracle E-Business Suite Support Blog:
Advanced Global Intercompany System Setup Diagnostic Available!
Are you occasionally getting complaints that the Purchase Order pdf file is not making it to the Supplier or Approver?
Get Notified When A New or Updated Product Analyzer or Diagnostic is Released
Webcast: DEM: Troubleshooting Engine Issues
Webcast: In-Memory Cost Management for Discrete Manufacturing
Logistics and Inventory Consolidated RUP11 Patches Released
Want to take a look (and replay) our 2014 Procurement Webcasts?
Did Your Consolidation Journals Post Twice Recently?
Budgeting For The New Year? Allow Us To Help!
Did You Check The Latest Rollup Patch for Report Manager?
Financial Reporting Using Excel? Get Some Hints!
…And Finally
Best of 2014: Communication Charts Help You Negotiate In Different Cultures, from PSFK. A lot of the charts look like artillery shells and hand grenades. I wonder if that indicates something about the nature of human negotiation overall? 
From Quantamagazine: A New Physics Theory of Life. Could be the biggest thing in biology since sliced bread (biologists love sandwiches), or…could be a big yawner if it doesn't pan out.

Oracle GoldenGate Processes – Part 3 – Data Pump

DBASolved - Thu, 2015-01-08 14:00

The data pump group is a secondary extract group that is used to send data over a network.  Although a data pump is another extract group, don't confuse it with the primary extract group. The main purpose of the data pump extract is to write the captured data over the network to the remote trail files on the target system.  

Note: If the data pump is not configured, then the primary extract group will write directly to the remote trail file.

Why would you want to use a data pump process?  Two advantages of using a data pump are:

  • Protects against network failures
  • Helps consolidate data from multiple sources into a single remote trail file

The data pump process, just as the extract process, uses a parameter file to run the process.  The parameter file can be edited either before or after adding the process to the Oracle GoldenGate environment.

Adding a Data Pump:

From GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> add extract PMP, exttrailsource ./dirdat/lt
GGSCI> add rmttrail ./dirdat/rt, extract PMP, megabytes 200

Note: In the example above notice that the data pump is reading from ./dirdat/lt and then writing to ./dirdat/rt on the remote server.

Edit Data Pump parameter file:

From GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> edit params PMP

From Command Line:

$ cd $OGG_HOME
$ cd ./dirprm
$ vi ./PMP.prm

Example of Data Pump parameter file:

EXTRACT PMP
PASSTHRU
RMTHOST 172.15.10.10, MGRPORT 15000, COMPRESS
RMTTRAIL ./dirdat/rt
TABLE SCOTT.*;

Start/Stop the Data Pump:

Start the Data Pump:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> start extract PMP

Stop the Data Pump:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> stop extract PMP

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Oracle GoldenGate Processes – Part 2 – Extract

DBASolved - Thu, 2015-01-08 13:00

The extract process of Oracle GoldenGate is used to perform change data capture from the source database.  The extract can read either the online transaction log (in Oracle, the online redo logs) or the associated archive logs.  The data that is extracted from the source database is then placed into a trail file (another topic for a later post) for shipping to the apply side. 

To configure an extract process there needs to be a parameter file associated with it.  The parameter file can be edited after the extract has been added to the Oracle GoldenGate environment if needed.  To configure an extract, it needs to be added to the environment and assigned a local trail file location.

Adding an Extract and Local Trail File:

Using GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> add extract EXT, tranlog, begin now
GGSCI> add exttrail ./dirdat/lt, extract EXT, megabytes 200

Note: The trail file name is required to have a two-letter prefix.  

Edit Extract Parameter File:

From GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> edit params EXT

Edit Extract Parameter File from Command Line:

$ cd $OGG_HOME
$ cd ./dirprm
$ vi ./ext.prm

Example of Extract Parameter File:

EXTRACT EXT
USERID ggate, PASSWORD ggate 
TRANLOGOPTIONS DBLOGREADER
SETENV (ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_3")
SETENV (ORACLE_SID="orcl")
WARNLONGTRANS 1h, CHECKINTERVAL 30m
EXTTRAIL ./dirdat/lt
WILDCARDRESOLVE IMMEDIATE
TABLE SCOTT.*;
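The USERID referenced in the parameter file must be a database user with capture privileges, and the source database has to log enough information for the extract to mine. Here is a minimal, hedged sketch of typical source-side prerequisites, assuming the ggate user already exists and the database is 11.2.0.4 or later (adjust to your environment):

-- run as SYSDBA on the source database
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE FORCE LOGGING;
ALTER SYSTEM SET enable_goldengate_replication = TRUE SCOPE = BOTH;

-- grant capture privileges to the GoldenGate user
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGATE');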

Start/Stop the Extract:

Start Extract:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> start extract EXT

Stop Extract:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> stop extract EXT

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

"SELECT * FROM TABLE" Runs Out Of TEMP Space

Randolf Geist - Thu, 2015-01-08 12:49
Now that I've shown in the previous post that Parallel Execution plans can sometimes end up with unnecessary BUFFER SORT operations, let's have a look at what particular side effects this can cause.

What would you say if someone told you that they had just run a simple, straightforward "SELECT * FROM TABLE" that took several minutes to execute without returning anything, only to then error out with "ORA-01652 unable to extend temp segment", and that the TABLE in question is actually nothing but a simple, partitioned heap table? No special tricks, no views, synonyms, VPD etc. involved; it's really just a plain table.

Some time ago I was confronted with such a case at a client. Of course, the first question is why someone would run a plain SELECT * FROM TABLE in the first place, but nowadays, with power users and developers using GUI-based tools like TOAD or SQL Developer, this is probably the GUI equivalent of a table describe command. Since these tools by default show the results in a grid that only fetches the first n rows, this typically isn't really a threat even for large tables, apart from the common problem of allocated PX servers when the table is queried using Parallel Execution and the users simply keep the grid/cursor open, thereby preventing the PX servers from being re-used by other executions.
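As a side note, whether PX servers are currently being held by such open cursors can be checked quickly; a rough sketch against the dynamic performance views:

-- query coordinators (QCSID) and the number of PX slaves currently attached to them
select qcsid, qcinst_id, count(*) as px_slaves
from   gv$px_session
where  sid <> qcsid
group  by qcsid, qcinst_id;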

But have a look at the following output, in this case taken from 12.1.0.2, assuming the partitioned table T_PART in question is marked parallel, resides on Exadata and has many partitions that are compressed via HCC, which uncompressed represent several TB of data (11.2.0.4 on Exadata produces a similar plan):


SQL> explain plan for
2 select * from t_part p;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display(format => 'BASIC PARTITION PARALLEL'));
Plan hash value: 2545275170

------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | | |
| 1 | PX COORDINATOR | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | | | Q1,02 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | | | Q1,02 | PCWP | |
| 4 | VIEW | VW_TE_2 | | | Q1,02 | PCWP | |
| 5 | UNION-ALL | | | | Q1,02 | PCWP | |
| 6 | CONCATENATION | | | | Q1,02 | PCWP | |
| 7 | BUFFER SORT | | | | Q1,02 | PCWC | |
| 8 | PX RECEIVE | | | | Q1,02 | PCWP | |
| 9 | PX SEND ROUND-ROBIN | :TQ10000 | | | | S->P | RND-ROBIN |
| 10 | BUFFER SORT | | | | | | |
| 11 | PARTITION RANGE SINGLE | | 2 | 2 | | | |
| 12 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 2 | 2 | | | |
|* 13 | INDEX RANGE SCAN | T_PART_IDX | 2 | 2 | | | |
| 14 | BUFFER SORT | | | | Q1,02 | PCWC | |
| 15 | PX RECEIVE | | | | Q1,02 | PCWP | |
| 16 | PX SEND ROUND-ROBIN | :TQ10001 | | | | S->P | RND-ROBIN |
| 17 | BUFFER SORT | | | | | | |
| 18 | PARTITION RANGE SINGLE | | 4 | 4 | | | |
| 19 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 4 | 4 | | | |
|* 20 | INDEX RANGE SCAN | T_PART_IDX | 4 | 4 | | | |
| 21 | PX BLOCK ITERATOR | | 6 | 20 | Q1,02 | PCWC | |
|* 22 | TABLE ACCESS FULL | T_PART | 6 | 20 | Q1,02 | PCWP | |
| 23 | PX BLOCK ITERATOR | |KEY(OR)|KEY(OR)| Q1,02 | PCWC | |
|* 24 | TABLE ACCESS FULL | T_PART |KEY(OR)|KEY(OR)| Q1,02 | PCWP | |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

13 - access("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
20 - access("P"."DT">=TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
filter(LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss')))
22 - filter("P"."DT">=TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2020-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND (LNNVL("P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2003-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
24 - filter("P"."DT"<TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Can you spot the problem? It's again the "unnecessary BUFFER SORTS" problem introduced in the previous post. In particular the BUFFER SORT at operation ID = 3 is "deadly" if the table T_PART is large, because it needs to buffer the whole table data before any row is returned to the client. This explains why this simple SELECT * FROM T_PART can potentially run out of TEMP space, assuming the uncompressed table data is larger in size than the available TEMP space. Even if it doesn't run out of TEMP space it will be a totally inefficient operation, copying all table data to PGA (unlikely to be sufficient) or TEMP before returning any rows to the client.
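While such a statement is running, the TEMP consumption can be watched from another session. A rough sketch (assuming an 8K block size; adjust for your environment):

-- which sessions are currently using TEMP space, and how much
select s.sid, s.username, u.tablespace, u.segtype,
       round(u.blocks * 8192 / 1024 / 1024) as temp_mb
from   v$tempseg_usage u
join   v$session s on s.saddr = u.session_addr
order  by u.blocks desc;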

But why does a simple SELECT * FROM TABLE come up with such an execution plan? A hint is the VW_TE_2 alias shown in the NAME column of the plan: it's the result of the "table expansion" transformation introduced in 11.2, which allows some partitions' local indexes to be set to unusable while still making use of the usable index partitions of the other partitions. It takes a bit of effort to bring the table into a state where such a plan will be produced for a plain SELECT * FROM TABLE, but as you can see, it is possible. And as you can see from the CONCATENATION operation in the plan, the transformed query produced by the "table expansion" then triggered another transformation, the "concatenation" transformation mentioned in the previous post, which results in the addition of unnecessary BUFFER SORT operations when combined with Parallel Execution.

Here is a manual rewrite that corresponds to the query resulting from both the "table expansion" and the "concatenation" transformations:

select * from (
select /*+ opt_param('_optimizer_table_expansion', 'false') */ * from t_part p where
("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
union all
select * from t_part p where
("P"."DT">=TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
and
(LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss')))
union all
select * from t_part p where
("P"."DT">=TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2020-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND (LNNVL("P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2003-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE('
2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
)
union all
select * from t_part p where
("P"."DT"<TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
;

But if you run an EXPLAIN PLAN on above manual rewrite, then 12.1.0.2 produces the following simple and elegant plan:

--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | | |
| 1 | PX COORDINATOR | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | | | Q1,00 | P->S | QC (RAND) |
| 3 | UNION-ALL | | | | Q1,00 | PCWP | |
| 4 | VIEW | | | | Q1,00 | PCWP | |
| 5 | UNION-ALL | | | | Q1,00 | PCWP | |
| 6 | PX SELECTOR | | | | Q1,00 | PCWP | |
| 7 | PARTITION RANGE SINGLE | | 2 | 2 | Q1,00 | PCWP | |
| 8 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 2 | 2 | Q1,00 | PCWP | |
|* 9 | INDEX RANGE SCAN | T_PART_IDX | 2 | 2 | Q1,00 | PCWP | |
| 10 | PX SELECTOR | | | | Q1,00 | PCWP | |
| 11 | PARTITION RANGE SINGLE | | 4 | 4 | Q1,00 | PCWP | |
| 12 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 4 | 4 | Q1,00 | PCWP | |
|* 13 | INDEX RANGE SCAN | T_PART_IDX | 4 | 4 | Q1,00 | PCWP | |
| 14 | PX BLOCK ITERATOR | | 6 | 20 | Q1,00 | PCWC | |
|* 15 | TABLE ACCESS FULL | T_PART | 6 | 20 | Q1,00 | PCWP | |
| 16 | PX BLOCK ITERATOR | |KEY(OR)|KEY(OR)| Q1,00 | PCWC | |
|* 17 | TABLE ACCESS FULL | T_PART |KEY(OR)|KEY(OR)| Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

9 - access("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
13 - access("P"."DT">=TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
filter(LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss')))
15 - filter((LNNVL("P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2003-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
17 - filter("P"."DT"<TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

I've disabled the "table expansion" transformation in this case, because it kicks in again when optimizing this query and just adds some harmless (and useless) branches to the plan that confuse the issue. Without those additional, useless branches it is very similar to the above plan, but without any BUFFER SORT operations, hence it doesn't cause any overhead and should return the first rows rather quickly, no matter how large the table is.

The 11.2.0.4 optimizer unfortunately again adds unnecessary BUFFER SORT operations even to the manual rewrite above, so as mentioned in the previous post the problem of those spurious BUFFER SORTs isn't limited to the CONCATENATION transformation.

Of course, since all this is related to Parallel Execution, a simple workaround to the problem is to run the SELECT * FROM TABLE using a NO_PARALLEL hint, and all those strange side effects of BUFFER SORTS will be gone. And not having unusable local indexes will also prevent the problem, because then the "table expansion" transformation won't kick in.

Interestingly, if the optimizer is told about the true intention of initially fetching only the first n rows from the SELECT * FROM TABLE - for example simply by adding a corresponding FIRST_ROWS(n) hint - at least in my tests using 12.1.0.2 all the complex transformations were rejected and a plain (parallel) FULL TABLE SCAN was preferred instead, simply because it is now differently costed, which would allow working around the problem, too.
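For illustration, the two workarounds mentioned above could look like this against the demo table (hedged sketches; the exact hint placement and row count are up to you):

-- workaround 1: force serial execution, avoiding the parallel-only BUFFER SORTs
select /*+ no_parallel(p) */ * from t_part p;

-- workaround 2: tell the optimizer that only the first rows are of interest
select /*+ first_rows(100) */ * from t_part p;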

If you want to reproduce the issue, here's a sample table definition, along with some comments what I had to do to bring it into the state required to reproduce:

-- The following things have to come together to turn a simple SELECT * from partitioned table into a complex execution plan
-- including Table Expansion and Concatenation:
--
-- - Unusable index partitions to trigger Table Expansion
-- - Partitions with usable indexes that are surrounded by partitions with unusable indexes
-- - And such a partition needs to have an index access path that is cheaper than a corresponding FTS, typically by deleting the vast majority of rows without resetting the HWM
-- - All this also needs to be reflected properly in the statistics
--
-- If this scenario is combined with Parallel Execution the "Parallel Concatenation" bug that plasters the plan with superfluous BUFFER SORT will lead to the fact
-- that the whole table will have to be kept in memory / TEMP space when running SELECT * from the table, because the bug adds, among many other BUFFER SORTs, one deadly BUFFER SORT
-- on top level before returning data to the coordinator, typically operation ID = 3
--
create table t_part (dt not null, id not null, filler)
partition by range (dt)
(
partition p_1 values less than (date '2001-01-01'),
partition p_2 values less than (date '2002-01-01'),
partition p_3 values less than (date '2003-01-01'),
partition p_4 values less than (date '2004-01-01'),
partition p_5 values less than (date '2005-01-01'),
partition p_6 values less than (date '2006-01-01'),
partition p_7 values less than (date '2007-01-01'),
partition p_8 values less than (date '2008-01-01'),
partition p_9 values less than (date '2009-01-01'),
partition p_10 values less than (date '2010-01-01'),
partition p_11 values less than (date '2011-01-01'),
partition p_12 values less than (date '2012-01-01'),
partition p_13 values less than (date '2013-01-01'),
partition p_14 values less than (date '2014-01-01'),
partition p_15 values less than (date '2015-01-01'),
partition p_16 values less than (date '2016-01-01'),
partition p_17 values less than (date '2017-01-01'),
partition p_18 values less than (date '2018-01-01'),
partition p_19 values less than (date '2019-01-01'),
partition p_20 values less than (date '2020-01-01')
)
as
with generator as
(
select /*+ cardinality(1000) */ rownum as id, rpad('x', 100) as filler from dual connect by level <= 1e3
)
select
add_months(date '2000-01-01', trunc(
case
when id >= 300000 and id < 700000 then id + 100000
when id >= 700000 then id + 200000
else id
end / 100000) * 12) as dt
, id
, filler
from (
select
(a.id + (b.id - 1) * 1e3) - 1 + 100000 as id
, rpad('x', 100) as filler
from
generator a,
generator b
)
;

delete from t_part partition (p_2);

commit;

exec dbms_stats.gather_table_stats(null, 't_part')

create unique index t_part_idx on t_part (dt, id) local;

alter index t_part_idx modify partition p_1 unusable;

alter index t_part_idx modify partition p_3 unusable;

alter index t_part_idx modify partition p_5 unusable;

alter table t_part parallel;

alter index t_part_idx parallel;

set echo on pagesize 0 linesize 200

explain plan for
select * from t_part p;

select * from table(dbms_xplan.display(format => 'BASIC PARTITION PARALLEL'));

Oracle Documents Cloud Service R2 – Now Available!

WebCenter Team - Thu, 2015-01-08 12:38
By: John Klinke, Oracle WebCenter Product Management
Business success depends on effective collaboration both inside and outside of your organization – with customers, partners, suppliers, remote employees – anytime, anywhere, on any device.
The latest release of Oracle Documents Cloud Service came out this week, and it delivers on this promise. With its fast, intuitive web interface and easy-to-use desktop and mobile applications, people can view and collaborate on files even when offline, keeping your organization running efficiently and your employees staying productive from any location.
Improved User Experience
Getting employees started on Oracle Documents Cloud Service has never been easier. New users now have access to welcome tours to educate them on how to use each client. Overlay help is also available to point out features. Welcome emails have been updated to provide new users extra guidance in getting started, and the online help provides quick answers to frequently asked questions.
Enhanced Productivity
Improving productivity when it comes to content collaboration is one of the key objectives of Oracle Documents Cloud Service. Productivity improvements in this release can be seen across all the end-user applications – web, desktop, and mobile. For quickly navigating to the most important files, you can now mark content as a favorite from the web UI and access all your favorite files with one click. Another new capability is the ability to reserve content for editing so others are notified that you are working on it. For mobile users, this release includes updated mobile apps with improvements around managing and sharing documents and folders from your smartphone and tablet. 
Increased Security

This release of Oracle Documents Cloud Service includes additional security around the use of public links for content sharing. Public links now have optional expiration dates so you can control how long links are active. You can add security to your public links by setting up access codes: an 8-character or longer password that must be entered by anyone accessing your link.

Seamless Integration
The Oracle Documents Cloud Service Developer Platform also saw numerous improvements in this release. Both the REST APIs and the embedded UI capabilities were enhanced to make it easier for customers and partners to integrate document-centric collaboration into existing applications and business processes. Developer platform enhancements include additional APIs for securely managing folders and fetching user information. For embedding Oracle Documents Cloud Service into other applications, enhancements include support for previewing documents and videos in an embedded iFrame as well as role-based embedded links.
Oracle Documents Cloud Service is Oracle’s next-generation content collaboration solution. Combined with Oracle WebCenter, you get a unified hybrid ECM solution for secure cloud-based content collaboration tied back to your ECM system of record, minimizing the costs, risks and complexity of managing your enterprise content.
Next-Gen ECM
Oracle Documents Cloud Service delivers next-generation ECM, providing the file synchronization and sharing capabilities employees need while addressing the control and security your organization requires. Oracle Documents Cloud Service has the tools to provide the hybrid solution of secure cloud-based file sharing and collaboration, as well as on-premise ECM integration, that embodies a next-generation content management system. 
For more information about Oracle Documents Cloud Service and Oracle’s hybrid ECM solution, please visit: https://cloud.oracle.com/documents
To make a purchase, you can do so here

Oracle GoldenGate Processes – Part 1 – Manager

DBASolved - Thu, 2015-01-08 12:00

Oracle GoldenGate is made up of processes that work together to provide replication.  These processes include a manager process, an extract process and a replicat process. The focus of this post is the manager process.

In order to configure and run an Oracle GoldenGate environment, a manager process must be running on all of the source, target, and intermediary servers in the configuration.  This is because the manager process is the controller of the Oracle GoldenGate processes: it allocates ports and performs file maintenance.  The manager process performs the following functions within the Oracle GoldenGate instance:

  • Starts processes
  • Starts dynamic processes
  • Starts the collector process
  • Manages the port numbers for the processes
  • Performs trail file management
  • Creates reports for events, errors and thresholds

Note: There is only one manager process per Oracle GoldenGate instance.

Configure Manager Process
Before a manager process can be started it needs to be configured.  There are many different parameters that can be used in the manager parameter file, but the only required one is the PORT parameter.  The default port for a manager is 7809.  The manager parameter file can be edited either from the command line or from within the GGSCI utility.

Edit via command line:

$ cd $OGG_HOME/dirprm
$ vi mgr.prm

Edit via GGSCI:

GGSCI> edit params mgr

Example of a Manager parameter file:

PORT 15000
AUTOSTART ER *
AUTORESTART ER * , RETRIES 5, WAITMINUTES 5
PURGEOLDEXTRACTS ./dirdat/lt*, USECHECKPOINTS, MINKEEPHOURS 2

Start the Manager Process

Starting the manager process is pretty simple.  The process can be started either from the command line or from the GGSCI utility.

Start from command line:

$ cd $OGG_HOME
$ mgr paramfile ./dirprm/mgr.prm reportfile ./dirrpt/mgr.rpt

Start from GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> start manager  
or
GGSCI> start mgr

Stop the Manager Process

Stopping the manager process is pretty simple as well (from the GGSCI).

Stop from GGSCI:

$ cd $OGG_HOME
$ ./ggsci
GGSCI> stop manager [ ! ]

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

555 Timer simulator with HTML5 Canvas

Paul Gallagher - Thu, 2015-01-08 11:02
The 555 timer chip has been around since the '70s, so does the world really need another website for calculating the circuit values?

No! But I made one anyway. It's really an excuse to play around with HTML5 canvas and demonstrate a grunt & coffeescript toolchain.

See this running live at visual555.tardate.com, where you can find more info and links to projects and source code on GitHub.


(blogarhythm ~ Time's Up / Living Colour)

Securing Big Data - Part 3 - Security through Maths

Steve Jones - Thu, 2015-01-08 09:00
In the first two parts of this series I talked about how securing Big Data is about layers, and then about how you need to use the power of Big Data to secure Big Data.  The next part is "what do you do with all that data?".  This is where machine learning and mathematics come in; in other words, it's about how you use Big Data analytics to secure Big Data. What you want to do is build up a picture of
Categories: Fusion Middleware

Oracle University Leadership Circle 2015

The Oracle Instructor - Thu, 2015-01-08 04:56

I’m in the Leadership Circle again :-) That is a quarterly Corporate Award for the best instructors worldwide according to customer feedback.

Oracle University has generally high quality standards, so it is hard to stand out individually. That makes me all the more proud to be listed together with these great colleagues:

Oracle University Leadership Circle 2015


Categories: DBA Blogs

How an employee mishap can reveal database login credentials

Chris Foot - Thu, 2015-01-08 04:02

Sometimes, the most grievous data breaches are not incited by sophisticated cybercriminals using the latest hacking techniques, but everyday employees who ignore basic protocols. 

Internal threat 
Last year, Symantec and the Ponemon Institute conducted a study on data breaches that occurred throughout 2012. The two organizations discovered that an astounding two-thirds of these incidents were caused by human errors and system issues. Most of these situations were spawned by workers mishandling confidential information, organizations neglecting industry and government regulations and lackluster system controls. 

"While external attackers and their evolving methods pose a great threat to companies, the dangers associated with the insider threat can be equally destructive and insidious," said Ponemon Institute Chairman Larry Ponemon. "Eight years of research on data breach costs has shown employee behavior to be one of the most pressing issues facing organizations today, up 22 percent since the first survey."

Facebook's mistake 
ITWire's David Williams noted that Facebook employees accidentally divulged the username and password of its MySQL database by using Pastebin.com. For those who aren't familiar with the service, Pastebin allows IT specialists to send bits of code via a compact URL, allowing professionals to share the code through an email, social media post or simple Web search. 

As URLs are designed so that anyone can view a Web page, it's possible for a random individual to accidentally come across a URL created by Pastebin, allowing him or her to read the content behind it. As it turns out, Sinthetic Labs' Nathan Malcolm learned that Facebook programmers were exchanging error logs and code snippets with one another through Pastebin. 

By perusing the Pastebin URLs, Malcolm discovered Facebook shell scripts and PHP code. Williams maintained that none of this data was obtained illegally, nor did he receive it from a Facebook engineer. Instead, the code was "simply lying around the Internet in public view." 

MySQL entry 
It just so happened that one of the URLs contained source code that revealed Facebook's MySQL credentials. The server address, the database name as well as the username and password were available to the public. Although Facebook has likely changed these access permissions since the accident occurred, it's still an example of how neglect can lead to stolen information. 

Implementing database security monitoring software is one thing, but ensuring workers are following policies that prevent data from accidentally being divulged to the public is another – it's a step that shouldn't be ignored. 

The post How an employee mishap can reveal database login credentials appeared first on Remote DBA Experts.

5 Linux distributions for servers

Chris Foot - Thu, 2015-01-08 01:06

When a professional says he or she specializes in Linux operating systems, some may be cheeky enough to ask "which one?"

The truth is, depending on how knowledgeable a Linux administrator is, he or she could create dozens of unique iterations of the OS. Generally, though, there are a handful of mainstream distributions, developed by companies or communities that redistribute the open-source OS. Iterations vary depending on the functions and settings certain professionals require of the OS. Listed below are five different Linux distributions for servers.

1. Debian 
According to Tecmint contributor Avishek Kumar, Debian is an OS that works best in the hands of system administrators or users possessing extensive experience with Linux. He described it as "extremely stable," making it a good option for servers. It has spawned several other iterations, Ubuntu and Kali being two of them. 

2. SUSE Linux Enterprise Server 
TechTarget's Sander Van Vugt lauded SUSE Linux as one of the most accessible Linux distributions available, also recognizing it for its administrator-friendly build. The latter feature may be due to its integration with Yet another Setup Tool (YaST), a Linux OS configuration program that enables admins to install software, configure hardware, set up networks and servers, and handle several other much-needed tasks. 

3. Red Hat Enterprise Linux 
Kumar maintained that RHEL was the first Linux distribution designed for the commercial market, and is compatible with x86 and x86_64 server architectures. Due to the support that Red Hat provides for this OS, it is often the server OS of choice for many sysadmins. The only "drawback" of this solution is that it isn't available for free distribution, although a beta release can be downloaded for educational use. 

4. Kali Linux 
As was mentioned above, this particular iteration is an offshoot of Debian. While not necessarily recommended for servers (and one of the latest Linux distributions) it has primarily been developed to conduct penetration testing. One of the advantages associated with Kali is that Debian's binary packages can be installed on Kali. It serves as a fantastic security assessment program for users concerned with database or WiFi security.

5. Arch Linux 
Kumar maintained that one of the advantages associated with Arch is that it is designed as a rolling-release OS, meaning every time a new version is rolled out, those who have already installed it won't have to reinstall the OS. It is designed for the x86 processor architecture. 

The post 5 Linux distributions for servers appeared first on Remote DBA Experts.

Here Are Your First Links of 2015

Oracle AppsLab - Wed, 2015-01-07 20:16

Our team has been busy since the New Year, competing in the AT&T Developer Summit hackathon, which is Noel’s (@noelportugal) Everest, i.e. he tries to climb it every year, see 2013 and 2014.

If you follow our Twitter (@theappslab) or Facebook page, you might have seen the teaser. If not, here it is:

Image courtesy of AT&T Developer Program’s Facebook page

Look for details later this week.

While you wait for that, enjoy these tidbits from our Oracle Applications User Experience colleagues.

Fit for Work: A Team Experience of Wearable Technology

Wearables are a thing, just look at the CES 2015 coverage, so Misha (@mishavaughan) decided to distribute Fitbits among her team to collect impressions.

Good idea: get everyone to use the same device and collect feedback, although it seems unfair, given Ultan (@ultan) is perhaps the fittest person I know. Luckily, this wasn't a contest of fitness or of most wrist-worn gadgets. Rather, the goal was to gather as much anecdotal experience as possible.

Bonus, there’s a screenshot of the Oracle HCM Cloud Employee Wellness prototype.

¡Viva Mexico!

Fresh off a trip to Jolly Old England, the OAUX team will be in Santa Fe, Mexico in late February. Stay tuned for details.

Speaking of, one of our developers in Oracle’s Mexico Development Center, Sarahi Mireles (@sarahimireles) wrote up her impressions and thoughts on the Shape and ShipIt we held in November, en español.

And finally, OAUX and the Oracle College Hire Program

Oracle has long had programs for new hires right out of college. Fun fact, I went through one myself many eons ago.

Anyway, we in OAUX have been graciously invited to speak to these new hires several times now, and this past October, Noel, several other OAUX luminaries and David (@dhaimes) were on a Morning Joe panel titled “Head in the Clouds,” focused loosely around emerging technologies, trends and the impact on our future lives.

Ackshaey Singh (from left to right), DJ Ursal (@djursal), Misha Vaughan (@mishavaughan), Joe Goldberg, Noel Portugal (@noelportugal), and David Haimes (@dhaimes)

 

Interesting discussion to be sure, and after attending three of these Morning Joe panels now, I’m happy to report that the attendance seems to grow with each iteration, as does the audience interaction.

Good times.