
Feed aggregator

Asus Zenfone, the Best Android Smartphone

Daniel Fink - Sat, 2014-08-09 18:51
Asus Zenfone, the Best Android Smartphone - This isn't bad even if you have never taken baking classes before, because the learning process takes you from basic skills to more advanced work. Some of these schools also offer internship programs, so you can practice what you have learned under the direction of a master chocolatier.

But nothing compares to education right there in the classroom. You just have to make the time, and if you can't make it to classes on weekdays, see if there are any being offered on weekends.

The cost of tuition at chocolate-making schools varies depending on the type of program and on whether it is done in the classroom or at home. Those who choose to learn in the classroom don't have to worry, because all the materials they need will already be provided. Those at home have to buy these from the craft store and make do with what they have.

Learning how to make chocolate with the help of trained professionals is far better than trying to perfect how it's done through trial and error. After all, it is an exact science when it comes to mixing the ingredients, with a little bit of marketing thrown in if you are planning to sell the product.

Once you get the hang of things, you can try some experiments to create concoctions of your own. After all, chocolates don't always come in boxes.

The Different Processes in Making Chocolate

There are different ways in which you can learn to make chocolate. The first thing you have to find out is: where do these delicious treats come from? Most of you will already know the answer. Chocolates are made from the beans of cocoa.

From the trees to the chocolate manufacturers, how did these processes evolve? Through time, there have been many developments in chocolate making. Technology has benefited many of life's endeavors, and this also applies to the process of making chocolate.

But such advancement applies only to the gathering part. The processing essentially remains the same, the old conventional way. As the saying goes, don't fix a thing if it isn't broken. Perhaps the same rule is being applied to this venture.

It feels good to eat chocolates. But do you want to know about the various methods that go into making them? Here are some.

Roasting

It takes a good amount of roasting, as well as cocoa seed fermentation, to come up with the quality of chocolate you are looking for. In the pre-roasting stage, the beans are passed under infrared radiant heaters. This process separates the nibs of the beans from the shells. The temperature for this part is 100 to 140 degrees Celsius, and it takes about 20 to 40 minutes.

Roasting can also be done directly. After the beans are roasted, the shells can be easily removed. This is favored by most chocolate manufacturers because it retains the flavor of the beans. For this part, the temperature is 150 to 160 degrees Celsius.

Fermentation

This is done to decrease the level of sugars, glucose as well as fructose, and also of amino acids in the beans. It brings out the flavor of the beans, which the roasting process will then enhance. But not just anyone can do this; it takes a master to hone the craft. Beans can rot if something goes wrong with this process.

Shelling

Removing the shells from the beans takes more processing than you might imagine, including grinding and winnowing. Every step is important in order to come up with grains of the right size.

Tasting

If you think this is an easy task, that turns out not to be the case. It involves skill and experience. One must have studied every flavor of the various types and variations of chocolate to be able to claim to perform well at this and be a judge of which varieties should be brought to market.

These individuals can be compared to wine experts. Just a bite of a chocolate treat can tell them what processes it went through, what kind of beans was used, or where it was actually made. And there are still more kinds of chocolates out there in the market. Imagine what all of them have to go through just to reach your favorite food store so that you can buy them for your own consumption.

You don't have to be an expert in making chocolate, but you can start following some techniques in the tasting part. If you're treated to a filled chocolate, let it linger in your mouth until it melts and you can taste all its flavors. You can then chew it about five times, enough for the flavor and the coating to blend in.

Chocolate has a timeless charm that hooks many a person with a sweet tooth. Then again, some chocolates are really expensive. In reality, given a few tips and tricks, you can actually create your own chocolate, saving yourself money and increasing your delight thanks to your self-creation.

Federal Reserve Board backs up e-Literate in criticism of Brookings report on student debt

Michael Feldstein - Sat, 2014-08-09 13:30

I have been very critical of the Brookings Institution report on student debt, particularly in my post “To see how illogical the Brookings Institution report on student loans is, just read the executive summary”.

D’oh! It turns out that real borrowers with real tax brackets paying off real loans are having real problems. The percentage at least 90 days delinquent has more than doubled in just the past decade. In fact, based on another Federal Reserve report, the problem is much bigger for the future: “44% of borrowers are not yet in repayment, and excluding those, the effective 90+ delinquency rate rises to more than 30%”.

More than 30% of borrowers who should be paying off their loans are at least 90 days delinquent? It seems someone didn’t tell them that their payment-to-income ratios (at least for their mythical average friends) are just fine and that they’re “no worse off”.

Well, now the Federal Reserve Board itself weighs in on the subject with a new survey, at least as described in an article in The Huffington Post. I have read the Fed report and concur with the HuffPost analysis – it does argue against the Brookings findings.

Among the emerging risks spotlighted by the survey is the nation’s $1.3 trillion in unpaid student debt, suggesting that high levels of student debt are crimping the broader economy. Nearly half of Americans said they had to curb their spending last year in order to make payments on student loans, adding weight to the fear among federal financial regulators that the burden of student debt on households will depress economic growth for years to come.

Some 35 percent of survey respondents who are paying back student loans said they had to reduce their spending by “a little” over the past year to keep up with their student debt payments. Another 11 percent said they had to cut back their spending by “a lot.”

The Fed’s findings appear to challenge recent research by a pair of economists at the Brookings Institution, highlighted in The New York Times and cited by the White House, that argues that households with student debt are no worse off today than they were two decades ago.

The full Fed report can be found here. Much of the survey was focused on borrowers and their perceptions of how their student loans impact them, which is much more reliable than Brookings’ assumptions on how convoluted financial ratios should affect borrowers. In particular, consider this table:

Fed Table 11

Think about this situation – amongst borrowers who have completed their degrees, almost equal numbers think the financial benefits of a degree outweigh the costs as think the opposite (41.5% to 38.1%). I don’t see this as an argument against getting a degree, but rather as clear evidence that the student loan crisis is real and will have a big impact on the economy and future student decision-making.

Thanks to the Federal Reserve Board for helping us out.

Update: Clarified that this is Federal Reserve Board and not NY Fed.

The post Federal Reserve Board backs up e-Literate in criticism of Brookings report on student debt appeared first on e-Literate.

Required Field Validation in Oracle MAF

Shay Shmeltzer - Fri, 2014-08-08 16:38

A short entry to explain how to do field validation in Oracle MAF. As an example let's suppose you want a field to have a value before someone clicks to do an operation.

To do that you can set the field's attribute for required and "show required" like this:

  <amx:inputText label="label1" id="it1" required="true" showRequired="true"/>

Now if you run your page, leave the field empty, and click a button that navigates to another page, you'll notice that there is no indication of an error. This is because you didn't tell the AMX page to actually do a validation.

 To add validation you use an amx:validationGroup tag that will surround the fields you want to validate.

For example:

    <amx:validationGroup id="vg1">
      <amx:inputText label="label1" id="it1" required="true" showRequired="true"/>
    </amx:validationGroup>

Then you can add an amx:validationBehavior tag to the button that does the navigation and tell it to validate the group you defined before (vg1 in our example).

    <amx:commandButton id="cb2" text="go" action="gothere">
      <amx:validationBehavior id="vb1" group="vg1"/>
    </amx:commandButton>

Now when you run the page and try to navigate you'll get your validation error.
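For context, here is how the pieces above might sit together in one AMX page. This is a sketch: the amx:view and amx:panelPage wrappers and their ids are the usual page boilerplate, not part of the original examples.

```xml
<amx:view xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns:amx="http://xmlns.oracle.com/adf/mf/amx">
  <amx:panelPage id="pp1">
    <!-- fields inside the group are validated together -->
    <amx:validationGroup id="vg1">
      <amx:inputText label="label1" id="it1" required="true" showRequired="true"/>
    </amx:validationGroup>
    <!-- the button triggers validation of group vg1 before navigating -->
    <amx:commandButton id="cb2" text="go" action="gothere">
      <amx:validationBehavior id="vb1" group="vg1"/>
    </amx:commandButton>
  </amx:panelPage>
</amx:view>
```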

Categories: Development

Partner Webcast - The Revolution of Oracle Java 8

Java 8, released in March 2014, is a revolutionary release of the world’s #1 development platform. It is the single largest upgrade ever to the programming model, with coordinated core code evolution...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Social Commerce: Shopping Inside of Social

Linda Fishman Hoyle - Fri, 2014-08-08 13:12

A Guest Post by Mike Stiles, Senior Content Manager for Oracle Social (pictured left)

We know the value of friends recommending products to friends, but are we seeing these motivated transactions conducted immediately on the social platforms themselves? Is social commerce still a thing?

 What really seems to matter most is whether or not brand participation on social channels is generating incoming traffic to wherever transactions happen to be transacted. In fact, the very definition of sCommerce has quietly morphed over the years from sales conducted on Facebook, to sales resulting from social.

On-Facebook stores are still available, of course. Brands like J.C. Penney, GNC, Levi’s and 1-800-Flowers have done it or are doing it. But the real drive, budget-wise, is to use social to generate traffic and leads as opposed to building social stores. Social budgets are also moving to rounding up leads and sales as opposed to branding. The expectations for pre-sold shoppers to come from social to the brand’s transaction location and make the purchase are high.

And yet…despite a Shopify survey that found Facebook driving almost two-thirds of social visits to Shopify stores and claiming a 129% year over year increase of orders from social, and despite the barely known Polyvore driving the top average order value of $66.75, less than 2% of traffic to retailers’ sites comes from social. And almost half of retailers said less than 1% of social shoppers wound up buying anything. The best social conversion rate is Facebook’s 1.85%.

So what’s broken? Every hoop a buyer has to jump through is a golden opportunity for that buyer to reconsider, change their mind, or put off the purchase. The shortest, most frictionless path from discovery to reassurance to sale should be every brand’s Apollo mission. And since two of those three things are happening primarily on social, sales inside of social, that original definition of sCommerce, might be worth a solid second look.

The social nets are inching forward. Pinterest, the proclaimed king of purchase intent, has rich pins so prices and inventory can be updated real-time. You can reply to tweets with Amazon product links adding #AmazonCart and throw the item into your shopping cart. You can make AMEX purchases by adding a hashtag. But these things amount to better social catalog experiences or buy link usage, not purchase-inside-social opportunities.

Pictures leaked from Fancy in January gave us a peek at Twitter Commerce. Brand tweets can be expanded to show a Buy button, from which you could purchase the item inside the Twitter app. Now we’re talking. OpenSky is trying to get there as well.

The goal is to capitalize on everything social brings in terms of shopping and exposure to products tied to users’ visible interests, capitalize on the trusted recommendations of social connections, use content as your virtual end-aisle displays, use the ongoing social relationships you have with customers and rich social data to keep bumping them toward a purchase, customize their experiences, and find the quickest way to satisfy the buying impulse when it strikes.

Finding something you want to buy in a store and then being told by the clerk you have to go two buildings down to buy it sounds silly. Digital hoops are equally silly.

Grid/CRS AddNode or runInstaller fails with NullPointerException

Jeremy Schneider - Fri, 2014-08-08 12:43

Posting this here mostly to archive it, so I can find it later if I ever see this problem again.

Today I was repeatedly getting this error while trying to add a node to a cluster:

(grid)$ $ORACLE_HOME/oui/bin/addNode.sh -silent -noCopy CRS_ADDNODE=true CRS_DHCP_ENABLED=false INVENTORY_LOCATION=/u01/oraInventory ORACLE_HOME=$ORACLE_HOME "CLUSTER_NEW_NODES={new-node}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={new-node-vip}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 24575 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

Exception java.lang.NullPointerException occurred..
java.lang.NullPointerException
        at oracle.sysman.oii.oiic.OiicAddNodeSession.initialize(OiicAddNodeSession.java:524)
        at oracle.sysman.oii.oiic.OiicAddNodeSession.<init>(OiicAddNodeSession.java:133)
        at oracle.sysman.oii.oiic.OiicSessionWrapper.createNewSession(OiicSessionWrapper.java:884)
        at oracle.sysman.oii.oiic.OiicSessionWrapper.<init>(OiicSessionWrapper.java:191)
        at oracle.sysman.oii.oiic.OiicInstaller.init(OiicInstaller.java:512)
        at oracle.sysman.oii.oiic.OiicInstaller.runInstaller(OiicInstaller.java:968)
        at oracle.sysman.oii.oiic.OiicInstaller.main(OiicInstaller.java:906)
SEVERE:Abnormal program termination. An internal error has occured. Please provide the following files to Oracle Support :

"Unknown"
"Unknown"
"Unknown"

There were two notes on MOS related to NullPointerExceptions from runInstaller (which is used behind the scenes for addNode in 11.2.0.3 on which I had this problem). Note 1073878.1 describes addNode failing in 10gR2, and the root cause was that the home containing CRS binaries was not registered in the central inventory. Note 1511859.1 describes attachHome failing, presumably on 11.2.0.1 – and the root cause was file permissions that blocked reading of oraInst.loc.

Based on these two notes, I had a suspicion that my problem had something to do with the inventory. Note that you can get runInstaller options by running “runInstaller -help” and on 11.2.0.3 you can debug with “-debug -logLevel finest” at the end of your addNode command line. The log file is produced in a logs directory under your inventory. However in this case, it produces absolutely nothing helpful at all…

After quite a bit of work (even running strace and ltrace on the runInstaller, which didn’t help one bit)… I finally figured it out:

(grid)$ grep oraInst $ORACLE_HOME/oui/bin/addNode.sh
INVPTRLOC=$OHOME/oraInst.loc

The addNode script was hardcoded to look only in the ORACLE_HOME for the oraInst.loc file. It would not read the file from /etc or /var/opt/oracle because of this parameter.

On this particular server, there was no oraInst.loc file in the grid ORACLE_HOME. Usually the file is there after a normal cluster installation. In our case, its absence was an artifact of the specific cloning process we use to rapidly provision clusters. As soon as I copied the file from /etc into the grid ORACLE_HOME, the addNode process continued as normal.
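The fix boils down to a one-line command. A sketch, assuming the inventory pointer lives in the default /etc location (on Solaris it is /var/opt/oracle instead):

```
# addNode.sh reads only $ORACLE_HOME/oraInst.loc (hardcoded INVPTRLOC),
# so place a copy of the central inventory pointer in the grid home:
cp /etc/oraInst.loc "$ORACLE_HOME/oraInst.loc"
```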

Sometimes it would be nice if runInstaller could give more informative error messages or tracing info!

12c: Fun with WITH!

Pythian Group - Fri, 2014-08-08 11:30

Last night I couldn’t sleep, and what else are you going to do? I was thinking about Oracle stuff.

In Oracle version 12, Oracle has enhanced the WITH clause – traditionally used for sub-query factoring – to allow the declaration of functions and procedures. This can be (ab)used to create a very interesting scenario, that is not very common in Oracle: Reading data within the same SELECT statement, but from two different points in time. And the points in time are in the future, and not in the past.

Let’s say I want to take a snapshot of the current SCN, and then another one 5 or 10 seconds after that. Traditionally we’d have to store the first one somewhere. What if I could take two snapshots – at different SCNs – using a single SELECT statement? Without creating any objects?

col value for a50
set lines 200 pages 99

with  
procedure t (secs in number, scn out varchar2)
  is
    pragma autonomous_transaction;
  begin
    dbms_lock.sleep(secs);
    select 'at ' || to_char(sysdate,'HH24:MI:SS') || ' SCN: ' 
                 || dbms_flashback.get_system_change_number 
      into scn 
      from dual;
  end;
function wait_for_it (secs in number) 
 return varchar2 is
    l_ret varchar2(32767);
  begin
    t(secs, l_ret);
    return l_ret;
  end;
select 1 as time, 'at ' || to_char(sysdate,'HH24:MI:SS') || ' SCN: ' 
                || dbms_flashback.get_system_change_number as value 
  from dual
union all
select 5, wait_for_it(5) from dual
union all
select 10, wait_for_it(5) from dual
/

And the result is:

      TIME VALUE
---------- --------------------------------------------------
         1 at 09:55:49 SCN: 3366336
         5 at 09:55:54 SCN: 3366338
        10 at 09:55:59 SCN: 3366339

 


We can clearly see there that the SCNs are different, and the times shown match the intervals we chose, 5 seconds apart. I think there could be some very interesting uses for this. What ideas can you folks come up with?

Categories: DBA Blogs

We Have Slap Bands

Oracle AppsLab - Fri, 2014-08-08 09:34

As part of a secret project Noel (@noelportugal) and Raymond are cooking up, Noel ordered some AppsLab-branded slap bands.


The bands were produced by Amazing Wristbands (@AMZG_Wristbands), and Noel has nothing but good things to say about them, in case you’re looking for your own slap bands.

Anyway, I’m sure we’ll have some left over after the double-secret project. So, if you want one, let us know.

Find the comments.

Transaction guard

Laurent Schneider - Fri, 2014-08-08 08:05

Getting the logical transaction id in 12c will greatly simplify your error handling and enhance your business continuity in your application.

In 11g and below, your Java code used to look like


try {
  stmt.execute("insert into ...");
} catch (SQLException e) {
  errorHandling();
}

but one could wrongly assume the insert had failed when it was actually committed (e.g. if the database server process core-dumped after the commit).

Now in 12c, you can get a logical transaction id and then later, from another session, check whether that transaction was committed. This solves quite a bunch of integrity issues (e.g. duplicate rows).

Let’s try


import java.sql.*;
import oracle.jdbc.pool.*;
import oracle.jdbc.*;

public class TG {
  public static void main(String argv[]) throws
      SQLException {
    String url = "jdbc:oracle:thin:@(DESCRIPTION"
      +"=(ADDRESS=(PROTOCOL=TCP)(Host=srv01)("
      +"Port=1521))(CONNECT_DATA=(SERVICE_NAME="
      +"svc01)))";
    OracleDataSource ods=new OracleDataSource();
    ods.setURL(url);
    ods.setUser("SCOTT");
    ods.setPassword("tiger");
    OracleConnection conn = (OracleConnection) 
      ods.getConnection();
    LogicalTransactionId ltxid = conn.
      getLogicalTransactionId();
    try {
      System.out.println("Start");
      conn.prepareStatement(
        "insert into t values (1)").execute();
      if (Math.random() > .5) {
        throw new Exception();
      }
      System.out.println("OK");
    } catch (Exception e) {
      System.out.println("ERROR");
      OracleConnection conn2 = 
        (OracleConnection) ods.getConnection();
      CallableStatement c = conn2.prepareCall(
        "declare b1 boolean; b2 boolean; begin" 
        +"DBMS_APP_CONT.GET_LTXID_OUTCOME(?,b1,"
        +"b2); ? := case when B1 then "
        +"'COMMITTED' else 'UNCOMMITTED' end; "
        +"end;");
      c.setBytes(1, ltxid.getBytes());
      c.registerOutParameter(2, 
        OracleTypes.VARCHAR);
      c.execute();
      System.out.println("Status = "+
        c.getString(2));
    }
  }
}

getLogicalTransactionId gives me a transaction id (this is internally saved in SYS.LTXID_TRANS so it survives reboots, failover and disconnections) and GET_LTXID_OUTCOME gets the outcome.

There are a few preparation steps


GRANT EXECUTE ON DBMS_APP_CONT TO SCOTT;
declare PARAMETER_ARRAY dbms_service.
  svc_parameter_array; 
begin 
  PARAMETER_ARRAY('COMMIT_OUTCOME'):='true';
  dbms_service.create_service(
    'SVC01','TNS01',PARAMETER_ARRAY); 
  dbms_service.start_service('SVC01'); 
end;
/
CREATE TABLE SCOTT.T(x number);

Due to my Math.random() call, I sometimes get exceptions, but the insert always commits.


C:\> java TG
Start
OK

C:\> java TG
Start
ERROR
Status = COMMITTED

C:\> java TG
Start
ERROR
Status = COMMITTED

No need to redo the insert.

Now I dropped the table t and ran the same code


SQL> drop table scott.t;

Table dropped.

C:\>java TG
Start
ERROR
Status = UNCOMMITTED

Now it fails and I know it!

Log Buffer #383, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-08 07:34

This Log Buffer Edition picks a few informative blog posts from the Oracle, SQL Server, and MySQL database worlds.


Oracle:

g1gc logs – Ergonomics -how to print and how to understand

In Solaris 11.2, svcs gained a new option, “-L”.  The -L option allows a user to easily look at the most recent log events for a service.

ADF Thematic Map component from DVT library was updated in ADF 12c with marker zoom option and area layer styling

When cloning pluggable databases Oracle gives you also SNAPSHOT COPY clause to utilize storage system snapshot capabilities to save on storage space.

It is normal for bloggers including myself to post about the great things they have done.

SQL Server:

In six years Microsoft has come from almost zero corporate knowledge about how cloud computing works to it being an integral part of their strategy.

A brief overview of Columnstore index and its usage with an example.

The Road To Hell – new article from the DBA Team

Encryption brings data into a state which cannot be interpreted by anyone who does not have access to the decryption key, password, or certificates.

How to test what a SQL Server application would do in the past or in the future with date and time differences.

MySQL:

MySQL for Visual Studio 1.2.3 GA has been released

An approach to MySQL dynamic cross-reference query.

The MySQL replication and load balancing plugin for PHP, PECL/mysqlnd_ms, aims to make using a cluster of MySQL servers instead of a single server as transparent as possible.

Picking the Right Clustering for MySQL: Cloud-only Services or Flexible Tungsten Clusters? New webinar-on-demand.

Collation options for new MySQL schemas and tables created in MySQL for Excel

Categories: DBA Blogs

Oracle Database RAC Diagnostics and Tuning

Oracle Real Application Clusters (Oracle RAC) is a clustered version of Oracle Database based on a comprehensive high-availability stack that can be used as the foundation of a database cloud system...

Categories: DBA Blogs

Oracle Priority Service Infogram for 07-AUG-2014

Oracle Infogram - Thu, 2014-08-07 15:05

OpenWorld
It’s closing on us fast! From Proactive Support - Portals: Learn, Connect and Explore at Oracle OpenWorld 2014
Security
Five Computer Security Myths, Debunked by Experts, from LifeHacker.
A new white paper: Mobile Security in a BYOD World.
Oracle issues a virtual strongbox for enterprise encryption keys, from PCWorld.
Neowin is the bearer of good news: CryptoLocker malware has been cracked, victims able to decrypt their files for free.
RDBMS
From Oracle DB/EM Support: Master Note for ORA-1555 Errors.
SQL
From Galo Balda's Blog: New in Oracle 12c: Querying an Associative Array in PL/SQL Programs.
Solaris
From OSTATIC: Oracle Delivers Solaris 11.2 with OpenStack, Integrated SDN Features.
HA-LDOM live migration in Oracle Solaris Cluster 4.2, from the Oracle Solaris Cluster Oasis.
Java
From The Java Source: Tech Article: Generics: How They Work and Why They Are Important.
From DZone: Using Oracle AQ in Java Won’t Get Any Easier Than This.
EPM
Infogram contributor Yin-Jen Su pointed out this great YouTube channel: Oracle EPM Products YouTube channel.
Here’s the announcement from Jan Greenburg:
I’m pleased to announce 3 new videos on our YouTube Channel (https://www.youtube.com/user/OracleEPMWebcasts/videos)!
For Oracle Planning and Budgeting Cloud Service (PBCS):
Using Predictive Planning -- demonstrates generating statistical predictions based on historical data.
Managing Database Properties -- demonstrates accessing database properties.
For on-premise EPM:
Four-part series on deploying EPM System Products:
Part 1 Overview -- demonstrates the standard deployment methodology. It contains links to parts 2, 3 and 4.
Part 2 Preparing for Deployment -- demonstrates preparing for standard deployment.
Part 3 Installing and Configuring an Initial Instance -- demonstrates installing and configuring an initial instance.
Part 4 Scaling Out and Installing EPM System Clients -- demonstrates scaling EPM System components and installing EPM System client software.
FYI … in addition to accessing videos from our YouTube channel, you can also access our videos from these Oracle Learning Libraries (OLL):
EPM Consolidation and Planning Videos Oracle Learning Library (on-premise videos): https://apex.oracle.com/pls/apex/f?p=44785:141:25017935607895::NO:141:P141_PAGE_ID%2CP141_SECTION_ID:133%2C959
Oracle Planning and Budgeting Cloud Service Library: https://apex.oracle.com/pls/apex/f?p=44785:141:108373392382468::NO:141:P141_PAGE_ID%2CP141_SECTION_ID:91%2C658
OLL provides social networking capabilities that allow you to bookmark, share items through social media, review items, recommend items and create collections that can be private or public.
Oracle Community
Lots of goings on at Oracle Community.
EBS
From the Oracle E-Business Suite Support Blog:
Introducing the Trading Community Architecture APIs Information Center
Value Chain Planning, Advanced Supply Chain Planning, and Inventory Optimization Safety Stock
Use the Item Open Interface to Quickly Add or Update Inventory Items
So How is Everyone Doing Submitting Enhancements in the Procurement Community?
StartUP Demantra, Configuration & Troubleshooting, Steps, Tips & Tricks
Oracle Application Management Pack for Oracle E-Business Suite (AMP) Release 12.1.0.3.0 is Available
Overview of Inventory Transaction Accounting, Part 1 of 3
New Upgrade, Patching & Maintenance Advisors for R12.2.x
Guided Resolution Now Available for Cancel or Discard Issues!
What's New in the My Oracle Support Community (MOSC)
From Oracle E-Business Suite Technology
JRE 1.7.0_67 Certified with Oracle E-Business Suite
New Solaris SPARC OS Requirements for EBS R12
Business
10 Things Speakers Should Never Do, from collaborate.
…and Finally

An Oracle that was not us: The Oak Ridge Automatic Computer and Logical Engine (ORACLE), Oak Ridge National Laboratory, 1953, from Adafruit.

Create with WLST a SOA Suite, Service Bus 12.1.3 Domain

Edwin Biemond - Thu, 2014-08-07 13:14
When you want to create a 12.1.3 SOA Suite, Service Bus Domain, you have to use the WebLogic config.sh utility.  The 12.1.3 config utility is a big improvement when you compare this to WebLogic 11g. With this I can create some complex cluster configuration without any after configuration. But if you want to automate the domain creation and use it in your own (provisioning) tool/script then you

Space used by objects

DBA Scripts and Articles - Thu, 2014-08-07 12:35

Calculate the space used by a single object

This script will help you calculate the size of a single object.

Calculate the space used by a whole schema

If you want the space used by a whole schema, then here is a variation of the first query.
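The scripts themselves were lost in syndication; as a sketch of the kind of query the post describes (assuming the DBA_SEGMENTS dictionary view; the owner and segment names are placeholders):

```sql
-- Space used by a single object, summed across its segments
SELECT owner, segment_name, segment_type,
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS size_mb
FROM   dba_segments
WHERE  owner = 'SCOTT'
AND    segment_name = 'EMP'
GROUP  BY owner, segment_name, segment_type;
```

For the whole-schema variation, drop the segment_name predicate and group by owner alone.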

The post Space used by objects appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

ADF Thematic Map in ADF 12c (12.1.3)

Andrejus Baranovski - Thu, 2014-08-07 12:03
ADF Thematic Map component from DVT library was updated in ADF 12c with marker zoom option and area layer styling (ADF 12c (12.1.3) new features). I have decided to check how it works and implemented a quick sample application - ThematicMapApp.zip.

I was using world GDP data (the SQL script is available together with the sample application) and displayed it using ADF Thematic Map. World country borders are hidden on purpose; borders are visible by default:


While zooming, marker points are growing - very useful feature:


Data for the thematic map is fetched using a SQL-based VO. I'm calculating total GDP and taking each country's percentage of the total; this allows the marker points to scale better:


ADF Thematic Map is configured to support zooming for markers:


The countries area layer is set to be hidden, although data still gets attached to the countries:


The marker is configured with dynamic scaling; this is how each country gets an appropriate marker size, based on its GDP value:


Marker colour property is set to be dynamically calculated in marker Attribute Groups section:

Whistler, Microsoft and how far cloud has come

Steve Jones - Thu, 2014-08-07 09:00
In six years Microsoft has come from almost zero corporate knowledge about how cloud computing works to it being an integral part of their strategy.  Sure, back in early 2008 there were some pieces of Microsoft that knew about cloud, but that really wasn't a corporate view; it was what a very few people inside the company knew. How do I know this? Well, back in 2008 I was sitting on the top of a
Categories: Fusion Middleware

Oracle WebCenter Case Study: Improving Invoice to Cash Process

WebCenter Team - Thu, 2014-08-07 08:51


 Kevin is the IT Director for a top-quality Less-than-Truckload carrier servicing eight Midwestern states. 

A recent industry survey showed the company’s website was falling short of customer expectations in the following three areas: 

  • Ease of use 
  • Providing useful information, and 
  • Utilizing effective technology and tracking systems

Kevin looked to Redstone Content Solutions to improve the website’s functionality utilizing award-winning Oracle WebCenter technologies. 

Actian Vector Hadoop Edition

DBMS2 - Thu, 2014-08-07 05:12

I have a small blacklist of companies I won’t talk with because of their particularly unethical past behavior. Actian is one such; they evidently made stuff up about me that Josh Berkus gullibly posted for them, and I don’t want to have conversations that could be dishonestly used against me.

That said, Peter Boncz isn’t exactly an Actian employee. Rather, he’s the professor who supervised Marcin Zukowski’s PhD thesis that became Vectorwise, and I chatted with Peter by Skype while he was at home in Amsterdam. I believe his assurances that no Actian personnel sat in on the call. :)

In other news, Peter is currently working on and optimistic about HyPer. But we literally spent less than a minute talking about that.

Before I get to the substance, there’s been a lot of renaming at Actian. To quote Andrew Brust,

… the ParAccel, Pervasive and Vectorwise technologies are being unified under the Actian Analytics Platform brand. Specifically, the ParAccel technology … is being re-branded Actian Matrix; Pervasive’s technologies are rechristened Actian DataFlow and Actian DataConnect; and Vectorwise becomes Actian Vector.

and

Actian … is now “one company, with one voice and one platform” according to its John Santaferraro

The bolded part of the latter quote is untrue — at least in the ordinary sense of the word “one” — but the rest can presumably be taken as company gospel.

All this is by way of preamble to saying that Peter reached out to me about Actian’s new Vector Hadoop Edition when he blogged about it last June, and we finally talked this week. Highlights include: 

  • Vectorwise, while being proudly multi-core, was previously single-server. The new Vector Hadoop Edition is the first version with node parallelism.
  • Actian’s Vector Hadoop edition uses HDFS (Hadoop Distributed File System) and YARN to manage an Actian-proprietary file format. There is currently no interoperability whereby Hadoop jobs can read these files. However …
  • … Actian’s Vector Hadoop edition relies on Hadoop for cluster management, workload management and so on.
  • Peter thinks there are two paying customers, both too recent to be in production, who between them paid what I’d call a remarkable amount of money.*
  • Roadmap futures* include:
    • Being able to update and indeed trickle-update data. Peter is very proud of Vectorwise’s Positional Delta Tree updating.
    • Some elasticity they’re proud of, both in terms of nodes (generally limited to the replication factor of 3) and cores (not so limited).
    • Better interoperability with Hadoop.

Actian actually bundles Vector Hadoop Edition with DataFlow — the old Pervasive DataRush — into what it calls “Actian Analytics Platform – Hadoop SQL Edition”. DataFlow/DataRush has been working over Hadoop since the latter part of 2012, based on a visit with my then clients at Pervasive that December.

*Peter gave me details about revenue, pipeline, roadmap timetables etc. that I’m redacting in case Actian wouldn’t like them shared. I should say that the timetable for some — not all — of the roadmap items was quite near-term; however, pay no attention to any phrasing in Peter’s blog post that suggests the roadmap features are already shipping.

The Actian Vector Hadoop Edition optimizer and query-planning story goes something like this:

  • Vectorwise started with the open-source Ingres optimizer. After a query is optimized, it is rewritten to reflect Vectorwise’s columnar architecture. Peter notes that these rewrites rarely change operator ordering; they just add column-specific optimizations, whatever that means.
  • Now there are rewrites for parallelism as well.
  • These rewrites all seem to be heuristic/rule-based rather than cost-based.
  • Once Vectorwise became part of the Ingres company (later renamed to Actian), they had help from Ingres engineers to modify the base optimizer so that it wasn’t just the “stock” Ingres one.

As with most modern MPP (Massively Parallel Processing) analytic RDBMS, there doesn’t seem to be any concept of a head-node to which intermediate results need to be shipped. This is good, because head nodes in early MPP analytic RDBMS were dreadful bottlenecks.

Peter and I also talked a bit about SQL-oriented HDFS file formats, such as Parquet and ORC. He doesn’t like their lack of support for columnar compression. Further, in Parquet there seems to be a requirement to read the whole file, to an extent that interferes with Vectorwise’s form of data skipping, which it calls “min-max indexing”.
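For readers unfamiliar with min-max indexing (often called zone maps), here is a minimal sketch of the idea under simplified assumptions (integer blocks, a single greater-than predicate); it is illustrative only, not Vectorwise code:

```python
# Minimal zone-map sketch: keep (min, max) per block of a column,
# then skip whole blocks that cannot possibly match the predicate.

def build_minmax(blocks):
    """Per-block (min, max) summaries."""
    return [(min(b), max(b)) for b in blocks]

def scan_greater_than(blocks, index, threshold):
    """Return matching values and how many blocks were actually read."""
    hits, blocks_read = [], 0
    for block, (lo, hi) in zip(blocks, index):
        if hi <= threshold:  # every value in this block fails the predicate
            continue
        blocks_read += 1
        hits.extend(v for v in block if v > threshold)
    return hits, blocks_read

blocks = [[1, 3, 2], [10, 12, 11], [5, 6, 4]]
hits, blocks_read = scan_greater_than(blocks, build_minmax(blocks), 9)
print(hits, blocks_read)  # → [10, 12, 11] 1
```

The point of Peter's complaint is that this kind of skipping only pays off if the file format lets you avoid reading the skipped blocks in the first place.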

Frankly, I don’t think the architectural choice “uses Hadoop for workload management and administration” provides a lot of customer benefit in this case. Given that, I don’t know that the world needs another immature MPP analytic RDBMS. I also note with concern that Actian has two different MPP analytic RDBMS products. Still, Vectorwise and indeed all the stuff that comes out of Martin Kersten and Peter’s group in Amsterdam has always been interesting technology. So the Actian Vector Hadoop Edition might be worth taking a look at before you redirect your attention to products with more convincing track records and futures.

Categories: Other

ECEMEA Webcast - Getting Started with your Big Data project

Big data is a new kind of power that transforms everything it touches in business, government, and private life. As a result, bringing big data to your company has the potential to provide big...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Documentum: some useful queries (DQL/IAPI)

Yann Neuhaus - Thu, 2014-08-07 02:37

In this blog post I want to share some useful DQL and IAPI queries that I use all the time. They are mostly dedicated to Documentum support/debugging and are grouped by component. To run them, I recommend Qdman: it is the best tool I know for working with DQL and IAPI scripts.

1. xPlore

This covers both the search engine and index management.

DQL Dsearch
select * from dm_document search document contains 'manual.doc'

 

This query performs a search just as if you had put 'manual.doc' in the DA search field. It can be used to check whether dsearch is working fine and whether the indexing has been performed correctly. If not, it will return 0 results for a document that you know exists in the docbase.

Index Waiting
select count(*) as awaiting_15 FROM dmi_queue_item a, dm_sysobject (all) b WHERE b.r_object_id = a.item_id AND a.name='dm_fulltext_index_user' AND date_sent > date(now)-(15*1/24/60) AND (a.task_state is NULL or a.task_state = ' ')

 

This query returns the number of items that have been waiting to be indexed for the last 15 minutes. The parameter can be changed to 60 minutes or any other value; you just have to change the '15' in the query above.
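The date(now)-(15*1/24/60) part works because the subtracted literal is interpreted in days (as the query's own arithmetic implies), so 15/(24*60) of a day is 15 minutes. A quick sanity check of that arithmetic, in Python rather than DQL:

```python
# Sanity check of the day-fraction arithmetic used in the queries above:
# the subtracted value is in days, so 15*1/24/60 days should be 15 minutes.
from datetime import datetime, timedelta

minutes = 15
day_fraction = minutes * 1 / 24 / 60          # same expression as in the DQL
cutoff = datetime.now() - timedelta(days=day_fraction)

print(round(day_fraction * 24 * 60))  # → 15
```

The same pattern appears later in the audit trail section with 60*1/24/60 for a one-hour window.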

Index In Progress
select count(*) as in_progress_15 FROM dmi_queue_item a, dm_sysobject (all) b WHERE b.r_object_id = a.item_id AND a.name='dm_fulltext_index_user' AND date_sent > date(now)-(15*1/24/60) AND a.task_state = 'acquired'

 

This one is similar to the previous one, but it returns the number of 'in progress' indexing requests. Note that the parameter can still be changed.

Index By State
select task_state,count(*) from dmi_queue_item where name = 'dm_fulltext_index_user' group by task_state

 

This query lists the number of indexes by state:

  • blank -> awaiting indexing
  • acquired -> in progress
  • warning
  • error
  • done

Delete Indexing Request
delete dmi_queue_item object where item_id='09xxxxxxxxxxxxxx'

 

Sometimes I noticed there are indexing requests on deleted documents. In fact, this can happen if someone saved a document and then deleted it right after. The indexing request remains in the queue for life, so you may want to delete it. First, check whether the file is deleted by running the IAPI command: dump,c,09xxxxxxxxxxxxxx. If an error occurs saying the document doesn't exist anymore, you can delete the request.

Index Agent Info
select fti.index_name,iac.object_name as instance_name from dm_fulltext_index fti, dm_ftindex_agent_config iac where fti.index_name = iac.index_name and fti.is_standby = false and iac.force_inactive = false

 

This query returns your configured index agent information. It is useful for the IAPI command returning the index agent status (see below).

Index Agent Status
apply,c,NULL,FTINDEX_AGENT_ADMIN,NAME,S,DOCBASE_ftindex_01,AGENT_INSTANCE_NAME,S, HOST._9200_IndexAgent,ACTION,S,status
next,c,q0
get,c,q0,status
close,c,q0

 

This script returns the Index Agent status (Running, Stopped, and so on). Note that you have to replace the Index Agent information in the first line with your own Index Agent's. You can get this information from the DQL query above.

Manually Queue Index Request
queue,c,09xxxxxxxxxxxxxx,dm_fulltext_index_user,dm_force_ftindex

 

This one simply puts an indexing request in the queue. You have to replace 09xxxxxxxxxxxxxx with the r_object_id of the document you want to queue.

Display Dsearch Port
cat $DOCUMENTUM/dsearch/admin/xplore.properties | grep port

 

For this one you have to be on the xPlore server; it shows the configured dsearch port.

2. Rendering (ADTS)

The following queries concern the rendition component. They cover checking the rendition queue and manually requesting a rendition through IAPI.

 

Manually Queue Rendering Request
queue,c,09xxxxxxxxxxxxxx,dm_autorender_win31,rendition,0,F,,rendition_req_ps_pdf

 

Like the indexing request, this one puts a PDF rendition request in the queue. It can be useful when scripting or in DFC programs.

Rendering By State
select task_state,count(*) from dmi_queue_item where name = 'dm_autorender_win31' group by task_state

 

This returns the rendition requests by state.

Rendering Queue
select * from dmi_queue_item where name ='dm_autorender_win31' order by date_sent desc

 

This query returns all documents present in the rendering queue, that is, all documents waiting for rendition.

Rendition Failed
select r_object_id,item_id,name,item_name,date_sent from dmi_queue_item where event ='DTS' order by date_sent desc

 

This query returns the failed renditions. Be aware of the date_sent field, because this queue is not cleared. This means that if a rendition request failed 3 times in a row and succeeded the last time, there will be 3 rows in the failed queue, but the rendition did succeed. So you should verify that the rendition succeeded and, if so, you can delete the rows from the failed queue.

Check Rendition Successful
select r_object_id from dm_document where object_name='DOCUMENT' and exists(select * from dmr_content where any parent_id=dm_document.r_object_id and full_format='pdf')

 

This query checks whether a rendition is present for the given DOCUMENT name. If a PDF rendition exists, it returns the document's r_object_id; if no rendition is present, it returns nothing.

3. Audit Trail

 

Failed Login Since 1h
select user_name,count(*) as logon_failure from dm_audittrail where event_name='dm_logon_failure' and time_stamp > (date(now)-(60*1/24/60)) group by user_name order by 2

 

This query displays the number of failed logons in the docbase per user over the last 60 minutes. The parameter 60 can be changed.

Purge Logon Failure
EXECUTE purge_audit WITH delete_mode='PREDICATE', dql_predicate='dm_audittrail where event_name=''dm_logon_failure'''

 

This statement purges the audit trail queue by deleting all logon failure entries. Be aware that it can take a while depending on the number of entries you have.

Number Of Logon Failure
select count(*) as logon_failure from dm_audittrail where event_name='dm_logon_failure'

 

This query simply shows the number of logon failures in the queue.

4. Miscellaneous

 

IAPI Purge Caches
flush,c,ddcache,dm_type
flush,c,ddcache,dmi_type_info
flush,c,ddcache,dm_aggr_domain
flush,c,ddcache,dm_domain
flush,c,ddcache,dm_dd_info
flush,c,ddcache,dm_nls_dd_info
flush,c,ddcache,dm_foreign_key
flush,c,persistentcache
publish_dd,c
reinit,c

 

This script flushes the caches; it can be used when installing ADTS DARs fails due to a version mismatch.

Check ADTS Installer Version
java -cp adtsWinSuiteSetup.jar DiShowVersion
Multi-installer Suite 6.7.2000.42
Installer-Version: 6.7.2000.42 build 1
Installer-Build-Date: 1/11/13 12:28 AM

 

Go to the ADTS installer directory and issue this command. It shows the version of the installer.

Encrypt dm_bof_registry Password
java com.documentum.fc.tools.RegistryPasswordUtils PASSWORD

 

This one encrypts the dm_bof_registry password so it can be used in dfc.properties. Note that the encryption process is different on xPlore and ADTS, but you can use it on the content server and in all DFC-related programs. Replace PASSWORD in the command with your clear-text password.