Feed aggregator

Is OAM alone enough or should I also learn OIM/SOA for Apps DBA ?

Online Apps DBA - Thu, 2015-06-18 14:49

We recently launched Oracle Access Manager Training (next batch starting on 6th July), and one question that I’ve seen quite regularly is “I am an Oracle Apps DBA. Will learning OAM alone (without OIM/SOA and WebLogic) along with Apps DBA give me better opportunities, or is it required to learn the entire Identity Management Suite?”

Since OAM/OIM is very close to my heart (I wrote my first book on this topic) and I meet a lot of Apps DBAs with a similar question, I thought I should cover this question here.

Before I answer this question, let me first explain about OAM/OIM/SOA and the other IAM products. Oracle Access Manager is the recommended Web Single Sign-On product from the Oracle Identity & Access Management Suite, where other products include OIM, OID, OUD, OVD, OES, OMS, OAAM, eSSO etc.

Oracle Identity Manager is an identity provisioning and management product that uses Oracle SOA Suite for approval-based workflows and as its orchestration engine.

The OAM and OIM products can be implemented independently, and if you need just Single Sign-On (or Access Management) then Oracle Identity Manager (OIM) is not required. If you are using Oracle E-Business Suite (Apps R12) and just require SSO integration with Microsoft Active Directory (MS-AD)/Windows Native Authentication, or with other Oracle products like OBIEE, WebCenter etc, then OIM/SOA is not required.

Having said that, it is better to know more, so learning OIM will definitely help, but I always believe in starting small (keep things simple). I learnt OAM (Oblix at that time) in 2003 and started OIM (Xellerate at that time) 2 years later in 2005.

 

Note: OAM is deployed on WebLogic Server, so basic WebLogic concepts must also be learnt as part of OAM; hence we cover WebLogic Domain, Admin/Managed Servers, Pack/Unpack, JDBC etc in our OAM 11gR2 Training/Workshop.

Tip for Apps DBAs to get a better day rate/bigger role: learn OAM and integrate EBS (R12) with OAM for Single Sign-On.

If you have any doubt about what topics to learn, check what our OAM experts have to say, or leave a comment here for any other query.

Related Posts for Access Manager
  1. Integration Steps – 10g AS with OAM (COREid)
  2. OAS – OAM (Access Manager / Oblix COREid) Integration Architecture
  3. Oblix COREid and Oracle Identity Management
  4. Installing Oracle Access Manager (Oblix COREid / Netpoint)
  5. Oracle Access Manager (Oblix COREid) 10.1.4.2 Upgrade
  6. Access Manager: WebGate Request Flow
  7. Introduction to Oracle Access manager : Identity and Access System – WebPass , Webgate, Policy Manager
  8. Certified Directory Server (AD, OID, Tivoli, Novell, Sun or OVD) and their version with Oracle Access Manager
  9. Install Oracle Access Manager (OAM) 10.1.4.3 Identity Server, WebPass, Policy Manager, Access Server, WebGate
  10. Multi-Language or multi-lingual Support/Documentation for Oracle Access Manager (OAM)
  11. OAM Policy Manager Setup Issue “Error in setting Policy Domain Root” : OAM with AD and Dynamic Auxiliary Class
  12. OAM 10.1.4.3 Installation Part II – Indentity Server Installation
  13. OAMCFGTOOL : OAM Configuration Tool for Fusion Middleware 11g (SOA/WebCenter) Integration with OAM
  14. Oracle Access Manager Installation Part III : Install WebPass
  15. OAM : Access Server Service Missing when installing Access Manager with ADSI for AD on Windows
  16. OAM : Create User Identity – You do not have sufficient rights : Create User Workflow
  17. Password Policy in Oracle Access Manager #OAM
  18. Changes in Oracle Access Manager 11g R1 (11.1.1.3)
  19. Agents in OAM 11g (WebGate 10g/11g, OSSO/mod_osso, AccessGate IDM Domain agent) aka PEP (Policy Enforcement Points)
  20. How to install Patches in Oracle Access Manager 10g : Bundle Patch / BPXX
  21. Session Management in #OAM 11g : SME , Idle Timeout, Session Lifetime
  22. Part IX : Install OAM Agent – 11g WebGate with OAM 11g
  23. How to integrate OAM 11g with OID 11g for User/Identity Store
  24. How to install Bundle Patch (BP) on OAM 11.1.1.3 – BP02 (10368022) OAM 11.1.1.3.2
  25. Error starting OAM on IBM AIX : AMInitServlet : failed to preload on startup oam java. lang. Exception InInitializer Error
  26. OAMCFG-60024 The LDAP operation failed. OAMCFG-60014 Oracle Access Manager is not configured with this directory
  27. How to Edit (create, delete, modify) Identity Store of OAM 11g from command line (WLST) – editUserIdentityStoreConfig
  28. OAM WebGate Registration RREG – Resource URL format is not valid
  29. Blank Screen on OAM 10g Identity Server Console : /identity/oblix
  30. Oracle 10g/11g webgate software download location
  31. How to find Webgate 10g/11g Version and Patches Applied
  32. OAM integration with OIF : Authentication Engine or Service Provider
  33. OAM 11g integration with Microsoft Windows Active Directory (WNA, IWA, Kerberos) for Zero Sign-On
  34. OAM 11g : How to change Security Mode (OPEN, SIMPLE, CERT) – WebGate to Access Server Communication
  35. Forgot Password link on OAM Login Page
  36. OIM-OAM-OAAM integration – Account Lockout in OAM obLoginTryCount , oblockouttime, MaxRetryLimit
  37. How to identify which LDAP (OID/AD/OVD) server OAM 11g connects to and as what user ?
  38. OAM 10g WebGate installation failed with Sorry Invalid User or Invalid Group
  39. Beware if you are running OAM in SIMPLE mode with 10g WebGate : Oracle AccessGate API is not initialized
  40. Troubleshooting : 11g WebGate with OHS 11g integrated with OAM 11g : OBWebGate_AuthnAndAuthz: Oracle AccessGate API is not initialized
  41. Deploying OAM in high availability across data centres in Active Active cluster : New Feature in OAM 11gR2 PS2
  42. New OAMConsole in OAM 11gR2 PS2 : Enabling Federation, STS, Mobile & Social in Oracle Access Management Suite 11.1.2.2
  43. OAM/WebGate troubleshooting : WebGate on Apache/OHS Unable to read the configuration file
  44. Is OAM alone enough or should I also learn OIM/SOA for Apps DBA ?


Categories: APPS Blogs

New Monitor… Again…

Tim Hall - Thu, 2015-06-18 12:40

I’ve just bought myself a Dell U3415W 34-Inch IPS LCD Monitor for use with the laptop. It’s quite an extravagant purchase, but it’s pretty amazing. Having 3440×1440 resolution on a single screen feels much more useful than sitting a couple of smaller monitors next to each other. It feels almost like having 3-4 screens in one.

I bought it to replace the Asus PB298Q 29 inch Widescreen AH-IPS Multimedia Monitor I got about 7 months ago. The resolution of 2560×1080 is pretty darn decent, but I don’t like having a depth of 1080. When you are using a wider screen, the limited height feels really restrictive for some reason.

Currently I have both screens plugged into the laptop, but I can’t see it staying that way. I’ve really had no reason to look at the MacBook or ASUS screen yet. I’ll see how I feel over the next few days. If I’m happy to let it go I’ll probably take the ASUS screen to work and use it there. It’s better than either of my work monitors. :)

Ditching the second screen will also free up some room on my desk, which is looking a little crazy at the moment… :(

Cheers

Tim…


Create #em12c users fast and easy!

DBASolved - Thu, 2015-06-18 11:57

Over the last few months, I’ve been working on a project where I’ve started to dive into EM CLI and the value it brings to cutting down on tasks like creating Enterprise Manager users. Hence the reason for this post.

Note: If you haven’t looked into EM CLI yet, I encourage you to do so. A good starting point is here. Plus there is a whole book written on the topic by some friends and gurus of mine, here.

Creating users in Enterprise Manager 12c is pretty simple as it is. Simply go to Setup -> Security -> Administrators. When you get to this screen, click on either the Create or Create Like buttons.

After clicking Create or Create Like, Enterprise Manager takes you to a five (5) step wizard for creating a user. This wizard allows you to provide details about the user, assign roles, assign target privileges, assign resource privileges and then review what you have done.

Depending on how many users you have to create, this wizard is either a great way of creating users or a slow way of creating them. Using EM CLI, users can be created from the command line very quickly and easily, with no need to use the GUI wizard. :)

The syntax to create a user from the command line is as follows:

emcli create_user
-name="name"
-password="password"
[-type="user_type"]
[-roles="role1;role2;..."]
[-email="email1;email2;..."]
[-privilege="name[;secure-resource-details]]"
[-separator=privilege="sep_string"]
[-subseparator=privilege="subsep_string"]
[-profile="profile_name"]
[-desc="user_description"]
[-expired="true|false"]
[-prevent_change_password="true|false"]
[-department="department_name"]
[-cost_center="cost_center"]
[-line_of_business="line_of_business"]
[-contact="contact"]
[-location="location"]
[-input_file="arg_name:file_path"]

The beautiful part of EM CLI is that it can be used with any scripting language. Since I like to use Perl, I decided to write a simple script that can be used to create a user from the command line using EM CLI.

#!/usr/bin/perl -w
use strict;
use warnings;

# Parameters
my $oem_home_bin = "$ENV{OMS_HOME}/bin";   # assumes the OMS_HOME environment variable points to your OMS installation
my ($username, $passwd, $email) = @ARGV;
my $pwdchange = 'false';

# Program
if (not defined $username or not defined $passwd or not defined $email)
{
    print "\nUsage: perl ./emcli_create_em_user.pl username password email_address\n\n";
    exit;
}

# Log in to EM CLI as sysman (emcli prompts for the password) and sync with the OMS
system($oem_home_bin.'/emcli login -username=sysman');
system($oem_home_bin.'/emcli sync');

# Build and run the create_user verb
my $cmd = 'emcli create_user -name='.$username.' -password='.$passwd.' -email='.$email.' -prevent_change_password='.$pwdchange;
#print $cmd."\n";
system($oem_home_bin.'/'.$cmd);
system($oem_home_bin.'/emcli logout');

Now using this bit of code, I’m able to create users very rapidly using EM CLI with a command like this:

perl ./emcli_create_em_user.pl <username> <password for user> <email address>

Well, I hope this helps others look at and start using EM CLI when managing their EM environments.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: EMCLI, OEM
Categories: DBA Blogs

Select One Choice with Select Items Tag

Andrejus Baranovski - Thu, 2015-06-18 08:26
If you need to implement a select one choice based on alternative data sources (not based on ADF BC), you shouldn't use af:forEach inside the af:selectOneChoice component. Don't be confused by the af:forEach tag: it is supposed to generate multiple components, not to iterate over a collection of objects and render HTML for each item. There could be cases when the choice list data comes back duplicated with the af:forEach tag applied. I would suggest constructing an array of SelectItem objects and returning it to the f:selectItems tag to be rendered.

This is how a proper af:selectOneChoice definition should look. The array of items is rendered in the choice list through the f:selectItems tag:


The Value property for f:selectItems can be entered manually or through the wizard when creating the af:selectOneChoice - it should point to the custom method where the array of SelectItem objects is constructed:


The custom method could read data from any source and construct the array of SelectItem objects. This is the input for the f:selectItems tag, which knows how to render the list of choice list items out of it:


This is how it looks at runtime - the choice list is working fine, no need to use af:forEach:


Download sample application - CustomSelectListApp.zip.

Introducing Rittman Mead’s New User Engagement Service

Rittman Mead Consulting - Thu, 2015-06-18 05:15

Although you’ve been hearing a lot on the blog recently about extending Oracle BI&DW with Hadoop and big data technologies, another initiative Rittman Mead have been working on over the past couple of months is user engagement. All too often developers create BI systems that meet all an organisation’s technical goals, but then fail to get adopted by users for reasons we can’t understand; all we know is that our brilliantly-architected system just doesn’t seem to engage users, and that’s just as fatal to a project as the ETL not working or the underlying technology being out-of-date.

Surveys have shown that adoption rates for new BI systems are often as low as 25%, because we don’t focus enough on the user experience (UX) or user interface, queries run too slowly, the data or reports just aren’t relevant, or the overall experience just isn’t up to the standard that internet users expect now. If you’re a BI manager or a CIO for your organisation, it’s essential that you know whether the BI systems you’re providing for your users are actually being used. It’s quite often the case that a BI system provided by the IT department is thought to be well-used by the end-users, but when you check the usage stats you’ll find that engagement has really fallen off since the initial rollout of the system – and the worst thing is, you probably don’t really know why this is; all you know is that users aren’t happy and they’re now looking to implement some self-service tools that’ll break your data governance model.

To help you improve user engagement for your BI system and increase the return on your investment in Oracle technology, Rittman Mead are developing a User Engagement Service to address these issues, along with a User Engagement Toolkit developed by the likes of Robin Moffatt, Jordan Meyer, Tom Underhill and other OBIEE and UX experts within Rittman Mead. Initial details of the service are on our Rittman Mead User Engagement Service homepage and the Rittman Mead User Engagement Service Datasheet; look out for more information on these services over the coming days and weeks, and if you’re interested in getting involved in our early-adopter program you can sign up using the form below and we’ll get back to you shortly.


Categories: BI & Warehousing

SQL Server 2016 CTP2: Stretch database feature - Part 2

Yann Neuhaus - Thu, 2015-06-18 02:55

In my previous blog SQL Server 2016 CTP2: Stretch database feature - Part 1, I explained how to prepare your instance and your database to enable the Stretch feature for your table.

Now I will show you how to enable the feature for your tables!

Prerequisites

Enabling Stretch Database at the table level requires ALTER permissions on this table.

 

Limitations

For the moment, in SQL Server 2016 CTP2, there are several important limitations which drastically reduce the scope of use of the Stretch Database feature.

These limitations include Table Properties, Data Types, Column Properties, Column Types and Constraints and Indexes.

You can obtain the detailed list with the following Microsoft article: Requirements and limitations for Stretch Database.

 

Enable Stretch for a Table

First, I create a new table to avoid all limitations I explained above. Here is the SQL script:

Use AdventureWorks2014;
CREATE TABLE Stretch_Table
(
    Column1 nvarchar(50),
    Column2 int,
    Column3 nchar(10)
)

If you have followed all the steps in my previous blog, you should be ready to enable the Stretch feature for your table!

As everything has been pre-configured, you just need to enable the feature for the targeted table.

enable_table_for_stretch.png
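
Alternatively, the table-level setting could also be applied from T-SQL. The statement below is a hedged sketch based on the CTP2 syntax; it changed in later builds, so treat it as an assumption and check the documentation for your build:

-- Assumed CTP2 syntax: enable Stretch for the table created above
-- and start migrating its rows to the linked Azure SQL Database
ALTER TABLE dbo.Stretch_Table
    ENABLE REMOTE_DATA_ARCHIVE WITH (MIGRATION_STATE = ON);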

 

If we take a look at the SQL Database server in Azure, we should be able to see the "Stretch_Table" table:

SQL_azure_20150618-090340_1.png

 

Notice that a new column named "batchID" has been added to the original table. It is a non-null bigint which is incremented each time you insert a new row in your table, and it is the PRIMARY KEY of the table.

Moreover, your table still remains visible on your on-premise instance and you can perform your normal work.

 

Next step

Performing a backup of a database with the Stretch feature enabled will not include the data stored in Azure. This means you also need to perform a backup in Azure.

I will detail this part in my next blog.

Please Sell

Floyd Teter - Wed, 2015-06-17 18:12
Oracle's financial results for Q4 of their fiscal year 2015 came out a few minutes ago.  Seems they missed targets on license revenues and earnings per share.  So the stock may be headed for the professional investor's dog house.  I've even read of an analyst or two publishing a "sell" rating on Oracle stock.

Geez, please sell.  Drive the price down.  I can buy some more shares on the cheap and laugh all the way to the bank.  Let me explain.

First, license revenues shrank.  Gee, no kidding?  Oracle is transitioning away from licensed software to cloud and license revenues shrank?  (insert sarcasm tag here) Better dump the stock before the bottom falls out! (end of sarcasm)

Second, Oracle (like every other tech firm recently) was theoretically dinged by exchange rates.  The yuan, peso, ruble, ducat, yen, etc. all fell against the dollar.  But currency rates average out...even over the short run.  Take a snapshot after the Greeks and the European Union work out their issues in a few weeks...regardless of how they work it out, bet that exchange rate issue becomes less of an issue.

Third, the name of the game in measuring success in providing whatever-as-a-service is recurring revenue.  You'll compromise margins on new subscriptions to grow share, then work hard to minimize churn...which maximizes very high margin recurring revenue.  So the telling numbers for Oracle's future as a cloud provider:  subscription revenue growth, recurring revenue growth, and recurring revenue margins.  Which I do believe were the high points in the results.

Fourth, the technical fundamentals...which is really the most important factor...are very good.  Solid products with lots of functionality.  I'm not too concerned about Oracle's financial viability as long as they keep producing great products.

So not only am I not worried, I'm actually pretty enthusiastic about the results and what they really mean.

So please sell...I'd like to increase my minuscule Oracle holdings.  If enough folks sell, I'll be able to do so on the cheap.

The Role of Today's CIO in Business Transformation

WebCenter Team - Wed, 2015-06-17 13:58
CIOs are extremely important to their executives and businesses, and the cloud empowers them to take the lead. The best CIOs in the industry are beginning to grasp that and train their staff to become experts in cloud technologies.

Expectations of business-driven IT are higher than ever before.  Words like agility, mobility, digital, data, cloud and customer centricity are common and constant from Line of Business (LOB) executives like CMOs. Today’s business must move at the pace of – scratch that – ahead of the pace of today’s consumer. Businesses must anticipate customers’ needs to be truly customer-centric organizations.  Innovation through the cloud is a big force driving customer experience and customer centricity.

Kurian believes the CIO will play a pivotal role with analytics, integration, security and privacy, and ultimately business transformation. In this video he states “cloud actually empowers CIOs to take the lead with innovation.” He goes on to say that today’s CIO role is not diminished just different, requiring a new business mindset. The CIO is very much essential to this new digital, customer-centric business revolution.

Watch the video here.

Scripting Languages & Oracle: Blogs to Follow

Christopher Jones - Wed, 2015-06-17 13:58
If you haven't come across them, keep an eye on the blogs of Blaine Carter and Dan McGhan. They will be posting mostly on developing with Python and Javascript, respectively. Blaine & Dan work at Oracle under the leadership of Steven Feuerstein who is well known in the Oracle PL/SQL world for his enthusiasm for application development.

Combining Oracle Big Data Discovery and Oracle Visual Analyzer on BICS

Rittman Mead Consulting - Wed, 2015-06-17 13:11

So now that Oracle Visual Analyzer is out as part of Oracle BI Cloud Service, and Visual Analyzer (VA) is due to ship on-premise as part of OBIEE12c sometime in the next twelve months, several of our customers have asked us if they need both VA and Oracle Big Data Discovery if they’re looking to analyse Hadoop data as part of a BI project. It’s an interesting question so I thought it’d be useful to go through my thoughts on how the two tools work together, when to use one, and when to use the other.

Taking our standard “big data” dataset of website log activity, Twitter mentions and page details from our WordPress blogging software: before Visual Analyzer came along, the two usual ways we’d want to analyze these datasets were either a traditional BI metrics analysis-type scenario, or a data discovery/visualization scenario where we’re more interested in the content of the data rather than precise metrics. My half of the recent BI Forum 2015 Masterclass goes through these two scenarios in detail (presentation slides in PDF format here), and it’s Big Data Discovery that provides the more “Tableau”-type experience with fast point-and-click access to both datasets joined together on their common website page URL details.

Now we have Visual Analyzer though, things get interesting; in my article on Visual Analyzer within BICS I showed a number of data visualisations that look pretty similar to what you’d get with Big Data Discovery, and when we have VA available on-site as part of OBIEE12c we’ll be able to connect it directly to Hadoop via Cloudera Impala, potentially analyzing the whole dataset rather than the (representative) sample that Big Data Discovery loads into its Endeca Server-based engine.

So if the customer is looking to analyze data held in Hadoop and Visual Analyzer is available, where’s the value in Big Data Discovery (BDD)? To my mind there’s three areas where BDD goes beyond what VA can do, or helps you perform tasks that you’ll need to do before you can work with your data in VA:

  • The initial data discovery, preparation and cleansing that you’d otherwise have to do using HiveQL or an ETL tool such as ODI12c
  • Providing you with a high-level overview and landscape of your data, when that’s more important to you at the time than precise counts and drill-down analysis
  • Understanding how data joins together, and how best to use your datasets in terms of metrics, facts, dimensions and so forth

Taking the data preparation and cleansing part first, I’ve covered in several blogs over the past couple of years how tools such as ODI can be used to create formal, industrialized data pipelines to ingest, prepare and then summarise data coming into your Hadoop system, and how you can drop down to languages such as HiveQL, Pig and Spark to code these data transformations yourself. In the case of my webserver log, twitter and page details datasets this work would include standardising URL formats across the three sources, geocoding the IP addresses in the access logs to derive the country and city for site visitors, turning dates and times in different formats into ones that work as Hive timestamps, and so forth. Doing all of this using ODI and/or HiveQL can be a pretty technical task, so this is where BDD comes in useful – even if VA and an OBIEE RPD are the final destination for the data.
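
As a purely illustrative example of the kind of transformation being described (not taken from this post), here is a HiveQL sketch that turns an Apache-style log timestamp string into a proper Hive timestamp, assuming a hypothetical access_logs table with the raw value held in a request_date string column:

-- Hypothetical HiveQL: normalise "dd/MMM/yyyy:HH:mm:ss" strings into a TIMESTAMP
SELECT
    request_date,
    CAST(from_unixtime(unix_timestamp(request_date, 'dd/MMM/yyyy:HH:mm:ss')) AS timestamp) AS request_ts
FROM access_logs
LIMIT 10;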

Datasets that you transform and enrich in Big Data Discovery can be saved back to Hive as new Hive tables, or exported out as files for you to load into Oracle using SQL*Developer, or upload into BICS to use in Visual Analyzer. Where BDD then becomes useful is giving you a quick, easy to use overview of your dataset before you get into the serious business of defining facts, dimensions and aliases against these three Hive tables. The screenshots below show a couple of typical Big Data Discovery Studio data visualisation pages against the webserver logs dataset, and you can see how easy it is to create simple charts, tag clouds and maps against the data you’re working with – the aim being to give you an overview of the data you’re working with, help you understand its contents and “shape”, before moving further down the curation process and applying formal structures to the data.

Where things get harder to do within Big Data Discovery is when more and more formatting, complex joining and “arranging” of the data is required; for example, BDD gives you a lot of flexibility in how you join datasets, but this flexibility can be confusing for end-users if they’re then presented with every possible variation of a three-table join rather than having the data presented to them as simple facts and dimensions. But this is really what we’d expect – if you go back to the logical data architecture I went through in the blog post a while ago about the updated Oracle Information Management Reference Architecture, the trade-off in using schema-on-read data reservoirs is that this data, although quick and cheap to store, requires a lot more work to be done each time you access the data to get “value” from it.

OBIEE, in contrast, makes you define your data structures in-full before you present data to end-users, dividing data in the three datasets into measures (for the fact tables) and attributes (for dimensions) and making it possible to add more dimension lookups (for a date dimension, for Twitter users in this case) and separate the overall set of data into more focused subject areas. Working with the dataset on the on-premise version of OBIEE first, the RPD that I created to present this data in a more formal, dimensional and hierarchical way to users looked like this:

I can leave this RPD connected directly to the underlying Hive and Impala tables if I want to use just Answers and Dashboards, but for the time being I need to export the underlying Hive tables either into CSV files or into an Oracle Database before uploading into Visual Analyzer; come OBIEE12c this should all be seamless. What users are then presented with when they go into Visual Analyzer is something like this:

Notice how the various attributes of interest are grouped into fact and dimension table folders, and there’s a simple means to add calculations, change the visualisation type and swap chart settings around. Note also that the count on the screen is the actual count of records in the full dataset, not the sample that BDD takes in order to provide an overview of values and distribution in the full dataset. Whilst it’s relatively easy to create a line chart, for example, to show tweets per user within BDD, using Visual Analyzer it’s just a case of double-clicking on the relevant measures and attributes on one side of the page, selecting and arranging the visualisation and applying any filters using dialog boxes and value-selectors – all much more familiar and obvious to BI users.

Enrichment done to the data in Big Data Discovery should in most cases be able to come through to Visual Analyzer; for example, I used Big Data Discovery’s text enrichment features to determine the sentiment of blog post titles, tweets and other commentary data, and I could use the latitude and longitude values derived during the visitor IP address geocoding to plot site visitors on a map. Using the sentiment value derived from the post title, tweet contents and other textual data, I can create a chart of our most popular posts mentioned on Twitter and colour the bars to show how positive, or negative, the comments about each post were.

The only thing that Visual Analyzer can’t yet do that would be useful is to include more than one subject area in a project. To analyze the number of tweets and the number of page views for posts in a scatter chart, for example, I currently have to create a separate subject area that includes both sets of facts and dimensions, though I understand VA on BICS will have the ability to include multiple subject areas in a forthcoming release.

So in summary, I’d say that Big Data Discovery, and Visual Analyzer as part of BI Cloud Service, are complementary tools rather than one being able to replace the other in a big data context. I find that Big Data Discovery is a great tool to initially understand, catalog and view at a high level the data sources going into VA, and then to do some user-driven cleaning-up of the data, enhancing it and enriching it before committing it to the formal dimensional model that Visual Analyzer requires.

In its BICS guise there’s the additional step of having to export the Hadoop data out of your Big Data Appliance or other Hadoop cluster and upload it in the form of files using BICS’s data load or the new Data Sync utility, but when VA comes as part of OBIEE12c in the next twelve months you’ll be able to directly connect to the Hadoop cluster using Impala ODBC and analyse the data directly, in-place.

I’ll be covering more on BICS over the next few weeks, including how I got data from Hadoop into BICS using the new Data Sync utility.

Categories: BI & Warehousing

How to learn Oracle Access Manager (OAM) 11gR2

Online Apps DBA - Wed, 2015-06-17 12:57

Identity Management jobs have grown exponentially (especially in the last year) and Oracle is among the Leaders in Gartner’s Magic Quadrant for IAM. In this post I am going to cover how to learn Oracle Access Manager and the things you should learn for Oracle Access Manager (OAM).

What is Oracle Access Manager: If you don’t already know, Oracle Access Manager (OAM) is Oracle’s recommended Single Sign-On (SSO) solution for Web Access Management.

Why should you learn OAM: Single Sign-On and Web Access Management are very important for securing applications. With cloud-based SaaS applications, it is even more important for enterprises to implement federated sign-on (Federation is now part of OAM in the 11gR2 version). Oracle Access Manager (OAM) is also mandatory in Oracle Fusion Applications.

What roles are available for OAM: You can be an OAM Architect, Administrator, Implementor, or Developer.

What should I learn in OAM: For all of the OAM roles, you should have a fair understanding of the architecture, components, and functionality of OAM. If you are an Architect, Administrator or Implementor, then you should also know Installation, Configuration, Integration, High Availability & Disaster Recovery setup. If you are a Developer, then you should be able to write authentication modules, policies, custom login pages etc.

Where can I learn OAM: If you prefer self-learning then you can refer to Oracle’s documentation on OAM, attend an Oracle University course (costs 4200 USD), or attend our Online Live Training on OAM (costs 997 USD) – next batch starts on 4th July (we provide a full money-back guarantee for 7 days).

What topics should I look for in OAM training: To start with, you should learn at minimum

  • Architecture of Oracle Access Manager (OAM)
  • Overview of WebLogic Server and Fusion Middleware
  • Overview of Oracle Identity & Access Management (OAM, OIM, OID, OUD, OAAM, OES..)
  • Installation & Configuration of OAM
  • Install & Configuration of OHS & WebGates
  • Migration of OSSO 10g to OAM 11g
  • Authentication & Authorization policies in OAM
  • Protecting resources using SSO
  • OAM Integration with LDAP Server (OID or OUD)
  • Deploying OAM in High Availability
  • Common Integration Scenarios for OAM
  • Overview of Oracle Identity

 

I am an Oracle Apps DBA/DBA, should I also learn OAM: Yes, you should learn Oracle Access Manager (OAM), as Apps DBAs with OAM experience earn 25-40% more. Single Sign-On is quite common these days and, with Oracle Fusion Applications (OAM is mandatory in Fusion Apps), it is important that you learn Oracle Access Manager (OAM).

I still have some more queries related to OAM: Contact our OAM experts for any query related to OAM training requirements, or post a comment here.

 

Register for our Online Live Training on OAM (costs 997 USD) – the batch starts on 4th July; register early to avoid disappointment as seats are limited (our Oracle Fusion Middleware course was sold out long before the start date).

 

Related Posts for Oracle Access Manager
  1. How to learn Oracle Access Manager (OAM) 11gR2


Categories: APPS Blogs

Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) : My first two installations

Tim Hall - Wed, 2015-06-17 09:55

I’ve done a couple of play installations of EM12c 12.1.0.5, just to get a feel for it. You can see the result of that here.

From an installation perspective, everything was pretty similar to the previous releases. I tried the installation on both OL5 and OL6, in both cases using 12c as the database repository. No dramas there.

A couple of things of note.

  1. The 12c repository template database is a Non-CDB architecture.
  2. The Weblogic installation uses Java6.

Interesting…

The next step is to try some upgrades from EM 12.1.0.4 (on DB 11.2.0.4) to EM 12.1.0.5, which is what I’ll need for my upgrades at work. The testing is quite time consuming and boring, but it’s got to be done before I can unleash this on the company. :)

Cheers

Tim…

PS. Remember to download from edelivery.oracle.com (in a couple of days) for your production installations. Apparently there is a difference to the license agreement.


A natural born tree right in my own backyard!

FeuerThoughts - Wed, 2015-06-17 06:33
As some of my readers may have noticed, I spend a lot of time these days among trees, paying attention to trees, cutting back invasive trees to save native trees, etc.

And one thing that I came to realize is that at least in an area like Chicagoland, humans tightly control the reproduction of trees. 
I live on a lovely tree-lined street. Big trees - 100 ft tall or more. Maples, oaks, ash....but there are no baby trees, except for smallish trees that the city plants when they have to remove diseased trees (such as all the ash trees, under assault from ash borers).
It makes me sad to think of how impoverished my immediate surroundings are, how unnatural. We don't even let trees - majestic living things that make our lives possible, that live through many of our own generations - live out natural life cycles. 
In fact, I have come to accept that trees planted singly along streets to enhance our lives are really just ornaments. If "a man is not an island" then certainly a tree is not a forest. And very few trees live naturally outside of forests of many, many trees.
Well, enough of sadness. Veva and I were sitting on our patio last week, enjoying the (finally) warm weather and our lovely garden (thanks to Veva), when she pointed out something truly wonderful:

Can you see it? We planted the birch trees years ago. They are now 40 feet tall, but nestled in between? A natural born baby birch tree! Can't see it? Here maybe this will help:

I feel so much better now. The (minimal) wildness of our garden (as in: no grass) made it possible for a birch seed to take hold and grow. A tree that humans did not plant and hopefully will allow to grow to maturity.
Humans love to debate things like "Do plants feel?" Of course, it is terribly difficult for us to imagine such a thing - because the way that plants would think and feel would be so different from us. So we will likely never really be able to answer the question.
Which means it would make a lot more sense to err on the side of caution and assume that trees and plants and creatures do feel, do think in their own way, do take joy in life.
And watching this natural born tree grow, it is certainly easy to believe that it is joyful. I sure am.


Categories: Development

Reverse Key

Jonathan Lewis - Wed, 2015-06-17 06:11

A question came up on the OTN database forum recently asking if you could have a partitioned index on a non-partitioned table.

(Aside: I’m not sure whether it would be quicker to read the manuals or try the experiment – either would probably be quicker than posing the question to the forum. As so often happens in these RTFM questions the OP didn’t bother to acknowledge any of the responses)

The answer to the question is yes – you can create a globally partitioned index, though if it uses range partitioning you have to specify a MAXVALUE partition. The interesting thing about the question, though, is that several people tried to guess why it had been asked and then made suggestions based on the most likely guess (and wouldn’t it have been nice to see some response from the OP). The common guess was that there was a performance problem with the high-value block of a sequence-based (or time-based) index – a frequent source of “buffer busy wait” events and other nasty side effects.
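
For reference only (these statements are not part of the original note), here are minimal sketches of the two globally partitioned index forms discussed here, against a hypothetical orders table:

-- Hypothetical demo table
create table orders (order_id number not null);

-- Form 1: globally range-partitioned index - the top partition must be MAXVALUE
create index orders_rng_ix on orders(order_id)
global partition by range (order_id) (
	partition p1   values less than (1000000),
	partition p2   values less than (2000000),
	partition pmax values less than (maxvalue)
);

-- Form 2: globally hash-partitioned index (2^N partitions), the alternative
-- suggested below for the "right-hand" insert problem. You would pick one
-- form or the other, not create both on the same column list.
create index orders_hash_ix on orders(order_id)
global partition by hash (order_id) partitions 8;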

Unfortunately too many people suggested reverse key indexes as a solution to this “right-hand” problem. If you’re licensed for partitioning it’s almost certain that a better option would simply be to use global hash partitioning (with 2^N partitions for some N). Using reverse keys can result in a bigger performance problem than the one you’re trying to avoid – you may end up turning a little time spent on buffer busy waits into a large amount of time spent on db file sequential reads. To demonstrate the issue I’ve created a sample script – and adjusted my buffer cache down to the appropriate scale:

create table t1(
	id	not null
)
nologging
as
with generator as (
	select	--+ materialize
		rownum id 
	from dual 
	connect by 
		rownum <= 1e4
)
select
	1e7 + rownum	id
from
	generator	v1,
	generator	v2
where
	rownum <= 1e7 
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1'
	);
end;
/

alter table t1 add constraint t1_pk primary key(id) 
using index 
	reverse 
	nologging 
;

alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 8';

begin
	for i in 20000001..20010000 loop
		insert into t1 values(i);
	end loop;
end;
/

I’ve created a table with 10,000,000 rows using a sequential value as the primary key, then inserted “the next” 10,000 rows into the table in order. The index occupied about 22,000 blocks, so to make my demonstration show you the type of effect you could get from a busy production system with more tables and many indexes I ran my test with the buffer cache limited to 6,000 blocks – a fair fraction of the total index size. Here’s a small section of the trace file from the test running 10.2.0.3 on an elderly machine:


WAIT #43: nam='db file sequential read' ela= 13238 file#=6 block#=12653 blocks=1 obj#=63623 tim=3271125590
WAIT #43: nam='db file sequential read' ela=  7360 file#=6 block#=12749 blocks=1 obj#=63623 tim=3271133150
WAIT #43: nam='db file sequential read' ela=  5793 file#=6 block#=12844 blocks=1 obj#=63623 tim=3271139110
WAIT #43: nam='db file sequential read' ela=  5672 file#=6 block#=12940 blocks=1 obj#=63623 tim=3271145028
WAIT #43: nam='db file sequential read' ela= 15748 file#=5 block#=13037 blocks=1 obj#=63623 tim=3271160998
WAIT #43: nam='db file sequential read' ela=  8080 file#=5 block#=13133 blocks=1 obj#=63623 tim=3271169314
WAIT #43: nam='db file sequential read' ela=  8706 file#=5 block#=13228 blocks=1 obj#=63623 tim=3271178240
WAIT #43: nam='db file sequential read' ela=  7919 file#=5 block#=13325 blocks=1 obj#=63623 tim=3271186372
WAIT #43: nam='db file sequential read' ela= 15553 file#=6 block#=13549 blocks=1 obj#=63623 tim=3271202115
WAIT #43: nam='db file sequential read' ela=  7044 file#=6 block#=13644 blocks=1 obj#=63623 tim=3271209420
WAIT #43: nam='db file sequential read' ela=  6062 file#=6 block#=13741 blocks=1 obj#=63623 tim=3271215648
WAIT #43: nam='db file sequential read' ela=  6067 file#=6 block#=13837 blocks=1 obj#=63623 tim=3271221887
WAIT #43: nam='db file sequential read' ela= 11516 file#=5 block#=13932 blocks=1 obj#=63623 tim=3271234852
WAIT #43: nam='db file sequential read' ela=  9295 file#=5 block#=14028 blocks=1 obj#=63623 tim=3271244368
WAIT #43: nam='db file sequential read' ela=  9466 file#=5 block#=14125 blocks=1 obj#=63623 tim=3271254002
WAIT #43: nam='db file sequential read' ela=  7704 file#=5 block#=14221 blocks=1 obj#=63623 tim=3271261991
WAIT #43: nam='db file sequential read' ela= 16319 file#=6 block#=14444 blocks=1 obj#=63623 tim=3271278492
WAIT #43: nam='db file sequential read' ela=  7416 file#=6 block#=14541 blocks=1 obj#=63623 tim=3271286129
WAIT #43: nam='db file sequential read' ela=  5748 file#=6 block#=14637 blocks=1 obj#=63623 tim=3271292163
WAIT #43: nam='db file sequential read' ela=  7131 file#=6 block#=14732 blocks=1 obj#=63623 tim=3271299489
WAIT #43: nam='db file sequential read' ela= 16126 file#=5 block#=14829 blocks=1 obj#=63623 tim=3271315883
WAIT #43: nam='db file sequential read' ela=  7746 file#=5 block#=14925 blocks=1 obj#=63623 tim=3271323845
WAIT #43: nam='db file sequential read' ela=  9208 file#=5 block#=15020 blocks=1 obj#=63623 tim=3271333239
WAIT #43: nam='db file sequential read' ela=  7708 file#=5 block#=15116 blocks=1 obj#=63623 tim=3271341141
WAIT #43: nam='db file sequential read' ela= 15484 file#=6 block#=15341 blocks=1 obj#=63623 tim=3271356807
WAIT #43: nam='db file sequential read' ela=  5488 file#=6 block#=15437 blocks=1 obj#=63623 tim=3271362623
WAIT #43: nam='db file sequential read' ela= 10447 file#=6 block#=15532 blocks=1 obj#=63623 tim=3271373342
WAIT #43: nam='db file sequential read' ela= 12565 file#=6 block#=15629 blocks=1 obj#=63623 tim=3271386741
WAIT #43: nam='db file sequential read' ela= 17168 file#=5 block#=15725 blocks=1 obj#=63623 tim=3271404135
WAIT #43: nam='db file sequential read' ela=  7542 file#=5 block#=15820 blocks=1 obj#=63623 tim=3271411882
WAIT #43: nam='db file sequential read' ela=  9400 file#=5 block#=15917 blocks=1 obj#=63623 tim=3271421514
WAIT #43: nam='db file sequential read' ela=  7804 file#=5 block#=16013 blocks=1 obj#=63623 tim=3271429519
WAIT #43: nam='db file sequential read' ela= 14470 file#=6 block#=16237 blocks=1 obj#=63623 tim=3271444168
WAIT #43: nam='db file sequential read' ela=  5788 file#=6 block#=16333 blocks=1 obj#=63623 tim=3271450154
WAIT #43: nam='db file sequential read' ela=  9630 file#=6 block#=16429 blocks=1 obj#=63623 tim=3271460008
WAIT #43: nam='db file sequential read' ela= 10910 file#=6 block#=16525 blocks=1 obj#=63623 tim=3271471174
WAIT #43: nam='db file sequential read' ela= 15683 file#=5 block#=16620 blocks=1 obj#=63623 tim=3271487065
WAIT #43: nam='db file sequential read' ela=  8094 file#=5 block#=16717 blocks=1 obj#=63623 tim=3271495454
WAIT #43: nam='db file sequential read' ela=  6670 file#=5 block#=16813 blocks=1 obj#=63623 tim=3271502293
WAIT #43: nam='db file sequential read' ela=  7852 file#=5 block#=16908 blocks=1 obj#=63623 tim=3271510360
WAIT #43: nam='db file sequential read' ela= 10500 file#=6 block#=17133 blocks=1 obj#=63623 tim=3271521039
WAIT #43: nam='db file sequential read' ela= 11038 file#=6 block#=17229 blocks=1 obj#=63623 tim=3271532275
WAIT #43: nam='db file sequential read' ela= 12432 file#=6 block#=17325 blocks=1 obj#=63623 tim=3271544974
WAIT #43: nam='db file sequential read' ela=  7784 file#=6 block#=17421 blocks=1 obj#=63623 tim=3271553331
WAIT #43: nam='db file sequential read' ela=  7774 file#=5 block#=17517 blocks=1 obj#=63623 tim=3271561346
WAIT #43: nam='db file sequential read' ela=  6583 file#=5 block#=17613 blocks=1 obj#=63623 tim=3271568146
WAIT #43: nam='db file sequential read' ela=  7901 file#=5 block#=17708 blocks=1 obj#=63623 tim=3271576231
WAIT #43: nam='db file sequential read' ela=  6667 file#=5 block#=17805 blocks=1 obj#=63623 tim=3271583259
WAIT #43: nam='db file sequential read' ela=  9427 file#=6 block#=18029 blocks=1 obj#=63623 tim=3271592988
WAIT #43: nam='db file sequential read' ela= 52334 file#=6 block#=18125 blocks=1 obj#=63623 tim=3271646055
WAIT #43: nam='db file sequential read' ela= 50512 file#=6 block#=18221 blocks=1 obj#=63623 tim=3271697284
WAIT #43: nam='db file sequential read' ela= 10095 file#=6 block#=18317 blocks=1 obj#=63623 tim=3271708095

Check the block numbers for this list of single block reads – we’re jumping through the index about 100 blocks at a time to read the next block where an index entry has to go. The jumps are the expected (and designed) effect of reverse key indexes: the fact that the jumps turn into physical disc reads is the (possibly unexpected) side effect. Reversing an index makes adjacent values look very different (by reversing the bytes) and go to different index leaf blocks: the purpose of the exercise is to scatter concurrent similar inserts across multiple blocks, but if you scatter the index entries you need to buffer a lot more of the index to keep the most recently used values in memory. Reversing the index may eliminate buffer busy waits, but it may dramatically increase the time lost on db file sequential reads.

Here’s a short list of interesting statistics from this test – this time running on 11.2.0.4 on a machine with SSDs – comparing the effects of reversing the index with those of not reversing the index – normal index first:


Normal index
------------
CPU used by this session               83
DB time                                97
db block gets                      40,732
physical reads                         51
db block changes                   40,657
redo entries                       20,174
redo size                       5,091,436
undo change vector size         1,649,648

Repeat with reverse key index
-----------------------------
CPU used by this session              115
DB time                               121
db block gets                      40,504
physical reads                     10,006
db block changes                   40,295
redo entries                       19,973
redo size                       4,974,820
undo change vector size         1,639,232

Because of the SSDs there’s little difference in timing between the two sets of data and, in fact, all the other measures of work done are very similar except for the physical reads, and the increase in reads is probably the cause of the extra CPU time thanks to both the LRU manipulation and the interaction with the operating system.

If you want to check the effect of index reversal you can take advantage of the sys_op_lbid() function to sample a little of your data – in my case I’ve queried the last 10,000 rows (values) in the table:


select 
	/*+ 
		cursor_sharing_exact 
		dynamic_sampling(0) 
		no_monitoring 
		no_expand 
		index_ffs(t1,t1_pk) 
		noparallel_index(t1,t1_pk) 
	*/ 
	count (distinct sys_op_lbid( &m_ind_id ,'L',t1.rowid)) as leaf_blocks
from 
	t1
where 
	id between 2e7 + 1 and 2e7 + 1e4
;

The &m_ind_id substitution variable is the object_id of the index (t1_pk in this test, created by the primary key constraint).
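
One way (not shown in the original) to capture that object_id into the substitution variable from SQL*Plus before running the query:

-- Capture the index object_id into &m_ind_id (SQL*Plus)
column object_id new_value m_ind_id
select object_id from user_objects where object_name = 'T1_PK';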

In my case, with an index of 22,300 leaf blocks, my 10,000 consecutive values were scattered over 9,923 leaf blocks. If I want access to “recent data” to be as efficient as possible I need to keep that many blocks of the index cached, compared to the (absolute) worst case for my data of 100 leaf blocks. When you reverse key an index you have to think about how much bigger you have to make your buffer cache to keep the performance constant.


HOWTO: Check if an XMLType View or Table is Hierarchy Enabled

Marco Gralike - Wed, 2015-06-17 03:53
The following simple code snippet demonstrates how you can check if an XMLType view or…

Introducing Formspider 1.9, the Web Application Development Framework for PL/SQL Developers.

Gerger Consulting - Wed, 2015-06-17 02:50
The new version of Formspider is coming out this summer. Join our webinar and find out its new features and how your organization can benefit from them. 

The following topics will be covered during the webinar: 
- New features in Formspider version 1.9 
- Formspider architecture & benefits 
- Introduction to development with Formspider 

You can sign up for the webinar at this link.
Categories: Development

ODA workshop at Arrow ECS

Yann Neuhaus - Wed, 2015-06-17 02:30
On the 16th and 17th of June David Hueber, Georges Grey and myself had the chance to attend the ODA hands on workshop at Arrow ECS. Lead Trainer Ruggero Citton (Oracle ODA Product Development) did the first day with plenty of theory and examples. On the second day we had the opportunity to play on a brand new ODA X5-2:

SQL Server 2014: Analysis, Migrate and Report (AMR) - a powerful In-Memory migration tool

Yann Neuhaus - Wed, 2015-06-17 02:28

An important new functionality of Microsoft SQL Server 2014 is the In-Memory OLTP engine, which enables you to load tables and also stored procedures In-Memory for very fast response times.
The goal is not to load the whole database In-Memory, but just the tables with critical performance requirements and the stored procedures with complex logical calculations.

To identify which tables or stored procedures will give you the best performance gain after migration, Microsoft SQL Server 2014 has introduced a new tool: Analysis, Migrate and Report (AMR).

This tool collects statistics about tables and stored procedures in order to analyze the current workload. It gives you advice on the migration benefits of the different tables and stored procedures, and an overview of the time and work needed to push them In-Memory.

In the following article I will show you how to setup and use this Tool.

Configuration of the Management Data Warehouse

The AMR Tool is built into SQL Server Management Studio.
It consists of:

  • Reports which come from a Management Data Warehouse and give recommendations about tables and Stored procedures which could be migrated to In-Memory OLTP
  • Memory Optimization Advisor which will help you during the migration process of a disk table to a Memory Optimized table
  • Native Compilation Advisor which will help you migrate a Stored Procedure to a Natively Compiled Stored Procedure

The AMR Tool leverages the Management Data Warehouse and the Data Collector, with its new Transaction Performance Collection Sets, for gathering information about workloads.

AMR will analyze the collected data and provide recommendations via reports.
First, we have to configure the Management Data Warehouse.

To start the configuration, open Management Studio, go to Object Explorer, then Management folder, and right-click on Data Collection. Then select Tasks and click on Configure Management Data Warehouse as shown below:

AMR_picture1.png

On the Configure Management Data Warehouse Storage screen, enter the server name and the database name where your Management Data Warehouse will be hosted. The AMR tool will collect, via its collection sets, data from three Dynamic Management Views every fifteen minutes and will save that data in the MDW database. Uploading the data will have minimal performance impact.

If you already have a database, enter its name. If not, click the New button to create a new one.
On the Map Logins and Users page, if needed, you can map a user to administer, read, or write the Management Data Warehouse database.
Verify the Management Data Warehouse settings and proceed with the configuration.
When the configuration of the Management Data Warehouse has been successfully finalized, you should see the following screen:

 AMR_picture2.png

The Management Data Warehouse setup is finished.

Configuration of the Data collection

Take care: the SQL Server Agent has to be started on the instance that will collect the data.
To collect data, we will enable the new Transaction Performance Collection set which is composed of two new collection sets:

  • Stored Procedure Usage Analysis: used to capture statistics about stored procedures which could be migrated to Natively Compiled Stored Procedures
  • Table Usage Analysis: collects information about disk-based tables for a future migration to Memory Optimized tables.

 In order to configure the Data Collection, go to Object Explorer, then Management folder, right-click on Data Collection, select Tasks, and click on Configure Data Collection, as shown below:

AMR_picture3.png

After having skipped the Welcome page, you have to select a server and a database name that will host the Management Data Warehouse.

Now, you need to select the data collector sets. In the wizard, check “Transaction Performance Collection Set” in the list of collection sets. This will collect statistics for transaction performance issues.

If the Management Data Warehouse is located on a different SQL Server instance from the data collector, and the SQL Server Agent is not running under a domain account which has dc_admin permissions on the remote instance, you have to use a SQL Server Agent proxy.

AMR_picture4.png

Once the Data Collection configuration has completed successfully, you will have an enabled Data Collection which gathers information about all user databases.
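
As a quick sanity check (not part of the original walkthrough), you can query the data collector catalog views in msdb to confirm which collection sets exist and whether they are running:

-- List the collection sets and whether they are currently running
SELECT collection_set_id, name, is_running
FROM msdb.dbo.syscollector_collection_sets
ORDER BY collection_set_id;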

Via the SQL Server Agent Jobs folder, you can now see the new collection jobs used to collect data from your workloads, with names like collection_set_N_collection, and the jobs used to populate the new Management Data Warehouse database, with names like collection_set_N_upload.

It is also good to know that the upload jobs run every thirty minutes for Stored Procedure Usage Analysis (job: collection_set_5_upload) and every fifteen minutes for Table Usage Analysis (job: collection_set_6_upload). So if you want to speed up the upload, you can execute these jobs manually.
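
For example, the following T-SQL (mine, not from the original post) starts both upload jobs on demand using the job names listed above:

-- Run the MDW upload jobs immediately instead of waiting for the schedule
EXEC msdb.dbo.sp_start_job N'collection_set_5_upload';  -- Stored Procedure Usage Analysis
EXEC msdb.dbo.sp_start_job N'collection_set_6_upload';  -- Table Usage Analysis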
 

Reports

To access recommendations based on collected information about all user databases on the workload server, you have to right-click on your Management Data Warehouse database, select Reports, then Management Data Warehouse, and finally Transaction Performance Analysis.

In the Transaction Performance Analysis Overview report, you can choose among three reports, depending on what you want or need:

  • Usage analysis for tables
  • Contention analysis for tables
  • Usage analysis for Stored procedures

AMR_picture5.png

Usage analysis for tables

This report, based on table usage, shows you the best candidate tables to be pushed In-Memory.
On the left side of the report, you can select the database and the number of tables you would like to see.
The central part is a chart with two axes:

  • The horizontal axis represents the work needed to migrate a table to In-Memory OLTP, from significant to minimal
  • The vertical axis represents the increasing gain you can expect after moving the table to In-Memory OLTP

The best part of this graph is the top right corner, which shows tables that can be migrated In-Memory easily and will give you the best performance gain.

AMR_picture6.png

When you click a table point on the graph, you will access a more detailed statistics report.

This report shows the access characteristics (lookup statistics, range scan statistics, etc.) and the contention statistics (latch statistics, lock statistics, etc.) of the table concerned, over the period during which the Transaction Performance Collection Set monitored your instance's workload.

AMR_picture7.png

Contention analysis for table

This report is based on table contention instead of usage. It shows you the best candidate tables to be migrated In-Memory.
As before, on the left side of the report you can select the database and the number of tables you would like to see.
The central part is a chart with two axes:

  • The horizontal axis represents the work needed to migrate a table to In-Memory OLTP, from significant to minimal
  • The vertical axis represents the increasing gain you can expect after moving the table to In-Memory OLTP

The best part of this graph is the top right corner, showing tables that can be migrated In-Memory easily and will give you the best performance gain.

AMR_picture8.png

As for the usage analysis report, you can also click a table name on the graph to see the statistics details of the table.

Usage analysis for Stored Procedures

This report contains the top candidate stored procedures for an In-Memory OLTP migration with regard to their usage. It is based on Total CPU Time.

You can also select the database and the number of stored procedures you would like to see.

AMR_picture9.png

If you want to see the usage statistics for a specific stored procedure, you can click on the blue bar. You will then have a more detailed report.

AMR_picture10.png

Now you know which tables and stored procedures will give you the best performance gain after migration to In-Memory OLTP.
AMR provides two advisors which will help you manage the transformation of your disk tables to Memory Optimized tables and of your stored procedures to Natively Compiled Stored Procedures. To learn more about these advisors, please have a look at my blog.

WWDC 2015: Apple Push goes HTTP2 for APNs

Matthias Wessendorf - Wed, 2015-06-17 02:19

Last week was WWDC 2015 and one session got my attention: What’s New in Notifications!

The session is a two-part session, focusing on iOS notifications (local/remote) and new features, like text reply, but the most interesting part for me was the second half, which announced some coming APNs changes!

The big news is that Apple will provide an HTTP/2 API for sending notification requests to APNs.

YAY!

Here is a little summary of more details:

  • request/response (aka stream) for every notification sent (more reliable; e.g. a JSON reason for a ‘bad request’, or 410 if the token is invalid)
    • allows ‘instant’ feedback (no separate feedback service!), since details are on the HTTP/2 response
  • multiplexing: multiple requests (to APNs) on a single connection
  • binary
  • simpler certificate handling: just a single cert! (no separate certs for dev/prod, VoIP etc)
  • 4KB size of payload (for all versions of iOS/OSX), but just on the new HTTP/2 API

The new HTTP/2 API for APNs will be available in “Summer 2015” for the development environment and will be made available for production “later this year”. No exact dates were given.

 

I really like this move, and it means that for our AeroGear UnifiedPush Server we will be busy implementing this new Apple API!


DOAG Database 2015

Yann Neuhaus - Wed, 2015-06-17 00:17

It was my first time at DOAG Datenbank in Dusseldorf.