
Feed aggregator

Instance stats

Jonathan Lewis - Wed, 2015-05-13 12:31


While reading a posting by Martin Bach on a new buffering option for 12c I was prompted to take a look at another of his posts on the instance activity stats, which reminded me that the class column on v$statname is a bit flag, which we can dissect using the bitand() function to pick out the statistics that belong to multiple classes. I’ve got 2 or 3 little scripts that do this: one, for example, picks out all the statistics relating to RAC; another is just a cross-tab of the class values used and their breakdown by class. Originally this latter script used the “diagonal” method of decode() then sum() – but when the 11g pivot() option appeared I rewrote it as an experiment in pivoting.
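A minimal sketch of that first “RAC statistics” script need be no more than the following (bit 32 is the RAC class, as the cross-tab below confirms):

select  name, class
from    v$statname
where   bitand(class, 32) = 32
order by
        name
;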

This is the script as it now stands, with the output from 12.1.0.2




select
        *
from    (
        select
                st.class,
                pwr.class_id,
                case bitand(st.class, pwr.expn)
                        when 0 then to_number(null)
                               else 1
                end     class_flag
        from
                v$statname      st,
                (select
                        level                   class_id,
                        power(2,level - 1)      expn
                from
                        dual
                connect by level <= 8
                )       pwr
        where
                bitand(class,pwr.expn) = pwr.expn
        )
pivot   (
                sum(class_flag)
        for     class_id in (
                        1 as EndUser,
                        2 as Redo,
                        3 as Enqueue,
                        4 as Cache,
                        5 as OS,
                        6 as RAC,
                        7 as SQL,
                        8 as Debug
                )
        )
order by
        class
;


     CLASS    ENDUSER       REDO    ENQUEUE      CACHE         OS        RAC        SQL      DEBUG
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
         1        130
         2                    68
         4                                9
         8                                         151
        16                                                     16
        32                                                                35
        33          3                                                      3
        34                     1                                           1
        40                                          53                    53
        64                                                                          130
        72                                          15                               15
       128                                                                                     565
       192                                                                            2          2

13 rows selected.

The titles given to the columns come from Martin’s blog, but the definitive set is in the Oracle documentation in the reference manual for v$statname. (I’ve changed the first class from “User” to “EndUser” because of a reserved word problem, and I abbreviated the “RAC” class for tidiness.) It’s interesting to note how many of the RAC statistics are also about the Cache layer.
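As a footnote, a sketch of a query that lists just the statistics belonging to more than one class (i.e. with more than one bit set) would be:

select  name, class
from    v$statname
where   bitand(class, class - 1) <> 0
order by
        class, name
;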


Ed Tech World on Notice: Miami U disability discrimination lawsuit could have major effect

Michael Feldstein - Wed, 2015-05-13 11:53

By Phil HillMore Posts (319)

This week the US Department of Justice, citing Title II of ADA, decided to intervene in a private lawsuit filed against Miami University of Ohio regarding disability discrimination based on ed tech usage. Call this a major escalation and just ask the for-profit industry how big an effect DOJ intervention can be. From the complaint:

Miami University uses technologies in its curricular and co-curricular programs, services, and activities that are inaccessible to qualified individuals with disabilities, including current and former students who have vision, hearing, or learning disabilities. Miami University has failed to make these technologies accessible to such individuals and has otherwise failed to ensure that individuals with disabilities can interact with Miami University’s websites and access course assignments, textbooks, and other curricular and co-curricular materials on an equal basis with non-disabled students. These failures have deprived current and former students and others with disabilities a full and equal opportunity to participate in and benefit from all of Miami University’s educational opportunities.

The complaint then calls out the nature of assistive technologies that should be available, including screen readers, Braille display, audio descriptions, captioning, and keyboard navigation. The complaint specifies that Miami U uses many technologies and content that is incompatible with these assistive technologies.

The complaint is very specific about which platforms and tools are incompatible:

  • The main website www.miamioh.edu
  • Vimeo and YouTube
  • Google Docs
  • TurnItIn
  • LearnSmart
  • WebAssign
  • MyStatLab
  • Vista Higher Learning
  • Sapling

Update: It is worth noting the usage of phrase “as implemented by Miami University” in most of these examples.

Despite the complaint listing the last 6 examples as LMS, it is notable that the complaint does not call out the school’s previous LMS (Sakai) nor its current LMS (Canvas). Canvas was selected last year to replace Sakai, and I believe both are in use. Does this mean that Sakai and Canvas pass ADA muster? That’s my guess, but I’m not 100% sure.

The complaint is also quite specific about the Miami U services that are at fault. For example:

When Miami University has converted physical books and documents into digital formats for students who require such conversion because of their disabilities, it has repeatedly failed to do so in a timely manner. And Miami University has repeatedly provided these students with digitally-converted materials that are inaccessible when used with assistive technologies. This has made the books and documents either completely unusable, or very difficult to use, for the students with these disabilities.

Miami University has a policy or practice by which it converts physical texts and documents into electronic formats only if students can prove they purchased (rather than borrowed) the physical texts or documents. Miami University will not convert into digital formats any physical texts or documents from its library collections and it will not seek to obtain from other libraries existing copies of digitally-converted materials. This has rendered many of the materials that Miami University provides throughout its library system and which it makes available to its students unavailable to students who require that materials be converted into digital formats because of a disability.

The complaint also specifies the required use of clickers and content within PowerPoint.

This one seems to be a very big deal by nature of the DOJ intervention and the specifics of multiple technologies and services.

Thanks to Jim Julius for alerting me on this one.

.@PhilOnEdTech have you seen the Miami of Ohio accessibility complaint? This is going to generate shock waves. http://t.co/STA6Rw6nrR

— Jim Julius (@jjulius) May 13, 2015

The post Ed Tech World on Notice: Miami U disability discrimination lawsuit could have major effect appeared first on e-Literate.

Matching SQL Plan Directives and extended stats

Yann Neuhaus - Wed, 2015-05-13 11:33

This year is the year of migration to 12c. Each Oracle version has had its CBO feature that makes it challenging. The most famous was bind variable peeking in 9iR2. Cardinality feedback in 11g also came with some surprises. 12c comes with SPD in any edition, which is accompanied by Adaptive Dynamic Sampling. If you want to know more about them, the next date is in Switzerland: http://www.soug.ch/events/sig-150521-agenda.html

SQL Plan Directives in the USABLE/MISSING_STATS state can create column groups and extended stats on them at the next dbms_stats gathering. When the next usage of the SPD validates that static statistics are sufficient to get good cardinality estimates, the SPD goes into the SUPERSEDED/HAS_STATS state. If an execution still sees misestimates, the state goes to SUPERSEDED/PERMANENT and dynamic sampling will be used forever. Note that a disabled SPD can still trigger the creation of extended statistics, but not the dynamic sampling.
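If you want to see which state your directives are currently in, a query along the following lines (a sketch in the same style as the queries below) picks up the internal state from the notes column:

select  directive_id,
        d.state,
        extract(d.notes,'/spd_note/internal_state/text()').getStringVal() internal_state,
        o.owner,
        o.object_name
from    dba_sql_plan_directives d
        join dba_sql_plan_dir_objects o using (directive_id)
where   o.object_type = 'TABLE'
order by
        o.owner, o.object_name
;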

Query

If you want to match the directives (from DBA_SQL_PLAN_DIRECTIVES) with the extended statistics (from DBA_STAT_EXTENSIONS) there is no direct link. Both list the columns, but not in the same order and not in the same format:

SQL> select extract(notes,'/spd_note/spd_text/text()').getStringVal() from dba_sql_plan_directives where directive_id in ('11620983915867293627','16006171197187894917');

EXTRACT(NOTES,'/SPD_NOTE/SPD_TEXT/TEXT()').GETSTRINGVAL()
--------------------------------------------------------------------------------
{ECJ(STOPSYS.EDGE)[CHILDID, CHILDTYPE, EDGETYPE]}
{EC(STOPSYS.EDGE)[CHILDID, CHILDTYPE, EDGETYPE]}

Those SPDs have been responsible for the creation of the following column group:
SQL> select owner,table_name,extension from dba_stat_extensions where extension_name='SYS_STSDXN5VXXKAWUPN9AEO8$$W$J';

OWNER    TABLE_NA EXTENSION
-------- -------- ------------------------------------------------------------
STOPSYS  EDGE     ("CHILDTYPE","CHILDID","EDGETYPE")

So I've made the following query to match both:

SQL> column owner format a8
SQL> column table_name format a30
SQL> column columns format a40 trunc
SQL> column extension_name format a20
SQL> column internal_state format a9
SQL>
SQL> select * from (
    select owner,table_name,listagg(column_name,',')within group(order by column_name) columns
     , extension_name
    from dba_tab_columns join dba_stat_extensions using(owner,table_name)
    where extension like '%"'||column_name||'"%'
    group by owner,table_name,extension_name
    order by owner,table_name,columns
    ) full outer join (
    select owner,object_name table_name,listagg(subobject_name,',')within group(order by subobject_name) columns
     , directive_id,max(extract(dba_sql_plan_directives.notes,'/spd_note/internal_state/text()').getStringVal()) internal_state
    from dba_sql_plan_dir_objects join dba_sql_plan_directives using(directive_id)
    where object_type='COLUMN' and directive_id in (
        select directive_id
        from dba_sql_plan_dir_objects
        where extract(notes,'/obj_note/equality_predicates_only/text()').getStringVal()='YES'
          and extract(notes,'/obj_note/simple_column_predicates_only/text()').getStringVal()='YES'
        and object_type='TABLE'
    )
    group by owner,object_name,directive_id
    ) using (owner,table_name,columns)
   order by owner,table_name,columns
  ;
This is just a first draft. I'll probably improve it when needed, and your comments on this blog post will help.

Example

Here is an example of the output:

OWNER  TABLE_NAME                COLUMNS             EXTENSION_ DIRECTIVE_ID INTERNAL_
------ ------------------------- ------------------- ---------- ------------ ---------
STE1SY AUTOMANAGE_STATS          TYPE                             1.7943E+18 NEW
STE1SY CHANGELOG                 NODEID,NODETYPE                  2.2440E+18 PERMANENT
...
SYS    AUX_STATS$                SNAME                            9.2865E+17 HAS_STATS
SYS    CDEF$                     OBJ#                             1.7472E+19 HAS_STATS
SYS    COL$                      NAME                             5.6834E+18 HAS_STATS
SYS    DBFS$_MOUNTS              S_MOUNT,S_OWNER     SYS_NC0000
SYS    ICOL$                     OBJ#                             6.1931E+18 HAS_STATS
SYS    METANAMETRANS$            NAME                             1.4285E+19 MISSING_S
SYS    OBJ$                      NAME,SPARE3                      1.4696E+19 NEW
SYS    OBJ$                      OBJ#                             1.6336E+19 HAS_STATS
SYS    OBJ$                      OWNER#                           6.3211E+18 PERMANENT
SYS    OBJ$                      TYPE#                            1.5774E+19 PERMANENT
SYS    PROFILE$                  PROFILE#                         1.7989E+19 HAS_STATS
SYS    SCHEDULER$_JOB            JOB_STATUS          SYS_NC0006
SYS    SCHEDULER$_JOB            NEXT_RUN_DATE       SYS_NC0005
SYS    SCHEDULER$_WINDOW         NEXT_START_DATE     SYS_NC0002
SYS    SYN$                      OBJ#                             1.4900E+19 HAS_STATS
SYS    SYN$                      OWNER                            1.5782E+18 HAS_STATS
SYS    SYSAUTH$                  GRANTEE#                         8.1545E+18 PERMANENT
SYS    TRIGGER$                  BASEOBJECT                       6.0759E+18 HAS_STATS
SYS    USER$                     NAME                             1.1100E+19 HAS_STATS
SYS    WRI$_ADV_EXECUTIONS       TASK_ID                          1.5494E+18 PERMANENT
SYS    WRI$_ADV_FINDINGS         TYPE                             1.4982E+19 HAS_STATS
SYS    WRI$_OPTSTAT_AUX_HISTORY  SAVTIME             SYS_NC0001
SYS    WRI$_OPTSTAT_HISTGRM_HIST SAVTIME             SYS_NC0001

Conclusion

Because SPDs are quite new, I'll conclude with a list of questions:

  • Do you still need extended stats when an SPD is in the PERMANENT state?
  • Do you send developers the list of extended stats for which the SPD is in HAS_STATS, so that they integrate them into their data model? Then, do you drop the SPD when the new version is released, or wait for the retention period?
  • When you disable an SPD and an extended statistic is created, do you re-enable the SPD in order to get it to HAS_STATS?
  • Having too many extended statistics has an overhead during statistics gathering (especially when there are histograms on them), but it helps to get better estimations. Do you think that having a lot of HAS_STATS directives is a good thing or not?
  • Having too many usable (MISSING_STATS or PERMANENT) SPDs has an overhead during optimization (dynamic sampling), but it helps to get better estimations. Do you think that having a lot of PERMANENT directives is a good thing or not?
  • Do you think that only bad data models have a lot of SPDs? Then why is SYS (the oldest data model, optimized at each release) the schema with the most SPDs?
  • Do you keep your SQL Profiles when upgrading, or do you think that SPDs can replace most of them?

Don't ignore them. SQL Plan Directives are a great feature, but you have to manage them.
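As a starting point for that management, here is a minimal sketch (not part of the original post) of the DBMS_SPD calls you might use; the directive_id shown is the one from the earlier example and is obviously specific to that system:

-- disable a directive but keep it for reference
exec dbms_spd.alter_sql_plan_directive(11620983915867293627,'ENABLED','NO');

-- or drop it completely once you are sure it is no longer useful
exec dbms_spd.drop_sql_plan_directive(11620983915867293627);

-- flush pending directives from memory to the SYSAUX repository
exec dbms_spd.flush_sql_plan_directive;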

Monitoring BRM Host Processes using Metric Extension in EM12c

Arun Bavera - Wed, 2015-05-13 10:01

#!/bin/sh
# Report one "process_name | count" line for each BRM process owned by
# the CURRENT_USER OS account ('|' separates the columns for the metric).
export CURRENT_USER=brm
#echo 'PROCESS_NAME'  'COUNT'
for p in dm_oracle cm dm_aq dm_ifw_sync wirelessRealtime.reg
do
  CNT=`ps -ef | grep ${CURRENT_USER} | grep ${p} | grep -v grep | wc -l`
  echo ${p} '|' ${CNT}
done

Categories: Development

Using VSS snapshots with SQL Server - part I

Yann Neuhaus - Wed, 2015-05-13 09:55

 

This is probably a series of blog posts about some thoughts concerning VSS snapshots with database servers. Let’s begin with this first story:

Some time ago, I implemented a backup strategy at one of my customers based on FULL / DIFF and log backups. There were no issues for a long time, but one day my customer called to tell me that for some days the differential backup had not been working anymore, failing with the following error message:

 

Msg 3035, Level 16, State 1, Line 1
Cannot perform a differential backup for database "demo", because a current database backup does not exist. Perform a full database backup by reissuing BACKUP DATABASE, omitting the WITH DIFFERENTIAL option.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

 

After looking at the SQL Server error log I was able to find some characteristic entries:

 

I/O is frozen on database demo. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.

...

I/O was resumed on database demo. No user action is required.

 

Just in case, I asked: have you implemented snapshots of your database server? And effectively, the problem came from the implementation of the Veeam backup software for bare-metal recovery purposes. In fact, after checking out the Veeam backup software user guide, I noticed that my customer had forgotten to switch the transaction log option value to “perform backup only” with the application-aware image processing method.

This is the little detail that makes the difference here. Indeed, in this case the Veeam backup software relies on the VSS framework, and using the “process transaction logs” option doesn’t preserve the chain of full/differential backup files and transaction logs. For those who like internal stuff, a requestor can interact with the VSS writers by specifying some options during the initialization of the backup dialog; the requestor may configure the VSS_BACKUP_TYPE option by using the IVssBackupComponents interface and the SetBackupState method.

In this case, configuring “perform backup only” means that the Veeam backup software tells the SQL writer to use the option VSS_BT_COPY rather than VSS_BT_FULL, in order to preserve the log chain of the databases. Other tools will probably work in the same way, so you will have to check out each related user guide.

Let’s demonstrate the kind of issue you may face in this case.

First let’s perform a full database backup as follows:

 

BACKUP DATABASE demo TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\demo.bak' WITH INIT, STATS = 10;

 

Next, let’s take a snapshot. If you take a look at the SQL Server error log, you will find the related entries concerning the I/O freeze and I/O resume operations for your databases.

Moreover, there is another way to retrieve snapshot events: have a look at the msdb.dbo.backupset table. You can identify a snapshot by referring to the is_snapshot column value.

 

USE msdb;
GO

SELECT
       backup_start_date,
       backup_finish_date,
       database_name,
       backup_set_id,
       type,
       database_backup_lsn,
       differential_base_lsn,
       checkpoint_lsn,
       first_lsn,
       last_lsn,
       is_snapshot
FROM msdb.dbo.backupset
WHERE database_name = 'demo'
ORDER BY backup_start_date DESC;

 

[Screenshot: backup history for the demo database]
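The failing step is the next differential backup. The original capture doesn't show the statement, but it would look something like this (the backup file name is just an assumption):

BACKUP DATABASE demo
TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\demo_diff.bak'
WITH DIFFERENTIAL, STATS = 10;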

 

And this time the backup failed with the following error message:

 

Msg 3035, Level 16, State 1, Line 1
Cannot perform a differential backup for database "demo", because a current database backup does not exist. Perform a full database backup by reissuing BACKUP DATABASE, omitting the WITH DIFFERENTIAL option.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

 

In fact, the differential database backup relies on the last full database backup (most recent database_backup_lsn value) which is a snapshot and a non-valid backup in this case.
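A quick way to verify this (a sketch, not from the original post) is to look at the most recent full backup entry for the database and check whether it is a snapshot:

SELECT TOP (1)
       backup_finish_date,
       database_backup_lsn,
       is_snapshot
FROM msdb.dbo.backupset
WHERE database_name = 'demo'
      AND type = 'D'    -- full database backups only
ORDER BY backup_finish_date DESC;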

Probably the best advice I can give here is to double-check potential conflicts between your existing backup processes and additional mechanisms like VSS snapshots. The good news is that another of my customers that uses the Veeam backup software was aware of this potential issue, but we had to deal with another interesting issue there. I will discuss it in the next blog post dedicated to VSS snapshots.

PeopleTools CPU analysis and supported versions of PeopleTools (update for April 2015 CPU)

PeopleSoft Technology Blog - Wed, 2015-05-13 09:30

Questions often arise on the PeopleTools versions for which Critical Patch Updates have been published, or if a particular PeopleTools version is supported. 

The attached page shows the patch number for PeopleTools versions associated with a particular CPU publication. This information will help you decide which CPU to apply and when to consider upgrading to a more current release.

The link in "CPU Date" goes to the landing page for CPU advisories; the link in the individual date, e.g. Apr-10, goes to the advisory for that date.

The page also shows the CVEs addressed in the CPU, a synopsis of each issue, and the Common Vulnerability Scoring System (CVSS) value.

To find more details on any CVE, simply replace the CVE number in the sample URL below.

http://www.cvedetails.com/cve/CVE-2010-2377

Common Vulnerability Scoring System Version 2 Calculator

http://nvd.nist.gov/cvss.cfm?calculator&adv&version=2

This page shows the components of the CVSS score

Example CVSS response policy http://www.first.org/_assets/cvss/cvss-based-patch-policy.pdf

All the details in this page are available on My Oracle Support and public sites.

The RED column indicates the last patch for a PeopleTools version and effectively the last support date for that version.

Applications Unlimited support does NOT apply to PeopleTools versions.

Expand swap using SSM

Darwin IT - Wed, 2015-05-13 06:24
The main reason that I dug into SSM yeasterday was that I wanted to install Oracle Database 12c.

(Did you know yesterday came from the word 'yeast'? So actually yeasterday: because one used the yeast of the day before to bake the bread of today. Also in Dutch the word for yeast, 'gist', still sounds in the word for yeasterday: 'gisteren'.)

However, I ran into the prerequisite check on the swap space, which was only 2GB because of my default OL7 install, while the Universal Installer required at least 8GB. So I needed to expand it. There are several ways to do that, but since I was into SSM anyway, it was good practice to use it, and it turned out to be very simple. It shows how easy it is to add a new device to a pool and to an existing volume.

So I added a new 8GB disk to my VM (I only need 8GB, but I thought I'd simply add it to the existing 2GB, to be certain to have enough with 10GB).


So, after booting up, verify the existence of the unassigned device (/dev/sdc):
[root@darlin-vce-db ~]# ssm list
--------------------------------------------------------------
Device Free Used Total Pool Mount point
--------------------------------------------------------------
/dev/sda 20.00 GB PARTITIONED
/dev/sda1 500.00 MB /boot
/dev/sda2 40.00 MB 19.47 GB 19.51 GB ol
/dev/sdb 0.00 KB 100.00 GB 100.00 GB pool01
/dev/sdc 8.00 GB
--------------------------------------------------------------
-----------------------------------------------------
Pool Type Devices Free Used Total
-----------------------------------------------------
ol lvm 1 40.00 MB 19.47 GB 19.51 GB
pool01 lvm 1 0.00 KB 100.00 GB 100.00 GB
-----------------------------------------------------
---------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
---------------------------------------------------------------------------
/dev/ol/root ol 17.47 GB xfs 17.46 GB 12.29 GB linear /
/dev/ol/swap ol 2.00 GB linear
/dev/pool01/disk01 pool01 100.00 GB xfs 99.95 GB 99.95 GB linear /u01
/dev/sda1 500.00 MB xfs 496.67 MB 305.97 MB part /boot
---------------------------------------------------------------------------

Then add the device to the 'ol'-pool:
[root@darlin-vce-db ~]# ssm add -p ol /dev/sdc
Physical volume "/dev/sdc" successfully created
Volume group "ol" successfully extended

And verify again:
[root@darlin-vce-db ~]# ssm list
--------------------------------------------------------------
Device Free Used Total Pool Mount point
--------------------------------------------------------------
/dev/sda 20.00 GB PARTITIONED
/dev/sda1 500.00 MB /boot
/dev/sda2 40.00 MB 19.47 GB 19.51 GB ol
/dev/sdb 0.00 KB 100.00 GB 100.00 GB pool01
/dev/sdc 8.00 GB 0.00 KB 8.00 GB ol
--------------------------------------------------------------
----------------------------------------------------
Pool Type Devices Free Used Total
----------------------------------------------------
ol lvm 2 8.04 GB 19.47 GB 27.50 GB
pool01 lvm 1 0.00 KB 100.00 GB 100.00 GB
----------------------------------------------------
---------------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
---------------------------------------------------------------------------------------
/dev/ol/root ol 17.47 GB xfs 17.46 GB 12.29 GB linear /
/dev/ol/swap ol 2.00 GB linear
/dev/pool01/disk01 pool01 100.00 GB xfs 99.95 GB 99.95 GB linear /u01
/dev/sda1 500.00 MB xfs 496.67 MB 305.97 MB part /boot
---------------------------------------------------------------------------------------

Now resize the swap volume:
[root@darlin-vce-db ~]# ssm resize -s+8GB /dev/ol/swap
Size of logical volume ol/swap changed from 2.00 GiB (512 extents) to 10.00 GiB (2560 extents).
Logical volume swap successfully resized

And, again, verify:
[root@darlin-vce-db ~]# ssm list
--------------------------------------------------------------
Device Free Used Total Pool Mount point
--------------------------------------------------------------
/dev/sda 20.00 GB PARTITIONED
/dev/sda1 500.00 MB /boot
/dev/sda2 0.00 KB 19.51 GB 19.51 GB ol
/dev/sdb 0.00 KB 100.00 GB 100.00 GB pool01
/dev/sdc 36.00 MB 7.96 GB 8.00 GB ol
--------------------------------------------------------------
-----------------------------------------------------
Pool Type Devices Free Used Total
-----------------------------------------------------
ol lvm 2 36.00 MB 27.47 GB 27.50 GB
pool01 lvm 1 0.00 KB 100.00 GB 100.00 GB
-----------------------------------------------------
---------------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
---------------------------------------------------------------------------------------
/dev/ol/root ol 17.47 GB xfs 17.46 GB 12.29 GB linear /
/dev/ol/swap ol 10.00 GB linear
/dev/pool01/disk01 pool01 100.00 GB xfs 99.95 GB 99.95 GB linear /u01
/dev/sda1 500.00 MB xfs 496.67 MB 305.97 MB part /boot
---------------------------------------------------------------------------------------

Now check the swap space:
[root@darlin-vce-db ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 2097148 0 -1

Hey, it's still 2GB!

Let's check fstab to get the swap mount-definitions: 
[root@darlin-vce-db ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon May 11 20:20:14 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol-root / xfs defaults 0 0
UUID=7a285d9f-1812-4d72-9bd2-12e50eddc855 /boot xfs defaults 0 0
/dev/mapper/ol-swap swap swap defaults 0 0
/dev/mapper/pool01-disk01 /u01 xfs defaults 0 0


Turn off swap:
[root@darlin-vce-db ~]# swapoff /dev/mapper/ol-swap

And (re-)create new swap:
[root@darlin-vce-db ~]# mkswap -c /dev/mapper/ol-swap
0 bad pages
mkswap: /dev/mapper/ol-swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 10485756 KiB
no label, UUID=843463de-7552-4a73-84a6-761f261d9e9f

Then enable swap again:
[root@darlin-vce-db ~]# swapon /dev/mapper/ol-swap

And check swap again:
[root@darlin-vce-db ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 10485756 0 -1

Yes!!! That did the job. Easy does it...

APEX 5.0: Upgrade to the newest FontAwesome Icon Library

Patrick Wolf - Wed, 2015-05-13 03:39
Oracle APEX 5.0 ships with FontAwesome version 4.2.0 which will automatically be loaded if your application is using the Universal Theme. This makes it super easy to add nice looking icons to your buttons, lists and regions. But how can you integrate the most … Continue reading →
Categories: Development

News SharePoint 2016: new alternative to InfoPath Forms

Yann Neuhaus - Wed, 2015-05-13 01:26

Microsoft announced in January 2015 the end of InfoPath: the 2013 version would be the last one. However, Microsoft has since indicated that the InfoPath 2013 application will work with SharePoint Server 2016.
Following new user needs, Microsoft decided that InfoPath was no longer suited for the job; that's why it won't release a new version but will only propose an alternative.

 

 

 

What is InfoPath Forms?

InfoPath is used to create forms to capture information and save the contents as a file on a PC or on a web server when hosted on SharePoint. InfoPath forms can submit to SharePoint lists and libraries, and submitted instances can be opened from SharePoint using InfoPath Filler or third-party products.

 

 

InfoPath provides several capabilities:

  • Rules
  • Data validation
  • Conditional formatting
  • XPath expressions and functions
  • Connections to external data sources: SQL, Access, SharePoint
  • Coding languages: C#, Visual Basic, JScript, HTML
  • User roles
InfoPath History

Microsoft InfoPath is an application for designing, distributing, filling and submitting electronic forms containing structured data.
Microsoft initially released InfoPath as part of Microsoft Office 2003 family.

VERSION        INCLUDED IN...                                                     RELEASE DATE
InfoPath 2003  Microsoft Office 2003 Professional and Professional Enterprise     November 19, 2003
InfoPath 2007  Microsoft Office 2007 Ultimate, Professional Plus and Enterprise   January 27, 2007
InfoPath 2010  Microsoft Office 2010 Professional Plus; Office 365                July 15, 2010
InfoPath 2013  Microsoft Office 2013 Professional Plus; Office 365                January 29, 2013

 

In other words, an InfoPath form helps you define design, rules, data, connections, and so on.

What will happen with SharePoint 2016? Which alternative?

Because users’ needs have changed: they expect design, deployment and intelligence all integrated across servers, services and clients.
Microsoft would like to offer tools available on mobiles, tablets and PCs, thanks to SharePoint Online, Windows 8 (almost 10), Windows Phone and Office 365.

 

 

Solutions:

Customized forms in SharePoint using the .NET languages: easy to build with Visual Studio, but a developer or a SharePoint developer is needed.

Nintex Forms: users can easily build custom forms and publish them to a SharePoint environment.

 

What is Nintex Forms?

Nintex Forms is a web-based designer that enables forms to be created within SharePoint quickly and easily. Forms can then be consumed on most common mobile devices over the internet, anywhere and at any time. Nintex Forms integrates seamlessly with Nintex Workflow to automate business processes and deliver rich SharePoint applications.

Learn more about nintex: http://www.nintex.com/

CONCLUSION

Let’s see what will be announced, but I think Nintex will find its way as a great alternative to InfoPath:

  • No specific knowledge (HTML or JavaScript) is needed to build forms
  • No client application is needed
  • Nintex is completely web-based
  • Works on mobile devices


Using JVMD with Oracle Utilities Applications - Part 1 Online

Anthony Shorten - Tue, 2015-05-12 17:44

One of the major advantages of the Oracle WebLogic Server Management Pack Enterprise Edition is the JVM Diagnostics (JVMD) engine. This tool allows Java internals from JVMs to be sent to Oracle Enterprise Manager for analysis. It has a lot of advantages:

  • It provides class-level diagnostics for all classes executed, including base and custom classes.
  • It provides end-to-end diagnostics when the engine is deployed with both the application and the database.
  • It has minimal impact on performance, as the engine uses the JVM monitoring APIs in memory.

It is possible to use JVMD with Oracle Utilities Application Framework in a number of ways:

  • It is possible to deploy JVMD agent to the WebLogic servers used for the Online and Web Services tiers.
  • It is possible to deploy the JVMD database agent to the database to capture the code execution against the database.
  • It is possible to use standalone JVMD agent within threadpoolworkers to gather diagnostics for batch.

This article will outline the general process for deploying JVMD on the online servers. The other techniques will be discussed in future articles.

The architecture of JVMD can be summarized as follows:

  • JVMD Manager - A co-ordination and collection node that collates JVM diagnostic information sent by JVMD Agents attached to JVMs. This manager exposes the information to Oracle Enterprise Manager. The Manager can be installed within an OMS or standalone, and multiple JVMD Managers are supported for large networks of agents.
  • JVMD Agents - A small Java-based agent deployed within the JVM it is monitoring; it collects Java diagnostics (primarily from memory, to minimize the performance impact of collection) and sends them to a JVMD Manager. Each agent is hardwired to a particular JVMD Manager. JVMD Agents can be deployed to J2EE containers, standalone JVMs and the database.

The diagram below illustrates this architecture:

Before starting the process, ensure that the Oracle WebLogic Server Management Pack Enterprise Edition is licensed and installed (manually or via Self Update).

  • Install the JVMD Manager - Typically the JVMD Manager is deployed to the OMS Server but can also be deployed standalone and/or multiple JVMD managers can be installed for larger numbers of targets to manage. There is a video from Oracle Learning Library on Youtube explaining how to do this step.
  • Deploy the JVMD Agent to the Oracle WebLogic Server housing the product online, using the Middleware Management function within Oracle Enterprise Manager (the Application Performance Management option). This will add the Agent to your installation. There is a process for deploying the agent automatically to a running WebLogic Server; again, there is a Youtube video describing this technique.

Once the agent is installed, it will start sending diagnostics of the Java code running within that JVM to Oracle Enterprise Manager.

Customers using the Oracle Application Management Pack for Oracle Utilities will see the JVMD link from their Oracle Utilities targets (it is also available from the Oracle WebLogic targets). For example:

JVMD accessible from Oracle Utilities Targets

Once selecting the Java Virtual Machine Pool for the server you will get access to the full diagnostics information.

JVMD Home Page

This includes historical analysis:

JVMD Historical Analysis

JVMD is a useful tool for identifying bottlenecks in code and in the architecture. In future articles I will add database diagnostics and batch diagnostics to get a full end to end picture of diagnostics.

Watch-First Design and Development

Oracle AppsLab - Tue, 2015-05-12 17:36

 

So as you might already know, it has been all about THE Watch these past days.

Launcher Home

Having this new toy on my wrist made me want to explore the possibilities, so I set myself to push my skill boundaries and dove right into WatchKit development. To kick the tires I spent this past weekend doing what I like to call “Noel’s Apple Watch weekend hackathon,” my favorite kind of event, because somehow I always end up as a finalist.

Detail Glance

As the title suggests, I focused on watch-first design (remember mobile-first? That’s so 2014!). My goal was to start with a Watch app as the main feature and not even worry about a mobile companion app. As it stands now, Apple Watch, as well as Android Wear, relies on a “parent” mobile app.

The result of my weekend fun was an app that I simply called “MyFamily”. The idea is to add simple reminders, tasks, goals, etc., for each individual member of my little family (whose names, btw, have been changed). The app includes an Apple Watch “Glance”, which is some sort of widget, or live tile, with very limited dynamic content and interactions.

Having such limited real estate and features really makes you think twice about how you want to present your data. The WatchKit interface objects are limited to a small subset of their parent iOS counterparts. Most of the design layout can be done by grouping labels (WKInterfaceLabel), images (WKInterfaceImage), and a couple of other available interface objects (table, separator, and button).

[Screenshot: Xcode storyboard]

Having no keyboard (thank goodness!), one needs to rely on voice input to enter new data. During my tests the voice recognition worked as advertised. Also during this exercise I finally realized that apps can display a “contextual” menu by “force touching” the screen. I opted to put in a text hint (to delete an item), because even after a couple of weeks of wearing the watch I hadn’t realized this feature was available.

Speech Menu

After creating my Storyboard layouts, it was almost trivial to add data to them. I created custom classes to bind each Interface Controller, overrode the lifecycle events (awakeWithContext, willActivate, didDeactivate), created a “member” object and an “event” object, and finally added data to the tables with something like this:

- (void)setupTable
{
    // Load the family members and size the table, using the "MemberRow" row controller
    _membersData = [Member membersList];
    [tableView setNumberOfRows:_membersData.count withRowType:@"MemberRow"];
    for (NSInteger i = 0; i < tableView.numberOfRows; i++)
    {
        // Bind each Member model object to its row controller's image and labels
        NSObject *row = [tableView rowControllerAtIndex:i];
        Member *member = _membersData[i];
        MemberRow *memberRow = (MemberRow *) row;
        [memberRow.memberImage setImage:[UIImage imageNamed:member.memberImage]];
        [memberRow.memberName setText:member.memberName];
        [memberRow.memberEventCount setText:member.memberEventCount];
    }
}

In conclusion, the WatchKit DX (development experience) is pretty smooth. This is due to the limited and minimalistic set of tools available to you. I suspect I will add more functionality to this app in the future by adding “mobile-second and web-third” design. Oh, and maybe even going “public” and putting it in the App Store.


Photo Proof


Parallel Query

Jonathan Lewis - Tue, 2015-05-12 12:22

According to the Oracle Database VLDB and Partitioning Guide (10g version and 11g version):

A SELECT statement can be executed in parallel only if the following conditions are satisfied:

  • The query includes a parallel hint specification (PARALLEL or PARALLEL_INDEX) or the schema objects referred to in the query have a PARALLEL declaration associated with them.
  • At least one table specified in the query requires one of the following:
    • A full table scan
    • An index range scan spanning multiple partitions
  • No scalar subqueries are in the SELECT list.

Note, particularly, that last restriction. I was looking at a query recently that seemed to be breaking this rule so, after examining the 10053 trace file for a while, I decided that I would construct a simplified model of the client’s query to demonstrate how the manuals can tell you the truth while being completely deceptive or (conversely) be wrong while still giving a perfectly correct impression. So here’s a query, with execution plan, from 11.2.0.4:

select
        /*+ parallel(t1 2) */
        d1.small_vc,
        t1.r1,
        t2.n21,
        t2.v21,
        t3.v31,
        (select max(v1) from ref1 where n1 = t2.n21)    ref_t2,
        (select max(v1) from ref2 where n1 = t1.r1)     ref_t1,
        t1.padding
from
        driver          d1,
        t1, t2, t3
where
        d1.n1 = 1
and     t1.n1 = d1.id
and     t1.n2 = 10
and     t1.n3 = 10
and     t2.id = t1.r2
and     t3.id = t1.r3
;

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |   100 | 15700 |  1340   (3)| 00:00:07 |        |      |            |
|   1 |  SORT AGGREGATE              |          |     1 |    10 |            |          |        |      |            |
|   2 |   TABLE ACCESS BY INDEX ROWID| REF1     |     1 |    10 |     2   (0)| 00:00:01 |        |      |            |
|*  3 |    INDEX UNIQUE SCAN         | R1_PK    |     1 |       |     1   (0)| 00:00:01 |        |      |            |
|   4 |  SORT AGGREGATE              |          |     1 |    10 |            |          |        |      |            |
|   5 |   TABLE ACCESS BY INDEX ROWID| REF2     |     1 |    10 |     2   (0)| 00:00:01 |        |      |            |
|*  6 |    INDEX UNIQUE SCAN         | R2_PK    |     1 |       |     1   (0)| 00:00:01 |        |      |            |
|   7 |  PX COORDINATOR              |          |       |       |            |          |        |      |            |
|   8 |   PX SEND QC (RANDOM)        | :TQ10003 |   100 | 15700 |  1340   (3)| 00:00:07 |  Q1,03 | P->S | QC (RAND)  |
|*  9 |    HASH JOIN                 |          |   100 | 15700 |  1340   (3)| 00:00:07 |  Q1,03 | PCWP |            |
|* 10 |     HASH JOIN                |          |   100 | 14700 |  1317   (3)| 00:00:07 |  Q1,03 | PCWP |            |
|* 11 |      HASH JOIN               |          |   100 | 13300 |  1294   (3)| 00:00:07 |  Q1,03 | PCWP |            |
|  12 |       BUFFER SORT            |          |       |       |            |          |  Q1,03 | PCWC |            |
|  13 |        PX RECEIVE            |          |   100 |  1300 |     4   (0)| 00:00:01 |  Q1,03 | PCWP |            |
|  14 |         PX SEND BROADCAST    | :TQ10000 |   100 |  1300 |     4   (0)| 00:00:01 |        | S->P | BROADCAST  |
|* 15 |          TABLE ACCESS FULL   | DRIVER   |   100 |  1300 |     4   (0)| 00:00:01 |        |      |            |
|  16 |       PX BLOCK ITERATOR      |          |   100 | 12000 |  1290   (3)| 00:00:07 |  Q1,03 | PCWC |            |
|* 17 |        TABLE ACCESS FULL     | T1       |   100 | 12000 |  1290   (3)| 00:00:07 |  Q1,03 | PCWP |            |
|  18 |      BUFFER SORT             |          |       |       |            |          |  Q1,03 | PCWC |            |
|  19 |       PX RECEIVE             |          | 10000 |   136K|    23   (5)| 00:00:01 |  Q1,03 | PCWP |            |
|  20 |        PX SEND BROADCAST     | :TQ10001 | 10000 |   136K|    23   (5)| 00:00:01 |        | S->P | BROADCAST  |
|  21 |         TABLE ACCESS FULL    | T2       | 10000 |   136K|    23   (5)| 00:00:01 |        |      |            |
|  22 |     BUFFER SORT              |          |       |       |            |          |  Q1,03 | PCWC |            |
|  23 |      PX RECEIVE              |          | 10000 |    97K|    23   (5)| 00:00:01 |  Q1,03 | PCWP |            |
|  24 |       PX SEND BROADCAST      | :TQ10002 | 10000 |    97K|    23   (5)| 00:00:01 |        | S->P | BROADCAST  |
|  25 |        TABLE ACCESS FULL     | T3       | 10000 |    97K|    23   (5)| 00:00:01 |        |      |            |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1"=:B1)
   6 - access("N1"=:B1)
   9 - access("T3"."ID"="T1"."R3")
  10 - access("T2"."ID"="T1"."R2")
  11 - access("T1"."N1"="D1"."ID")
  15 - filter("D1"."N1"=1)
  17 - filter("T1"."N2"=10 AND "T1"."N3"=10)

Thanks to my hint the query has been given a parallel execution plan – and a check of v$pq_tqstat after running the query showed that it had run parallel. Note, however, where the PX SEND QC and PX COORDINATOR operations appear – lines 7 and 8, and above those lines we see the two scalar subqueries.

This means we’re running the basic select statement as a parallel query but the query co-ordinator has serialised on the scalar subqueries in the select list. Is the manual “right but deceptive” or “wrong but giving the right impression”? Serialising on (just) the scalar subqueries can have a huge impact on the performance and effectively make the query behave like a serial query even though, technically, the statement has run as a parallel query.

You may recall that an example of this type of behaviour, and its side effects when the scalar subqueries executed independently as parallel queries, showed up some time ago. At the time I said I would follow up with a note about the change in behaviour in 12c; this seems to be an appropriate moment to show the 12c plan(s), first the default:


----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |   100 | 19100 |  1364   (3)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR              |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)        | :TQ10005 |   100 | 19100 |  1364   (3)| 00:00:01 |  Q1,05 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN BUFFERED        |          |   100 | 19100 |  1364   (3)| 00:00:01 |  Q1,05 | PCWP |            |
|*  4 |     HASH JOIN OUTER          |          |   100 | 18100 |  1340   (3)| 00:00:01 |  Q1,05 | PCWP |            |
|*  5 |      HASH JOIN               |          |   100 | 16400 |  1335   (3)| 00:00:01 |  Q1,05 | PCWP |            |
|*  6 |       HASH JOIN OUTER        |          |   100 | 15000 |  1311   (3)| 00:00:01 |  Q1,05 | PCWP |            |
|*  7 |        HASH JOIN             |          |   100 | 13300 |  1306   (3)| 00:00:01 |  Q1,05 | PCWP |            |
|   8 |         PX RECEIVE           |          |   100 |  1300 |     4   (0)| 00:00:01 |  Q1,05 | PCWP |            |
|   9 |          PX SEND BROADCAST   | :TQ10000 |   100 |  1300 |     4   (0)| 00:00:01 |  Q1,00 | S->P | BROADCAST  |
|  10 |           PX SELECTOR        |          |       |       |            |          |  Q1,00 | SCWC |            |
|* 11 |            TABLE ACCESS FULL | DRIVER   |   100 |  1300 |     4   (0)| 00:00:01 |  Q1,00 | SCWP |            |
|  12 |         PX BLOCK ITERATOR    |          |   100 | 12000 |  1302   (3)| 00:00:01 |  Q1,05 | PCWC |            |
|* 13 |          TABLE ACCESS FULL   | T1       |   100 | 12000 |  1302   (3)| 00:00:01 |  Q1,05 | PCWP |            |
|  14 |        PX RECEIVE            |          |  1000 | 17000 |     5  (20)| 00:00:01 |  Q1,05 | PCWP |            |
|  15 |         PX SEND BROADCAST    | :TQ10001 |  1000 | 17000 |     5  (20)| 00:00:01 |  Q1,01 | S->P | BROADCAST  |
|  16 |          PX SELECTOR         |          |       |       |            |          |  Q1,01 | SCWC |            |
|  17 |           VIEW               | VW_SSQ_1 |  1000 | 17000 |     5  (20)| 00:00:01 |  Q1,01 | SCWC |            |
|  18 |            HASH GROUP BY     |          |  1000 | 10000 |     5  (20)| 00:00:01 |  Q1,01 | SCWC |            |
|  19 |             TABLE ACCESS FULL| REF2     |  1000 | 10000 |     4   (0)| 00:00:01 |  Q1,01 | SCWP |            |
|  20 |       PX RECEIVE             |          | 10000 |   136K|    24   (5)| 00:00:01 |  Q1,05 | PCWP |            |
|  21 |        PX SEND BROADCAST     | :TQ10002 | 10000 |   136K|    24   (5)| 00:00:01 |  Q1,02 | S->P | BROADCAST  |
|  22 |         PX SELECTOR          |          |       |       |            |          |  Q1,02 | SCWC |            |
|  23 |          TABLE ACCESS FULL   | T2       | 10000 |   136K|    24   (5)| 00:00:01 |  Q1,02 | SCWP |            |
|  24 |      PX RECEIVE              |          |  1000 | 17000 |     5  (20)| 00:00:01 |  Q1,05 | PCWP |            |
|  25 |       PX SEND BROADCAST      | :TQ10003 |  1000 | 17000 |     5  (20)| 00:00:01 |  Q1,03 | S->P | BROADCAST  |
|  26 |        PX SELECTOR           |          |       |       |            |          |  Q1,03 | SCWC |            |
|  27 |         VIEW                 | VW_SSQ_2 |  1000 | 17000 |     5  (20)| 00:00:01 |  Q1,03 | SCWC |            |
|  28 |          HASH GROUP BY       |          |  1000 | 10000 |     5  (20)| 00:00:01 |  Q1,03 | SCWC |            |
|  29 |           TABLE ACCESS FULL  | REF1     |  1000 | 10000 |     4   (0)| 00:00:01 |  Q1,03 | SCWP |            |
|  30 |     PX RECEIVE               |          | 10000 |    97K|    24   (5)| 00:00:01 |  Q1,05 | PCWP |            |
|  31 |      PX SEND BROADCAST       | :TQ10004 | 10000 |    97K|    24   (5)| 00:00:01 |  Q1,04 | S->P | BROADCAST  |
|  32 |       PX SELECTOR            |          |       |       |            |          |  Q1,04 | SCWC |            |
|  33 |        TABLE ACCESS FULL     | T3       | 10000 |    97K|    24   (5)| 00:00:01 |  Q1,04 | SCWP |            |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T3"."ID"="T1"."R3")
   4 - access("ITEM_2"(+)="T2"."N21")
   5 - access("T2"."ID"="T1"."R2")
   6 - access("ITEM_1"(+)="T1"."R1")
   7 - access("T1"."N1"="D1"."ID")
  11 - filter("D1"."N1"=1)
  13 - filter("T1"."N2"=10 AND "T1"."N3"=10)

The first thing to note is the location of the PX SEND QC and PX COORDINATOR operations – right at the top of the plan: there’s no serialisation at the query coordinator. Then we spot the views at operations 17 and 27 – VW_SSQ_1, VW_SSQ_2 (would SSQ be “scalar subquery”, perhaps?). The optimizer has unnested the scalar subqueries out of the select list into the join. When a scalar subquery in the select list returns no data its value is deemed to be NULL, so the joins (operations 4 and 6) have to be outer joins.

You’ll notice that there are a lot of PX SELECTOR operations – each feeding a PX SEND BROADCAST operation that reports its “IN-OUT” column as S->P (i.e. serial to parallel). Historically a serial to parallel operation started with the query coordinator doing the serial bit, but in 12c the optimizer can dictate that one of the PX slaves should take on that task (see Randolf Geist’s post here). Again, my code to report v$pq_tqstat confirmed this behaviour in a way that we shall see shortly.

This type of unnesting is a feature of 12c and in some cases will be very effective. It is a cost-based decision, though, and the optimizer can make mistakes; fortunately we can control the feature. We could simply set optimizer_features_enable back to 11.2.0.4 (perhaps through the hint) and this would take us back to the original plan, but this isn’t the best option in this case. There is a hidden parameter _optimizer_unnest_scalar_sq enabling the feature, so we could, in principle, disable just that one feature by setting the parameter to false; a more appropriate strategy would simply be to tell the optimizer that it should not unnest the subqueries. In my case I could put the /*+ no_unnest */ hint into both the subqueries, or use the qb_name() hint to give the two subquery blocks names and then use the /*+ no_unnest() */ hint with the “@my_qb_name” format at the top of the main query. Here’s the execution plan I get whether I use the hidden parameter or the /*+ no_unnest */ mechanism:

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |          |       |       |  1554 (100)|          |        |      |            |
|   1 |  PX COORDINATOR                 |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)           | :TQ10003 |   100 | 15700 |  1354   (3)| 00:00:01 |  Q1,03 | P->S | QC (RAND)  |
|   3 |    EXPRESSION EVALUATION        |          |       |       |            |          |  Q1,03 | PCWC |            |
|*  4 |     HASH JOIN                   |          |   100 | 15700 |  1354   (3)| 00:00:01 |  Q1,03 | PCWP |            |
|*  5 |      HASH JOIN                  |          |   100 | 14700 |  1330   (3)| 00:00:01 |  Q1,03 | PCWP |            |
|*  6 |       HASH JOIN                 |          |   100 | 13300 |  1306   (3)| 00:00:01 |  Q1,03 | PCWP |            |
|   7 |        BUFFER SORT              |          |       |       |            |          |  Q1,03 | PCWC |            |
|   8 |         PX RECEIVE              |          |   100 |  1300 |     4   (0)| 00:00:01 |  Q1,03 | PCWP |            |
|   9 |          PX SEND BROADCAST      | :TQ10000 |   100 |  1300 |     4   (0)| 00:00:01 |        | S->P | BROADCAST  |
|* 10 |           TABLE ACCESS FULL     | DRIVER   |   100 |  1300 |     4   (0)| 00:00:01 |        |      |            |
|  11 |        PX BLOCK ITERATOR        |          |   100 | 12000 |  1302   (3)| 00:00:01 |  Q1,03 | PCWC |            |
|* 12 |         TABLE ACCESS FULL       | T1       |   100 | 12000 |  1302   (3)| 00:00:01 |  Q1,03 | PCWP |            |
|  13 |       BUFFER SORT               |          |       |       |            |          |  Q1,03 | PCWC |            |
|  14 |        PX RECEIVE               |          | 10000 |   136K|    24   (5)| 00:00:01 |  Q1,03 | PCWP |            |
|  15 |         PX SEND BROADCAST       | :TQ10001 | 10000 |   136K|    24   (5)| 00:00:01 |        | S->P | BROADCAST  |
|  16 |          TABLE ACCESS FULL      | T2       | 10000 |   136K|    24   (5)| 00:00:01 |        |      |            |
|  17 |      BUFFER SORT                |          |       |       |            |          |  Q1,03 | PCWC |            |
|  18 |       PX RECEIVE                |          | 10000 |    97K|    24   (5)| 00:00:01 |  Q1,03 | PCWP |            |
|  19 |        PX SEND BROADCAST        | :TQ10002 | 10000 |    97K|    24   (5)| 00:00:01 |        | S->P | BROADCAST  |
|  20 |         TABLE ACCESS FULL       | T3       | 10000 |    97K|    24   (5)| 00:00:01 |        |      |            |
|  21 |     SORT AGGREGATE              |          |     1 |    10 |            |          |        |      |            |
|  22 |      TABLE ACCESS BY INDEX ROWID| REF1     |     1 |    10 |     2   (0)| 00:00:01 |        |      |            |
|* 23 |       INDEX UNIQUE SCAN         | R1_PK    |     1 |       |     1   (0)| 00:00:01 |        |      |            |
|  24 |     SORT AGGREGATE              |          |     1 |    10 |            |          |        |      |            |
|  25 |      TABLE ACCESS BY INDEX ROWID| REF2     |     1 |    10 |     2   (0)| 00:00:01 |        |      |            |
|* 26 |       INDEX UNIQUE SCAN         | R2_PK    |     1 |       |     1   (0)| 00:00:01 |        |      |            |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T3"."ID"="T1"."R3")
   5 - access("T2"."ID"="T1"."R2")
   6 - access("T1"."N1"="D1"."ID")
  10 - filter("D1"."N1"=1)
  12 - access(:Z>=:Z AND :Z<=:Z)
       filter(("T1"."N2"=10 AND "T1"."N3"=10))
  23 - access("N1"=:B1)
  26 - access("N1"=:B1)

Note particularly that the PX SEND QC and PX COORDINATOR operations are operations 2 and 1, and we have a new operator, EXPRESSION EVALUATION, at operation 3. This has three child operations – the basic select starting at operation 4, and the two scalar subqueries starting at lines 21 and 24. We are operating the scalar subqueries as correlated subqueries, but we don’t leave all the work to the query coordinator – each slave is running its own subqueries before forwarding the final result to the coordinator. There is a little side effect that goes with this change – the “serial to parallel” operations are now, as they always used to be, driven by the query coordinator, and the PX SELECTOR operations have disappeared.
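For reference, here’s a sketch of how the qb_name() / no_unnest() hinting might be arranged for this query (the query block names are ones I’ve invented for the illustration):

select
        /*+
                parallel(t1 2)
                no_unnest(@ssq_ref1)
                no_unnest(@ssq_ref2)
        */
        d1.small_vc,
        t1.r1,
        t2.n21,
        t2.v21,
        t3.v31,
        (select /*+ qb_name(ssq_ref1) */ max(v1) from ref1 where n1 = t2.n21)   ref_t2,
        (select /*+ qb_name(ssq_ref2) */ max(v1) from ref2 where n1 = t1.r1)    ref_t1,
        t1.padding
from
        driver          d1,
        t1, t2, t3
where
        d1.n1 = 1
and     t1.n1 = d1.id
and     t1.n2 = 10
and     t1.n3 = 10
and     t2.id = t1.r2
and     t3.id = t1.r3
;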

And finally

Just to finish off, let’s take a look at the results from v$pq_tqstat in 12.1.0.2. First from the default plan with the PX SELECTOR operations. Remember that this turned into a five table join where two of the “tables” were non-correlated aggregate queries against the reference tables.
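(The reporting script itself isn’t shown in the post; a minimal query against v$pq_tqstat that produces this shape of output would be something like the following.)

select  dfo_number, tq_id, server_type, instance, process,
        num_rows, bytes, waits, timeouts, avg_latency
from    v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, instance, process
;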


DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Producer               1 P002                   200       2428          0          0           0
                                             1 P003                     0         48          0          0           0
                      Consumer               1 P000                   100       1238         59         27           0
                                             1 P001                   100       1238         41         24           0

                    1 Producer               1 P002                  2000      23830          0          0           0
                                             1 P003                     0         48          0          0           0
                      Consumer               1 P000                  1000      11939         57         26           0
                                             1 P001                  1000      11939         41         24           0

                    2 Producer               1 P002                     0         48          0          0           0
                                             1 P003                 20000     339732          0          0           0
                      Consumer               1 P000                 10000     169890         49         22           0
                                             1 P001                 10000     169890         31         21           0

                    3 Producer               1 P002                     0         48          0          0           0
                                             1 P003                  2000      23830          0          0           0
                      Consumer               1 P000                  1000      11939         58         26           0
                                             1 P001                  1000      11939         38         23           0

                    4 Producer               1 P002                     0         48          0          0           0
                                             1 P003                 20000     239986          0          0           0
                      Consumer               1 P000                 10000     120017         50         22           0
                                             1 P001                 10000     120017         34         21           0

                    5 Producer               1 P000                     1        169          0          0           0
                                             1 P001                     1        169          1          0           0
                      Consumer               1 QC                       2        338          3          0           0

As you read down the table queues you can see that in the first five table queues (0 – 4) we seem to operate parallel to parallel, but only one of the two producers (p002 and p003) produces any data at each stage. A more traditional plan would show QC as the single producer in each of these stages.

Now with scalar subquery unnesting blocked – the plan with the three table join and EXPRESSION EVALUATION – we see the more traditional serial to parallel distribution: the producer is QC in all three of the first table queues (the full scan and broadcast of tables t1, t2, and t3).

DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Producer               1 QC                     200       1726          0          0           0
                      Consumer               1 P000                   100       1614         28         15           0
                                             1 P001                   100       1614         34         13           0

                    1 Producer               1 QC                   20000     339732          0          0           0
                      Consumer               1 P000                 10000     169866         19         10           0
                                             1 P001                 10000     169866         25          8           0

                    2 Producer               1 QC                   20000     239986          0          0           0
                      Consumer               1 P000                 10000     119993         23         11           0
                                             1 P001                 10000     119993         31         11           0

                    3 Producer               1 P000                     1        155          1          0           0
                                             1 P001                     1        155          0          0           0
                      Consumer               1 QC                       2        310          3          1           0

It’s an interesting point that this last set of results is identical to the set produced in 11g – you can’t tell from v$pq_tqstat whether the parallel slaves or the query co-ordinator executed the subqueries – you have to look at the output from SQL trace (or similar) and check the individual Rowsource Execution Statistics for the slaves and the co-ordinator to see which process actually ran the subqueries.
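As a reminder (a sketch, not the script used for this article), one way to capture that trace is to enable extended SQL trace in the query co-ordinator's session before running the query; the slaves generally inherit the session-level trace setting:

alter session set tracefile_identifier = 'px_scalar_subq';
execute dbms_monitor.session_trace_enable(waits => true, binds => false)

-- run the parallel query here, then:

execute dbms_monitor.session_trace_disable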

 


Is EFSS an IT or a LoB Concern?

WebCenter Team - Tue, 2015-05-12 07:56

Cloud enterprise file sync and share solutions serve a critical business need. A digital enterprise is an anytime, anyplace workplace requiring that not just employees but the whole authorized ecosystem have access to the right information at any time, from anywhere, on any device. This enables true collaboration in a 24x7 world and empowers the mobile workforce, driving user productivity.

As a result, most organizations have moved away from large email file attachments, FTP server management and on-site content sharing tools and have started instead to leverage cloud file sync and share solutions. We use them in our personal lives to share pictures and videos, so why not in our workplace? The trouble is that enterprise-grade solutions require a bit more due diligence and consideration, since one wrong upload or an unauthorized access to a critical file can have catastrophic implications for the company. And it is not just about security either. When you think about it, enterprises do have different needs from file sync and share solutions than you and I do in our personal lives. Enterprises require such solutions to drive work collaboration and improve mobile productivity. So you are not looking at a file sync and share solution in isolation, as a one-off way to send files, videos or images from point A to point B. Rather, you are looking to spend a significant portion of your work day working on shared materials, collaborating with your colleagues and driving output by completing tasks. Enterprise file sync and share is thus an integral part not just of the way you work but of your work itself. So, rather than a one-off solution, it needs to be a strategic cloud investment.

But if a company is looking to standardize on a corporate-grade cloud file sync and share solution, that makes it an IT solution, right? Well, in my opinion, yes and no. Yes, because IT will need to do its due diligence on the solution and make sure it meets the corporate IT security and governance requirements. But no, because the solution is made to empower lines of business (even the IT team, really). So, right out of the box, it needs to have the features and functionality to be a desirable solution in the first place. If it does not meet the basic criteria of being easy to use and intuitive, with complete mobile support and no maintenance headaches, it has failed the litmus test.

What's your take? Where does the decision for enterprise file sync and share reside? Should an LoB, or a set of them with an immediate need for a solution, initiate the search, or should the decision rest with IT? Can both business and IT align to find a long-term, strategic solution for what is fast becoming a ubiquitous need? We would love to hear from you, as would our executives.

Join our live webcast tomorrow on "Introducing Oracle Documents Cloud Service", scheduled for 10 am PT/1 pm ET, and be a part of this important conversation. We think the conversation is strategic enough to have Oracle CIO Mark Sunday involved, in addition to our senior product, LoB and customer executives. And to make sure we don't have colored lenses on, we will do a sanity check with IDC's Program Vice President, Content and Digital Media Technologies, Melissa Webster as well. This is your opportunity to ask questions of these executives and industry experts. We encourage you to follow the conversation on Twitter with #OracleDOCS. Don't forget to register for tomorrow's live webcast. The conversation has only just begun...


Microsoft Project Management Tools: SharePoint 2013 and Project Server 2013

Yann Neuhaus - Tue, 2015-05-12 06:07
Project Management?

Project management is the process and activity of planning, organizing, motivating, and controlling resources, procedures and protocols to achieve specific goals, whether in scientific or everyday problems.

Why use Project Management?

Using Project Management methods and strategies decreases risk, reduces costs and improves the success rate of your projects!

What are the Business Needs?
  • Rationalize investment
  • Align work with strategic objectives
  • Manage projects and resources effectively
  • Gain visibility and control over projects
Microsoft has three tools for managing projects:
  • Microsoft Project: a powerful and enhanced way to manage a wide range of projects
  • Microsoft Project Server: builds upon Microsoft SharePoint Server – it provides the basis for Project Web Application pages, site administration, and the authentication infrastructure for Project Server users and the Project Workspace.

Standard: keep projects organized and communicate progress with reports
Professional: collaboration with others in order to start, manage and deliver projects. Synchronization with SharePoint 2013 (2016 to be confirmed) enables you to track status from anywhere.
Professional for Office 365: work virtually anywhere - go online with flexible plans that help you quickly and easily sign up for the service that best fits your business needs

  • Microsoft SharePoint: a CMS to share, organize, discover, build and manage projects and data. This is a collaboration tool.

Without any tools, managing a project can become messy, boring and inefficient.

You had better stop… and think about a SOLUTION!



 

 

Imagine two super products: pepperoni and cheese… Both are just waiting for you to make a great pizza!

 

What about Project and SharePoint together?

As with pizza, mixing the two will surely give you a great project management tool!

It will bring success, letting you manage your tasks and timelines easily.

For Management
  • Manage projects as SharePoint task lists
  • Manage projects with FULL CONTROL
For Collaboration
  • Teams connected
  • Easy way to create reports and specified dashboards
SharePoint + Project allows:
  • creating a site dedicated to the project as a central repository for project content
  • importing a SharePoint task list as an Enterprise Project for full Project Server control
  • synchronization and storage locally or in a SharePoint library
The requirements for an installation are:

Project Server 2013
  • Windows Server 2012 or 2008 R2 SP1 (minimum)
  • SQL Server 2012 or 2008 R2 with Service Pack 1
  • SharePoint 2013

SharePoint 2013
  • Windows Server 2012 or 2008 R2 SP1 (minimum)
  • SQL Server 2012 or 2008 R2 with Service Pack 1
CONCLUSION

Great things are done by a series of small things brought together; use SharePoint 2013 and Project Server 2013 TOGETHER to bring success!

Some Oracle Big Data Discovery Tips and Techniques

Rittman Mead Consulting - Tue, 2015-05-12 05:49

I’ve been using Oracle Big Data Discovery for a couple of months now, and one of the sessions I’m delivering at this week’s Rittman Mead BI Forum 2015 in Atlanta is on Big Data Discovery Examples from the Field, co-presented with Tim Vlamis from Vlamis Software. Tim is going to concentrate on a data analysis and visualization example using a financial/trading dataset, and I’m going to look at some of the trickier, or less obvious, aspects of the BDD development process that we’ve come across putting together PoCs and demos for customers. I’ll start with the data ingestion and transformation part of BDD.

There are two basic ways to get data into BDD’s DGraph engine; you can either use the Data Processing CLI command-line utility to sample, ingest and enrich Hive table data into the DGraph engine, or you can use the web-based data uploader to ingest data from a spreadsheet, text file or similar. For example, to load a Hive table called “bdd_test_tweets” into the DGraph engine using the command line, you’d enter the commands:

[oracle@bigdatalite Middleware]$ cd BDD1.0/dataprocessing/edp_cli
[oracle@bigdatalite edp_cli]$ ./data_processing_CLI -t bdd_test_tweets

Big Data Discovery would then read the Hive table metastore to get the table and column names, datatypes and file location, then spin-up a Spark job to sample, enrich and then load the data into the DGraph engine. If the Hive table has fewer than 1m rows the whole dataset gets loaded in, or the dataset is sampled if the number of Hive rows is greater than 1m. The diagram below shows the basic load, profile and enrich ingestion process.

NewImage

There are a couple of things to bear in mind when you’re loading data into BDD in this way:

  • You can only load Hive tables, not Hive views, as the Spark loading process only works with full table definitions in the Hive metastore
  • If your Hive table uses a SerDe other than the ones that ship with Base CDH5, you’ll need to upload the SerDe into BDD’s EDP JAR file area in HDFS and update some JAR reference files before the import will work, as detailed in Chapter 3 of the Big Data Discovery Data Processing Guide doc
  • If you’ve installed BDD on a laptop or a smaller-than-usual Hadoop setup, you’ll need to make sure the SPARK_EXECUTOR_MEMORY value you set in bdd.conf file when you installed the product can be handled by the Hadoop cluster – by default SPARK_EXECUTOR_MEMORY is set to 48G for the install, but on my single laptop install I set it to 2G (after having first installed BDD, the data ingestion process didn’t work, and then I had to reinstall it with SPARK_EXECUTOR_MEMORY = 2G as the new setting)
  • If you installed an early copy of BDD you might also need to change the OLT_HOME value in the /localdisk/Oracle/Middleware/user_projects/domains/bdd_domain/bin/setDomainEnv.sh file so that OLT_HOME=”/opt/bdd/olt” instead reads OLT_HOME=”/opt/bdd/edp/olt” – recent updates to the install files and installer correct this problem, but if it’s set wrong then the noun extraction part of the ingestion process won’t work either from the CLI, or from the BDD Studio Transformation screen
  • There’s also no current way to refresh or reload a BDD DGraph dataset, apart from deleting it from BDD and then re-importing it. Hopefully this, and the lack of Kerberos support, will be addressed in the next release

Another thing you might want to consider when providing datasets for use with BDD is whether you leave quotes around the column values, and whether you pre-strip out HTML tags from any text. Take for example the text file below, stored in CSV-delimited format:

NewImage

The file contains three comma-separated fields per line; one with the IP address of the requesting user, the others with the page title and page content snippet, all three fields having quotes around their values due to the presence of commas in the content text. Loading this data into Hive using the Hue web interface gives us a table with quotes around all of the fields, as Hue (in this release) doesn’t strip quotes from CSV fields.

NewImage

When I ingest this table into BDD using the Data Processing CLI, I’ve got just these three columns, still with the quotes around the fields. I can easily remove the quotes by going into the Transformation screen and using Groovy transforms to strip the first and last characters from the fields, but this is more work for the user and I don’t benefit from the automatic enrichment that BDD can do when performing the initial ingest.
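For illustration only, the sort of string handling involved looks like the plain Groovy sketch below; the closure is hypothetical and is not BDD's exact transform syntax, just the logic you would express in it:

// Sketch: strip a surrounding pair of double quotes from a field value
def stripQuotes = { String s ->
    if (s != null && s.length() >= 2 && s.startsWith('"') && s.endsWith('"')) {
        return s.substring(1, s.length() - 1)
    }
    return s
}

assert stripQuotes('"192.168.0.1"') == '192.168.0.1'
assert stripQuotes('no quotes') == 'no quotes'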

NewImage

If, however, I replace the comma separator with a pipe symbol, and remove the quotes, like this:

NewImage

and therefore use Hue’s ability to use pipe and other separators instead of commas (and quotes), my Hive table looks like this:

NewImage

Now, when we ingest this table into BDD, we get six more derived attributes as the enrichment part of the ingestion process recognises the fields as containing IP addresses, text and so on. Presumably in the future BDD will have an option to ignore quotes around field values, but for now I tend to strip out the quotes and use pipes instead for my BDD ingestion files.

NewImage

Similarly, with Hive tables that contain fields with HTML content you can just load those fields into BDD as-is, and BDD will generally extract nouns and keywords and create derived fields for those. And whilst you can run Groovy transformations to strip out the HTML tags (mostly), you’re then stuck with derived columns that include HTML tag names – img, h2 and so on – in the keywords list. What I tend to do then is re-export the dataset with the content field stripped of the HTML tags, then re-ingest that table so I get a new keyword field with the HTML tags removed. What would be simpler, though, would be to strip out the HTML tags before you load up the Hive table, so you don’t have to do this round-trip to get rid of the HTML tag names from the noun keyword lists that are automatically generated during the ingest enrichment process.
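As a sketch of that pre-cleaning step (the table and column names here are made up for illustration), stripping the tags in Hive before the BDD ingest could look something like this:

-- Sketch only: remove HTML tags in Hive before ingesting the table into BDD
create table bdd_site_content_clean as
select  ip_address,
        page_title,
        regexp_replace(page_content, '<[^>]*>', '') as page_content
from    bdd_site_content;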

Once you’ve got datasets loaded into BDD, something I didn’t quite get the value of when I first used BDD studio was the “scratchpad” feature. To take an example, in the masterclass session I bring in a table of tweets referencing Rittman Mead, and one of the attributes in the resulting BDD dataset is for the first hashtag mentioned in the tweet. I can select this attribute and click on the “Add to Scratchpad” link to add it into the BDD Studio scratchpad, like this:

NewImage

The scratchpad then displays above the list of attributes for that dataset, and by default it shows a horizontal bar chart listing the number of times each hashtag in the dataset is referenced.

NewImage

I could then, should I wish to, use the Refine By button to the left of the chart to filter-down (or “refine” in BDD and Endeca-speak) the chart to include just those tweets by a subset of Twitter users – in this case myself, Robin, Michael, Jerome and Edel.

NewImage

I can also add other attributes to the scratchpad as well – for example, the Twitter handle for the person tweeting – so that we can turn the bar chart into a stacked bar chart with the Twitter handles used to show the breakdown of use of that hashtag by each of our staff.

NewImage

You can also use these Scratchpad visualisations as the start of your main BDD Studio “Discover” dashboards, by pressing the Add to Discover page at the bottom right-hand corner of each visualization. In this way rather than creating your Discover dashboards from scratch each time, you can seed them with some starter graphs and data visualizations right from the dataset attribute views.

NewImage

The last bit I’m going to talk about in the BI Forum session is “dataset views”; by default, each dataset you create within a BDD project has just its own attributes within it, and if you use one of them to create a visualization in the Discovery section, you’ll not be able to use any of the attributes from your other datasets (although the faceted search feature above every BDD Studio page searches all datasets in your project and in the BDD Catalog, just like the Endeca Information Discovery “value searches” that I talked about in this older article). To use attributes from more than one dataset in a BDD Studio visualisation component you need to join them, similar to how you’d join tables in the OBIEE RPD.

To take the Tweets, Page Views and Page Content datasets I use in the BI Forum masterclass, consider a situation where I’d like to list out all of the tweets that reference our website, along with details of the page title, page category and other attributes that I can get from a second dataset that I pull from the website itself. To link these two datasets together I join them in BDD Studio using their common URL attribute (in reality I had to massage the datasets so that both URLs featured a trailing forward-slash (“/“) to make them join properly, but that’s another story).

NewImage

If I then go to the Data Views tab within the Project Settings BDD Studio screen, I can see that two data views have been set up for this join: one (“rm_linked_tweets – linked”) leads on the RM Linked Tweets dataset (the tweets) and returns the 1547 tweets in that first dataset joined to pages in the Site Content dataset, while the “site_content – linked” view starts from the 2229 records in the Site Content dataset and joins those records to the RM Linked Tweets dataset; you can then choose which one you want to use (or “drive off”) when you add components to the Discover dashboard part.

NewImage

Where it gets interesting is when you add a third or fourth dataset to the join. The way you join in the third table affects the number of rows returned by the join; if I join the web server logs dataset (“access_per_post_cat_authors”) to the Site Contents dataset, the resulting three-way join view returns the 2229 rows driven by the entries in the Site Contents dataset, whereas if I join the tweets dataset to the web server logs dataset directly, so that the tweets dataset joins first to the site contents dataset and then separately to the web server logs dataset, like this:

NewImage

then the resulting data view joining all three datasets returns a row count equal to the number of rows in the tweets dataset driving it all.

NewImage

The best way to work this all out in your head is to do what I did, and create a series of datasets with distinct row counts and join characteristics and then just test creating joins and viewing the resulting row count using the Preview button below the list of data views. To make things even more interesting you can choose, in the Discover page properties section, whether to left-outer join, equi-join or full-outer join a “primary” dataset used for the page with any datasets it’s joined with, in our instance determining whether the full set of tweets is filtered by the list of pages they refer to (removing tweets that reference non-existent RM web pages in this example), or whether all tweets are returned regardless.
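The underlying idea is the same as choosing a join type in SQL; as a rough analogy only (the table names below are hypothetical, and this is not BDD syntax):

-- Left outer join: every tweet is returned, even if it matches no page
select t.*, p.page_title
from   tweets t
left outer join site_content p on p.url = t.url;

-- Inner (equi) join: tweets referencing non-existent pages are filtered out
select t.*, p.page_title
from   tweets t
join   site_content p on p.url = t.url;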

NewImage

It’s actually pretty powerful stuff, and you should avoid the temptation to pre-create all your joined datasets in Hive beforehand rather than use BDD Studio joins; once you get your head around the concept it’s a flexible and straightforward way to join your datasets up in whatever way makes sense for your analysis – leading off website hits for one type of analysis, and off pages referenced in tweets for the other, allowing you to easily swap what would be the fact and dimension tables in a regular relational database report.

That’s it for now though; it’s the day before the Atlanta Rittman Mead BI Forum and I need to get myself to the venue and get set up for tomorrow’s masterclass with Jordan Meyer. To those of you who are coming along to Atlanta, I look forward to seeing you; otherwise normal service will be resumed next week.

Categories: BI & Warehousing

LVM with SSM on OL7

Darwin IT - Tue, 2015-05-12 02:53
Or how to  encrypt your title with acronyms...

Today I wanted to create a VM with an Oracle SOA/BPM Suite 12c installation, since I'm to give a workshop on installing it. I have used Oracle Linux 6 for my installations, and over the last few years I have played around with it quite a lot (for someone who is not a core systems administrator): upgrading all my VMs to the latest update, removing obsolete kernels, adding volumes for installations, and so on. I used Oracle Database 11g, which in the last few months I upgraded to the latest patch set of 11gR2, 11.2.0.4.

I could have made do with an OL6U6 VM with that upgraded 11gR2 database; I upgraded a fairly clean VM only yesterday. But OL7 is in the field already, and DB12c has been for a few years, so I thought I'd try my luck with those.

However, I found that OL7 behaves quite differently from OL6. Gnome is different, and tools like the graphical Logical Volume Manager are absent. Apparently there is no graphical LVM available in OL7 at all; since I'm not the only one who has sought it in vain, I assume it's really not there. By the way, there is a Disks tool, but it only allows you to format a bare disk, not create LVs.

Luckily I found this great article on a new tool from Red Hat: the system storage manager (ssm). Apparently it is open source, since you can find it on sourceforge, and it is available for Oracle Linux as well.

Install ssm

Yep, you need to install it first:
$ sudo yum install system-storage-manager
Or do it as root (I didn't set up sudo for my one-user virtual course environments):
[root@darlin-vce-db ~]# yum install system-storage-manager
By the way: system-config-lvm, the LVM in previous OL's, is apparently deprecated.

List volumes

First list the current devices and volumes using 'ssm list':
[root@darlin-vce-db ~]# ssm list
-----------------------------------------------------------
Device Free Used Total Pool Mount point
-----------------------------------------------------------
/dev/sda 20.00 GB PARTITIONED
/dev/sda1 500.00 MB /boot
/dev/sda2 40.00 MB 19.47 GB 19.51 GB ol
/dev/sdb 100.00 GB
-----------------------------------------------------------
-------------------------------------------------
Pool Type Devices Free Used Total
-------------------------------------------------
ol lvm 1 40.00 MB 19.47 GB 19.51 GB
-------------------------------------------------
-------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
-------------------------------------------------------------------------------
/dev/ol/root ol 17.47 GB xfs 17.46 GB 12.81 GB linear /
/dev/ol/swap ol 2.00 GB linear
/dev/sda1 500.00 MB xfs 496.67 MB 305.97 MB part /boot
-------------------------------------------------------------------------------
As you can see, I added a new disk to my VM, which is listed as /dev/sdb. You can't find it among the volumes yet, because I haven't done anything with it.
 
Add new LV mounted on /u01

In the past, you needed to perform quite a few steps to create a volume: you had to prepare a disk, create a volume group, add a volume to it, assign space to the volume, make a filesystem on it, and mount it. A rough sketch of that classic route is shown below.
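For comparison only, this is roughly what the pre-ssm way of doing it looks like (device, group and volume names are just placeholders):

# Classic LVM workflow, shown only for comparison with ssm
pvcreate /dev/sdb                    # prepare the disk as a physical volume
vgcreate vg01 /dev/sdb               # create a volume group on it
lvcreate -n lv01 -l 100%FREE vg01    # carve out a logical volume using all the space
mkfs.xfs /dev/vg01/lv01              # put a filesystem on it
mkdir -p /u01
mount /dev/vg01/lv01 /u01            # and mount it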

Now, here's where ssm pays off. Let's first create a folder to use as a mount point.
[root@darlin-vce-db ~]# mkdir /u01
I picked up the name of this mountpoint during my Oracle days, with my first steps on Linux. I can't remember what the story or rationale behind 'u01' is, but it works for me and it shows up in the Oracle docs, so I stick with it.
Now, let's create a volume called disk01, on a pool called pool01 with /dev/sdb assigned to it, and let's create the new default filesystem xfs on it. Oh, and my /dev/sdb disk was created with a size of 100GB:
[root@darlin-vce-db ~]# ssm create -s 100GB -n disk01 --fstype xfs -p pool01 /dev/sdb /u01
Not enough space (104853504.0 KB) in the pool 'pool01' to create volume! Adjust (N/y/q) ? Y
Logical volume "disk01" created.
meta-data=/dev/pool01/disk01 isize=256 agcount=4, agsize=6553344 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=26213376, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=12799, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

Apparently this can all be done in one go. Since the usable space on the disk does not exactly match the 100GB asked for the volume, ssm asked to adjust it.

Now do a list again
[root@darlin-vce-db ~]# ssm list
--------------------------------------------------------------
Device Free Used Total Pool Mount point
--------------------------------------------------------------
/dev/sda 20.00 GB PARTITIONED
/dev/sda1 500.00 MB /boot
/dev/sda2 40.00 MB 19.47 GB 19.51 GB ol
/dev/sdb 0.00 KB 100.00 GB 100.00 GB pool01
--------------------------------------------------------------
-----------------------------------------------------
Pool Type Devices Free Used Total
-----------------------------------------------------
ol lvm 1 40.00 MB 19.47 GB 19.51 GB
pool01 lvm 1 0.00 KB 100.00 GB 100.00 GB
-----------------------------------------------------
---------------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
---------------------------------------------------------------------------------------
/dev/ol/root ol 17.47 GB xfs 17.46 GB 12.81 GB linear /
/dev/ol/swap ol 2.00 GB linear
/dev/pool01/disk01 pool01 100.00 GB xfs 99.95 GB 99.95 GB linear /u01
/dev/sda1 500.00 MB xfs 496.67 MB 305.97 MB part /boot
---------------------------------------------------------------------------------------

Here you find that there is now a pool called 'pool01', with a volume  named 'disk01', mounted on /u01.

To list the filesystem on '/u01', issue the command 'df /u01':
[root@darlin-vce-db ~]# df /u01
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/pool01-disk01 104802308 32928 104769380 1% /u01

I want it added to /etc/fstab so that it is auto-mounted. So edit the file as follows:
[root@darlin-vce-db u01]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon May 11 20:20:14 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol-root / xfs defaults 0 0
UUID=7a285d9f-1812-4d72-9bd2-12e50eddc855 /boot xfs defaults 0 0
/dev/mapper/ol-swap swap swap defaults 0 0
/dev/mapper/pool01-disk01 /u01 xfs defaults 0 0


I duplicated the first line, with /dev/mapper/ol-root, to the end of the file, then changed the device name according to the filesystem listing of /u01 above, and the mount point to /u01 of course. A quick sanity check of the new entry is shown below.
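As a quick sanity check (not part of the original steps), you can ask mount to process fstab again and verify the result before a reboot:

[root@darlin-vce-db ~]# mount -a
[root@darlin-vce-db ~]# df -h /u01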
Create group oinstall and add oracle to it

I want to use the new volume for my Oracle installations. So first let's create the group oinstall and add the oracle user to it:
[root@darlin-vce-db u01]# groupadd oinstall
[root@darlin-vce-db u01]# usermod oracle -G oinstall --a
[root@darlin-vce-db u01]# groups oracle
oracle : oracle oinstall
Then add an app folder and make oracle the owner of it:
[root@darlin-vce-db ~]# cd /u01
[root@darlin-vce-db u01]# mkdir app
[root@darlin-vce-db u01]# chown oracle:oinstall app
Conclusion

That wasn't too hard, was it? Following the article mentioned earlier, you can add disks to a volume about as easily. Now, I'll be off to try to install DB12c...

APEX 5.0 - New Features Training

Denes Kubicek - Mon, 2015-05-11 23:42
Our training starts in just under three weeks, so it is time to register. We (Dietmar and Denes) have invited Tobias Arnhold, and he will talk about migration to APEX 5.0. Tobias was one of the few beta testers of APEX 5.0 in Germany and has gained valuable experience with it, which he will share with us.

Our APEX 5.0 topics are numerous and varied. We cover almost everything worth knowing:

1. Page Designer,
2. Universal Theme,
3. Dozens of small but little-known improvements,
4. Interactive Reports,
5. File handling,
6. Deprecated features and known issues,
7. Modal dialogs,
8. Mobile applications
9. Many further topics, such as the impact of the new features on existing components:

   a. Forms and Tabular Forms
   b. Plugins and Dynamic Actions

10. Migration topics



Categories: Development

Variations on 1M rows insert (2): write commit

Yann Neuhaus - Mon, 2015-05-11 22:29

In this blog post, I will try to do the same as my colleagues did for Oracle and for PostgreSQL. As a reminder, we saw in my previous blog post that SQL Server is designed to commit transactions implicitly by default, and that inserting 1M rows in this way may have a huge impact on the transaction log throughput, because each transaction is synchronously committed to the transaction log. This time, we'll look at a variation of the previous test.

Indeed, since SQL Server 2014, it is possible to change this behaviour a little to improve the overall performance of our test by using a feature called delayed transaction durability. This is certainly a performance feature, but you will have to trade durability for performance. As explained here, SQL Server uses a write-ahead logging (WAL) protocol, and using this new feature will temporarily suspend this requirement.

So, let's perform the same test, but this time I will favour overall throughput by using the delayed durability option.

 

alter database demo set delayed_durability = allowed;
go

DECLARE @i INT = 1;

WHILE @i <= 1000000
BEGIN
       begin tran;

       INSERT INTO DEMO
       VALUES (@i,
               CASE ROUND(RAND() * 10, 0)
                    WHEN 1 THEN 'Marc'
                    WHEN 2 THEN 'Bill'
                    WHEN 3 THEN 'George'
                    WHEN 4 THEN 'Eliot'
                    WHEN 5 THEN 'Matt'
                    WHEN 6 THEN 'Trey'
                    ELSE 'Tracy'
               END,
               RAND() * 10000);

       commit tran with (delayed_durability = on);

       SET @i += 1;
END

 

-- 00:00:20 – heap table
-- 00:00:19 – table with clustered index

 

If I refer to my first test results with the implicit commit behaviour, I can indeed notice a big performance improvement (86%). You may also note that I used this option at the transaction level after enabling delayed durability at the database level, but in fact you have other possibilities: depending on your context, you may prefer either to allow or to force this option directly at the database level, as sketched below.
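For illustration, a sketch of the database-level variants (using the same demo database as above):

-- Force delayed durability for every transaction in the database
alter database demo set delayed_durability = forced;
go

-- Revert to fully durable commits
alter database demo set delayed_durability = disabled;
go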

Do I have to enable it? If your business is comfortable trading durability for throughput and this option improves the overall performance, go ahead, but keep in mind that you also have other ways to improve your transaction log throughput before enabling this option (please read Paul Randal's blog posts).

Want to meet our experts from all technologies in one day? Come to our event "In-Memory: boost your IT performance!", where we talk about SQL Server, Oracle and SAP HANA.

 

 

Efficiency

Jonathan Lewis - Mon, 2015-05-11 14:24

Here’s a question to which I don’t know the answer, and for which I don’t think anyone is likely to need the answer; but it might be an entertaining little puzzle for the curious.

Assume you’re doing a full tablescan against a simple heap table of no more than 255 columns (and not using RAC, Exadata, In-memory, etc. etc. etc.), and the query is something like:


select  {columns 200 to 250}
from    t1
where   column_255 = {constant}
;

To test the predicate Oracle has to count its way along each row column by column to find column 255. Will it:

  1. transfer columns 200 to 250 to local memory as it goes, then check column_255 — which seems to be a waste of CPU if the predicate doesn’t evaluate to TRUE
  2. evaluate the predicate, then walk the row again to transfer columns 200 to 250 to local memory if the predicate evaluates to TRUE — which also seems to be a waste of CPU
  3. one or other of the above depending on which columns are needed, how long they are, and the selectivity of the predicate

How would you test your hypothesis ?
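One possible starting point, offered only as a sketch and certainly not the only approach, would be to generate a suitably wide table and then compare the session CPU statistic for queries that do and don't reference the trailing columns:

-- Sketch only: build a 255-column heap table for experiments
declare
        l_sql   varchar2(32767) := 'create table t1 (';
begin
        for i in 1 .. 254 loop
                l_sql := l_sql || 'col' || lpad(to_char(i), 3, '0') || ' number, ';
        end loop;
        l_sql := l_sql || 'column_255 number)';
        execute immediate l_sql;
end;
/

-- Check CPU before and after each variant of the tablescan query
select  sn.name, ms.value
from    v$mystat ms, v$statname sn
where   sn.statistic# = ms.statistic#
and     sn.name = 'CPU used by this session';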


Worth Reading: Use of adjuncts and one challenge of online education

Michael Feldstein - Mon, 2015-05-11 12:36

By Phil HillMore Posts (319)

There is a fascinating essay today at Inside Higher Ed giving an inside, first-person view of being an adjunct professor.

2015 is my 25th year of adjunct teaching. In the fall I will teach my 500th three-credit college course. I have put in many 14- to 16-hour days, with many 70- to 80-hour weeks. My record is 27 courses in one year, although I could not do that now.

I want to share my thoughts on adjunct teaching. I write anonymously to not jeopardize my precarious positions. How typical is my situation?

The whole essay is worth reading, as it gives a great view into the modern university and the implications of using adjuncts. But I want to highlight one paragraph in particular that captures the challenge of understanding online education.

I have taught many online courses. We have tapped about 10 percent of the potential of online courses for teaching. But rather than exploring the untapped 90 percent, the college where I taught online wanted to standardize every course with a template designed by tech people with no input from instructors.

I want to design amazing online courses: courses so intriguing and intuitive and so easy to follow no one would ever need a tutorial. I want to design courses that got students eager to explore new things. Let me be clear, I am not talking about gimmicks and entertainment; I am talking about real learning. Is anyone interested in this?

It is naive to frame the debate over online education as solely, or primarily, an issue of faculty resistance. Yes, there are faculty members who are against online education, but one reason for this resistance is a legitimate concern for the quality of courses. What the essay reminds us is that part of the quality issue arises from structural issues within the university and not from the actual potential of well-designed and well-taught online courses.

David Dickens at Google+ had an interesting comment based on the “tech people” reference that points to the other side of the same coin.

As a tech guy I can tell you, we’d love to have the time and tools to work with motivated adjuncts (or anyone else), but often times we have to put out something that will work for everyone, will scale, and will be complete and tested before the end of the week.

It is endlessly frustrating to know that there is so much more that could be done. After all, we tech folks are completely submerged in our personal lives with much more awesome tech than we can include in these sorts of “products” as we are constrained to publish them.

There is an immense difference between A) the quality of online education and B) the quality of well-designed and well-taught online education, and that is even different than C) the potential of online education. It is a mistake to conflate A), B), and C).

Update: David is on a roll while in discussion with George Station. This clarification builds on the theme of this post.

My point is that IT isn’t the barrier, but rather we are the mask behind which all the real barriers like to hide. We’d love to do more but can’t, and we get put in the position of taking the blows that should be directed towards the underlying issues.

The post Worth Reading: Use of adjuncts and one challenge of online education appeared first on e-Literate.