- Oracle Coherence Community on Java.net | Brian Oliver and Randy Stafford [October 24, 2013]
Brian Oliver (Senior Principal Solutions Architect, Oracle Coherence) and Randy Stafford (Architect At-Large, Oracle Coherence Product Development) discuss the evolution of the Oracle Coherence Community on Java.net and how developers can actively participate in product development through Coherence Community open projects. Visit the Coherence Community at: https://java.net/projects/coherence.
- The Raspberry Pi Java Carputer and Other Wonders | Simon Ritter [February 13, 2014]
Oracle lead Java evangelist Simon Ritter talks about his Raspberry Pi-based Java Carputer IoT project and other topics he presented at QCon London 2014.
- Hot Features in Oracle APEX 5.0 | Joel Kallman [May 14, 2014]
Joel Kallman (Director, Software Development, Oracle) shares key points from his Great Lakes Oracle Conference 2014 session on new features in Oracle APEX 5.0.
Friday Funny from OTN Architect Community Manager Bob Rhubart:
Comedy legend Steve Martin entertains dogs in this 1976 clip from the Carol Burnett Show.
OTN Database Community Home Page - See all tech articles, downloads, etc. related to Oracle Database for DBAs and developers.
JavaOne Blog - JRuby and JVM Languages at JavaOne! In this video interview, Charles shares the JRuby features he presented at the JVM Language Summit. He'll be at JavaOne; read the blog to see all the sessions.
Java Source Blog - IoT: Wearables! Wearables are a subset of the Internet of Things that has gained a lot of attention. Learn More.
I Love Java Facebook - Java Advanced Management Console demo - Watch as Jim Weaver, Java Technology Ambassador at Oracle, walks through a demonstration of the new Java Advanced Management Console (AMC) tool.
OTN Garage Blog - Why Wouldn't Root Be Able to Change a Zone's IP Address in Oracle Solaris 11? - Read and learn the answer.
OTN Garage Facebook - Securing Your Cloud-Based Data Center with Oracle Solaris 11 - Overview of the security precautions a sysadmin needs to take to secure data in a cloud infrastructure, and how to implement them with the security features in Oracle Solaris 11.
Recently I wrote a post about many ed tech products being stuck in pilots without large-scale adoption.
In our consulting work Michael and I often help survey institutions to discover what technologies are being used within courses, and typically the only technologies that are used by a majority of faculty members or in a majority of courses are the following:
- AV presentation in the classroom;
- PowerPoint usage in the classroom (obviously connected with the projectors);
- Learning Management Systems (LMS);
- Digital content at lower level than a full textbook (through open Internet, library, publishers, other faculty, or OER); and
- File sharing applications. [snip]
This stuck process ends up as an ed tech purgatory – with promises and potential of the heaven of full institutional adoption with meaningful results to follow, but also with the peril of either never getting out of purgatory or outright rejection over time.
There are plenty of technologies being piloted but few hitting the mainstream, and adoption within an institution is one of the key indicators to watch.
Each week leading up to OpenWorld we will be publishing various announcements, schedules, tips, etc. related to the event.
The OpenWorld Facebook page.
Oracle Restart to autostart your Oracle database, listener, and services on Linux, from the AMIS Technology Blog.
Removing passwords from Oracle scripts: Wallets and Proxy Users, from the DBA Survival Blog.
Restore datafile from service: A cool #Oracle 12c Feature, from The Oracle Instructor.
A list of excellent past postings on Execution Plans, from Oracle Scratchpad.
Fusion
New Whitepaper: Building a Custom Fusion Application, from Fusion Applications Developer Relations.
Setup GMail as mail provider for SOA Suite 12c – configure SMTP certificate in trust store, from AMIS Technology Blog.
From grassroots – oracle: APEX Printer Friendliness using CSS Media Queries.
Big Data Appliance
Rittman Mead and Oracle Big Data Appliance, from….Rittman Mead, of course.
Setting V$SESSION for a WLS Datasource, from The WebLogic Server Blog.
As Oracle continues its push into cloud-based enterprise applications, we’re seeing some of that fundamental change play out in the partner community, both with partner companies and with the individuals who make up the partners. This is especially true with the technology-based services partners. Companies have merged or faded away. Individuals, many of whom have stellar reputations within the partner and customer communities, have accepted direct employment with Oracle or moved into other technology ecosystems.
Why the big change? Frankly, it’s because the cloud doesn’t leave much room for the traditional offerings of those technology-based services partners. As we shift from on-premise to cloud, the need for those traditional offerings is drying up. Traditional installations, patching, heavy custom development…all those things seem headed the way of the buggy-whip in the enterprise applications space. It’s time to adapt.
As a guy involved in providing services and complementary products myself, I’ve watched this change unfold over the past few years with more than a little self-interest and excitement - hey, ya gotta pay the bills, right? As a result, I’ve identified three adaptation paths for individuals involved with services in the Oracle technology-based ecosystem:
1. Leave. Find another market where your skills transfer well and take the leap. This isn’t a bad option at all, especially if you’ve developed leadership and/or “soft skills”. Believe it or not, there’s a big world outside the Oracle eco-system.
2. Play the long tail. The base of traditional, on-premise work will not disappear overnight. It’s shrinking, granted, but it’s a huge base even while shrinking. I also think there will be a big uptick in small, lightweight projects with traditional footprints that will complement large cloud-based enterprise application systems (for further information, see “Oracle APEX”).
3. Learn new things. Apply your background to build skills in new technologies. If you’re an Oracle DBA or systems administrator (two skill sets that are rapidly merging into one), dig into Oracle Enterprise Manager…and remember that private clouds will continue to flourish with Oracle’s larger customers. If you’re a developer, begin building skills in middle-tier integration - connecting cloud offerings in new and creative ways is very much an in-demand skill. Or get smart with building lightweight complementary applications (ADF, BPM, SOA) - especially mobile (MAF). If you’re a business analyst or functional type, get familiar with Functional Setup Manager and the Oracle Composers. Maybe get a solid understanding of User Experience and how you can apply UX in your job. As a solution architect, I’ve spent a great deal of time learning how the various Oracle Cloud solutions work together from data, integration, business process, and information perspectives…and if I can do it, so can you!
Obviously, my approach has been to explore and learn new things relevant to the market changes. The opportunities I saw for myself consisted of connecting things together and adding value around the edges. It’s been a hoot so far and I’m nowhere near done yet. YMMV.
With Oracle OpenWorld coming up as a huge opportunity to learn new things, it seemed timely to share these thoughts now. So there you have it. My worm’s eye view of how the Oracle partner market (or at least the Oracle technology-based services partner market) is changing. Maybe I nailed it. Maybe I’m all wet. Either way, let me know what you think.
Yes, you read that correctly. PUM is the new demo image. According to MOS Doc ID 1464619.1 "As of HCM 9.2.006, the PeopleSoft Image and PeopleSoft Demo Image have been combined into one PeopleSoft Update Image. You can use the PeopleSoft Update Image for both your patching and demonstration purposes." If you are current on your PUM images, you probably already knew that. If you are like me, however, and haven't downloaded a demo image for a while, then you may have been looking for demo images on the old MOS demo image page.
Since I use the image for prototyping and demonstrations, I turned off SES and reduced the memory requirements to 2048 MB. It is working great at this lower memory setting.
There are a lot of new and great features in the PT 8.54 PUM:
- Attribute based branding,
- REST Query Access Service,
- Mobile Application Platform (MAP), and
- Fluid homepages
Tip: Access the fluid homepages by visiting the URL http://<yourserver>:8000/psc/ps/EMPLOYEE/HRMS/c/NUI_FRAMEWORK.PT_LANDINGPAGE.GBL. For example, if you have a hosts entry mapping your PUM image to the hostname hcmdb.example.com, then use the URL http://hcmdb.example.com:8000/psc/ps/EMPLOYEE/HRMS/c/NUI_FRAMEWORK.PT_LANDINGPAGE.GBL.
How a 60-Year-Old Company Transformed into a Modern Digital Business
Connecting with audiences through a robust online experience across multiple channels and devices is a nonnegotiable requirement in today’s digital world. Companies need a digital platform that helps them create, manage, and integrate processes, content, analytics, and more.
Panduit, a company founded nearly 60 years ago, needed to simplify and modernize its enterprise application and infrastructure to position itself for long-term growth. Learn how it transformed into a digital business using Oracle WebCenter and Oracle Business Process Management.
Join this webcast for an in-depth look at how these Oracle technologies helped Panduit:
- Increase self-service activity on their portal by 75%
- Improve the number and quality of sales leads through increased customer interactions and registrations over web and mobile
- Create multichannel self-service interactions and content-enabled business processes
Register Now. Presented by:
Senior Director, Oracle WebCenter, Oracle BPM and Oracle Social Network Product Management, Oracle
IT Delivery Manager, Panduit
IT Solutions Architect, Panduit
Here’s a script to create a table, with index, and collect stats on it. Once I’ve collected stats I’ve checked the execution plan to discover that a hint has been ignored (for a well-known reason):
create table t2
as
select
	mod(rownum,200)		n1,
	mod(rownum,200)		n2,
	rpad(rownum,180)	v1
from
	all_objects
where
	rownum <= 3000
;

create index t2_i1 on t2(n1);

begin
	dbms_stats.gather_table_stats(
		user,
		't2',
		method_opt => 'for all columns size 1'
	);
end;
/

explain plan for
select	/*+ index(t2) */
	n1
from	t2
where	n2 = 45
;

select * from table(dbms_xplan.display);

----------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost  |
----------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    15 |   120 |    15 |
|*  1 |  TABLE ACCESS FULL| T2   |    15 |   120 |    15 |
----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N2"=45)
Of course we don’t expect the optimizer to use the index because we didn’t declare n1 to be not null, so there may be rows in the table which do not appear in the index. The only option the optimizer has for getting the right answer is to use a full tablescan. So the question is this – how come Oracle will obey the hint in the following SQL statement:
explain plan for
select
	/*+ leading (t2 t1) index(t2) index(t1) use_nl(t1) */
	t2.n1, t1.n2
from
	t2	t2,
	t2	t1
where
	t2.n2 = 45
and	t2.n1 = t1.n1
;

select * from table(dbms_xplan.display);

-------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Rows  | Bytes | Cost  |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |   225 |  3600 |  3248 |
|   1 |  NESTED LOOPS                         |       |   225 |  3600 |  3248 |
|   2 |   NESTED LOOPS                        |       |   225 |  3600 |  3248 |
|*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    15 |   120 |  3008 |
|   4 |     INDEX FULL SCAN                   | T2_I1 |  3000 |       |     8 |
|*  5 |    INDEX RANGE SCAN                   | T2_I1 |    15 |       |     1 |
|   6 |   TABLE ACCESS BY INDEX ROWID         | T2    |    15 |   120 |    16 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N2"=45)
   5 - access("T2"."N1"="T1"."N1")
I ran this on 12.1.0.2, but it does the same on earlier versions.

Update:
This was clearly too easy – posted at 18:04, answered correctly at 18:21. At some point in its evolution the optimizer acquired a rule that allowed it to infer unwritten “is not null” predicates from the join predicate.
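For completeness, here is a sketch (using the t2/t2_i1 objects from the script above) of two ways to give the optimizer the missing null information for the single-table query, either of which should allow it to honour the index hint, since it then knows no row can be absent from the index:

```sql
-- Option 1: declare the column NOT NULL, so every row is known to be in the index
alter table t2 modify n1 not null;

explain plan for
select /*+ index(t2) */ n1 from t2 where n2 = 45;

-- Option 2: leave the table definition alone and write the predicate yourself
explain plan for
select /*+ index(t2) */ n1 from t2 where n2 = 45 and n1 is not null;
```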
Deadlocking in SQL Server can be one of the more time consuming issues to resolve. The script below can reduce the time it takes to gather necessary information and troubleshoot the cause of the deadlocks. Using this script requires your SQL Server version to be 2005 or newer and for Trace Flag 1222 to be enabled to capture the deadlocking information in the error log.
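The script assumes trace flag 1222 is already on. It can be enabled server-wide from a query window (or added as a -T1222 startup parameter so it survives restarts):

```sql
-- Enable deadlock reporting to the error log for all sessions
DBCC TRACEON (1222, -1);

-- Confirm the trace flag is active
DBCC TRACESTATUS (1222);
```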
The first portion of the script collects the data written to the error log and parses it for the information needed. With this data, the script can return many different data points for identifying the root cause of your deadlocks. It begins with a query to return the number of deadlocks in the current error log.
select distinct top 1 deadlockcount from @results order by deadlockcount desc
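All of the queries below read from a table variable named @results, which the downloadable script populates from the error log. As a rough sketch of the idea (the loading and numbering logic here is my own assumption, not the script’s exact code; the column names are taken from the queries that follow):

```sql
declare @results table (
	id            int identity(1,1),
	deadlockcount int,
	logdate       datetime,
	processinfo   varchar(50),
	logtext       varchar(max)
);

-- Read the current SQL Server error log (log 0, log type 1)
insert into @results (logdate, processinfo, logtext)
exec sp_readerrorlog 0, 1;

-- Number each deadlock report by counting 'deadlock-list' markers seen so far
update r
set deadlockcount = (select count(*) from @results r2
                     where r2.id <= r.id and r2.logtext = 'deadlock-list')
from @results r;
```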
The next script will allow you to review all of the deadlock information in the current error log. It will output the raw InputBuffer details, but if the queries running in your environment have extraneous tabs or spaces, you can modify the commented portion to remove them.
select
	deadlockcount,
	logdate,
	processinfo,
	logtext
	--,rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '))) as logtext_cleaned
from @results
order by id
An important piece of information when identifying and resolving deadlocks is the resource locks. This next query returns all of the error log records containing details for the locks associated with deadlocks. In some situations, the object and/or index name may not be included in this output.
select distinct logtext from @results where logtext like '%associatedobjectid%'
In order to find the objects involved in the deadlock occurrences, run the next query with its output set to text. Then copy the output into a new query window, remove the trailing ‘union’, and run it to return the object and index names.
select distinct
	'SELECT OBJECT_NAME(i.object_id) as objectname, i.name as indexname FROM sys.partitions AS p INNER JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id WHERE p.partition_id = '
	+ convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext)))))
	+ ' union '
from @results
where logtext like ' keylock hobtid=%'
union
select distinct
	'SELECT OBJECT_NAME(i.object_id) as objectname, i.name as indexname FROM sys.partitions AS p INNER JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id WHERE p.partition_id = '
	+ convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext)))))
	+ ' union '
from @results
where logtext like ' pagelock fileid=%'
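The REVERSE/SUBSTRING/CHARINDEX expression above simply extracts everything after the last ‘=’ on the log line (the partition_id). The same parse, sketched in Python on a made-up log line of the shape the script matches (the hobt id value here is invented for illustration):

```python
def value_after_last_equals(logtext: str) -> str:
    """Mirror of the T-SQL REVERSE/SUBSTRING/CHARINDEX trick:
    return whatever follows the last '=' on the line."""
    return logtext.rsplit("=", 1)[-1]

# Hypothetical error-log line of the kind matched by the LIKE patterns above
line = "   keylock hobtid=72057594038845440"
print(value_after_last_equals(line))  # prints 72057594038845440
```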
In my experience, situations can arise where there are a large number of deadlocks but only a few queries involved. This portion of the script will return the distinct queries participating in the deadlocks. The commented lines can be modified to remove extra tabs and spaces. To avoid issues caused by the InputBuffer data being on multiple lines, you should cross-reference these results with the results of the next query.
select
	max(deadlockcount) as deadlockcount,
	max(id) as id,
	logtext
	--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '))) as logtext_cleaned
from @results
where logtext not in (
	'deadlock-list',
	' process-list',
	' inputbuf',
	' executionStack',
	' resource-list',
	' owner-list',
	' waiter-list'
)
and logtext not like ' owner id=%'
and logtext not like ' waiter id=%'
and logtext not like ' keylock hobtid=%'
and logtext not like ' pagelock fileid%'
and logtext not like ' deadlock victim=%'
and logtext not like ' process id=%'
and logtext not like ' frame procname%'
group by logtext
order by id asc, deadlockcount asc
This query will return the execution stack and InputBuffer details for each deadlock.
select
	deadlockcount,
	logdate,
	processinfo,
	logtext
	--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '))) as logtext_cleaned
from @executionstack
WHERE logtext not like '%process id=%'
and logtext not like '%executionstack%'
order by id asc
For documentation purposes, this query will return the distinct InputBuffer output for the deadlock victims. If the InputBuffer data is on multiple lines, you should cross-reference these results with the results of the next query.
select
	max(d.deadlockcount) as deadlockcount,
	max(d.executioncount) executioncount,
	max(d.id) as id,
	logtext
	--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(d.logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '))) as logtext_cleaned
from @executionstack d
right join (
	select e.executioncount
	from @results r
	join (
		select deadlockcount, logtext,
			convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext))))) victim
		from @results
		where logtext like ' deadlock victim=%'
	) v on r.deadlockcount = v.deadlockcount
	left join (
		select id, logtext,
			substring(logtext, charindex('=', logtext)+1,50) processidstart,
			substring(substring(logtext, charindex('=', logtext)+1,50),0, charindex(' ', substring(logtext, charindex('=', logtext)+1,50))) processid
		from @results
		where logtext like ' process id=%'
	) p on r.id = p.id
	join @executionstack e on r.id = e.id
	where v.victim = p.processid
) q on d.executioncount = q.executioncount
where d.logtext not like ' process id=%'
and d.logtext <> ' executionStack'
and d.logtext not like ' frame%'
group by logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' ')))
order by id asc, deadlockcount asc, executioncount asc
This query will return the execution stack and InputBuffer details for each victim.
select
	d.deadlockcount,
	d.logdate,
	d.processinfo,
	logtext
	--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(d.logtext,' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '),' ',' '))) as logtext_cleaned
from @executionstack d
right join (
	select e.executioncount
	from @results r
	join (
		select deadlockcount, logtext,
			convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext))))) victim
		from @results
		where logtext like ' deadlock victim=%'
	) v on r.deadlockcount = v.deadlockcount
	left join (
		select id, logtext,
			substring(logtext, charindex('=', logtext)+1,50) processidstart,
			substring(substring(logtext, charindex('=', logtext)+1,50),0, charindex(' ', substring(logtext, charindex('=', logtext)+1,50))) processid
		from @results
		where logtext like ' process id=%'
	) p on r.id = p.id
	join @executionstack e on r.id = e.id
	where v.victim = p.processid
	--order by r.id
) q on d.executioncount = q.executioncount
where d.logtext not like ' process id=%'
and d.logtext <> ' executionStack'
order by d.id asc
The script, which can be downloaded here, includes all of these queries for you to use. Each one is independent, so if you are only interested in the results for a single query, the other sections can be commented out.
Any feedback you have is always appreciated. In my opinion, that is one of the best parts about writing T-SQL! Don’t forget to check back for my next post in which I will be using the AdventureWorks2008R2 database to provide an in-depth deadlock analysis.
The post Identifying Deadlocks Using the SQL Server Error Log appeared first on Remote DBA Experts.
Recently I described the Partial Join Evaluation transformation that appeared last year in 12c. I did it as an introduction to another transformation that appeared a long time ago, in 10.1.0.3, but was not used by default. And even in the latest 12c patchset 1 (aka 12.1.0.2.0) it is still not enabled. But it's there, and you can use it if you set optimizer_features_enabled to 12.1.0.2.1 (that's not a typo!)
Yes, that number looks like the future PSU for the 12c Release 1 patchset 1 that became available recently and has no PSU yet. Lost in the release numbers? No problem. This is only the default value for the _convert_set_to_join parameter, but you can also use the hint to get that transformation, which is available in previous versions as well.
So what does that transformation do? It transforms an INTERSECT or MINUS into a join. When the tables are large but the result is small, the transformation can bring a new access path, avoiding full table scans and deduplication for each branch. And thanks to Partial Join Evaluation, the performance is even better in 12c. Let's look at an example.
SQL*Plus: Release 12.1.0.2.0 Production on Sun Jul 27 22:10:57 2014
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> create table DEMO1(n constraint DEMO1_N primary key) as select rownum n from (select * from dual connect by level <= 1000),(select * from dual connect by level <= 100);
SQL> create table DEMO2(n constraint DEMO2_N primary key) as select rownum n from dual connect by level <= 10;
SQL> alter session set statistics_level=all;
So I have two tables, one with 100000 rows and one with only 10. And I want the rows from DEMO1 which are not in DEMO2:
SQL> alter session set optimizer_features_enable='12.1.0.2';
SQL> select * from DEMO1 intersect select * from DEMO2;
10 rows selected.
Let's have a look at the plan:
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
SQL_ID 9fpg8nyjaqb5f, child number 0
select * from DEMO1 intersect select * from DEMO2
Plan hash value: 4278239763
| Id | Operation | Name | Starts | E-Rows | A-Rows || Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | 10 || |
| 1 | INTERSECTION | | 1 | | 10 || |
| 2 | SORT UNIQUE | | 1 | 100K| 100K|| 4078K (0)|
| 3 | TABLE ACCESS FULL| DEMO1 | 1 | 100K| 100K|| |
| 4 | SORT UNIQUE NOSORT| | 1 | 10 | 10 || |
| 5 | INDEX FULL SCAN | DEMO2_N | 1 | 10 | 10 || |
Query Block Name / Object Alias (identified by operation id):
1 - SET$1
3 - SEL$1 / DEMO1@SEL$1
5 - SEL$2 / DEMO2@SEL$2
This is the expected plan. There is an INTERSECTION operation that implements our INTERSECT. But look: each branch had to be deduplicated (SORT UNIQUE). Note that the SORT UNIQUE NOSORT has a funny name - it's just a SORT UNIQUE that doesn't have to sort because its input comes from an index. Each branch had to read all the rows. Look at the big table: we read 100000 rows and use 4MB of memory to sort them in order to deduplicate them. But it's an intersection, and we have a small table that has only 10 rows. We know that the result cannot be large. So a more efficient way would be to read the small table and, for each row, check whether it is in the big one - through an index access. We still have to deduplicate, but we do that at the end, on the small rowset.
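Conceptually, the conversion rewrites the set operation as a join with deduplication on the small result. A manual sketch of the equivalent query (equivalent here because both columns are NOT NULL primary keys; in general, INTERSECT treats NULLs differently from an equi-join):

```sql
select distinct DEMO1.n
from   DEMO2, DEMO1
where  DEMO1.n = DEMO2.n;

-- or written as an explicit semi-join:
select distinct n from DEMO1 where n in (select n from DEMO2);
```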
And this is exactly what the Set to Join Conversion is doing. Let's force it with a hint:
SQL> select /*+ SET_TO_JOIN(@"SET$1") */ * from DEMO1 intersect select * from DEMO2;
10 rows selected.
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
SQL_ID 01z69x8w7fmu0, child number 0
select /*+ SET_TO_JOIN(@"SET$1") */ * from DEMO1 intersect select *
Plan hash value: 169945296
| Id | Operation | Name | Starts | E-Rows | A-Rows |
| 0 | SELECT STATEMENT | | 1 | | 10 |
| 1 | SORT UNIQUE NOSORT | | 1 | 10 | 10 |
| 2 | NESTED LOOPS SEMI | | 1 | 10 | 10 |
| 3 | INDEX FULL SCAN | DEMO2_N | 1 | 10 | 10 |
|* 4 | INDEX UNIQUE SCAN| DEMO1_N | 10 | 100K| 10 |
Predicate Information (identified by operation id):
4 - access("DEMO1"."N"="DEMO2"."N")
The intersect has been transformed to a join thanks to the Set to Join transformation, and the join has been transformed to a semi-join thanks to the Partial Join Evaluation transformation. The result is clear here:
- No full table scan on the big table because the join is able to access with an index
- No deduplication which needs a large workarea
- The join can stop as soon as one row matches thanks to the semi-join
- Deduplication occurs only on the result, which is small. And here it does not even require a workarea because the rows come sorted from the index.
We can see the SET_TO_JOIN and PARTIAL_JOIN hints in the outline:
INDEX(@"SEL$02B15F54" "DEMO2"@"SEL$2" ("DEMO2"."N"))
INDEX(@"SEL$02B15F54" "DEMO1"@"SEL$1" ("DEMO1"."N"))
LEADING(@"SEL$02B15F54" "DEMO2"@"SEL$2" "DEMO1"@"SEL$1")
So we are in 12.1.0.2 and we need a hint for that. Let's go to 12.1.0.2.1 (which implicitly sets _convert_set_to_join=true).
SQL_ID 9fpg8nyjaqb5f, child number 1
select * from DEMO1 intersect select * from DEMO2
Plan hash value: 118900122
| Id | Operation | Name | Starts | E-Rows | A-Rows || Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | 10 || |
| 1 | HASH UNIQUE | | 1 | 10 | 10 || 1260K (0)|
| 2 | NESTED LOOPS SEMI | | 1 | 10 | 10 || |
| 3 | INDEX FULL SCAN | DEMO2_N | 1 | 10 | 10 || |
|* 4 | INDEX UNIQUE SCAN| DEMO1_N | 10 | 100K| 10 || |
Predicate Information (identified by operation id):
4 - access("DEMO1"."N"="DEMO2"."N")
- this is an adaptive plan
OK, we have the Set to Join Conversion here in 12.1.0.2.1.
But don't you see another difference?
The deduplication needs a workarea here. It is not a NOSORT operation - even if the result comes from the index. It seems that the CBO cannot guarantee that the result comes sorted. The clue is in the execution plan Note.
But that's for a future blog post.
I just got back to my hotel room after attending the first of a two-day Cognitive Computing Forum, a conference running in parallel to the Semantic Technology (SemTech) Business Conference and the NoSQL Conference here in San Jose. Although the forum attracts fewer attendees and has only a single track, I cannot remember attending a symposium where so many stimulating ideas and projects were presented.
What is cognitive computing? It refers to computational systems that are modeled on the human brain – either literally by emulating brain structure or figuratively through using reasoning and semantic associations to analyze data. Research into cognitive computing has become increasingly important as organizations and individuals attempt to make sense of the massive amount of data that is now commonplace.
The first forum speaker was Chris Welty, who was an instrumental part of IBM’s Watson project (the computer that beat the top human contestants on the gameshow Jeopardy). Chris gave a great overview of how cognitive computing changes the traditional software development paradigm. Specifically, he argued that rather than focus on perfection, it is ok to be wrong as long as you succeed often enough to be useful (he pointed to search engine results as a good illustration of this principle). Development should focus on incremental improvement – using clearly defined metrics to measure whether new features have real benefit. Another important point he made was that there is no one best solution – rather, often the most productive strategy is to apply several different analytical approaches to the same problem, and then use a machine learning algorithm to mediate between (possibly) conflicting results.
There were also several interesting – although admittedly esoteric – talks by Dave Sullivan of Ersatz Labs (@_DaveSullivan) on deep learning, Subutai Ahmad of Numenta on cortical computing (which attempts to emulate the architecture of the neocortex) and Paul Hofmann (@Paul_Hofmann) of Saffron Technology on associative memory and cognitive distance. Kristian Hammond (@KJ_Hammond) of Narrative Science described technology that can take structured data and use natural language generation (NLG) to automatically create textual narratives, which he argued are often much better than data visualizations and dashboards in promoting understanding and comprehension.
However, the highlight of this first day was the talk entitled ‘Expressive Machines’ by Mark Sagar from the Laboratory for Animate Technologies. After showing some examples of facial tracking CGI from the movies ‘King Kong’ and ‘Avatar’, Mark described a framework modeled on human physiology that emulates human emotion and learning. I’ve got to say that even though I have a solid appreciation and understanding of the underlying science and technology, Mark’s BabyX – who is now really more a virtual toddler than an infant – blew me away. It was amazing to see Mark elicit various emotions from BabyX. Check out this video about BabyX from TEDxAuckland 2013.
At the end of the day, the presentations helped crystallize some important lines of thought in my own carbon-based ‘computer’.
First, it is no surprise that human computer interactions are moving towards more natural user interfaces (NUIs), where a combination of artificial intelligence, fueled by semantics and machine learning and coupled with more natural ways of interacting with devices, result in more intuitive experiences.
Second, while the back end analysis is extremely important, what is particularly interesting to me is the human part of the human computer interaction. Specifically, while we often focus on how humans manipulate computers, an equally interesting question is how computers can be used to ‘manipulate’ humans in order to enhance our comprehension of information by leveraging how our brains are wired. After all, we do not view the world objectively, but through a lens that is the result of idiosyncrasies from our cultural and evolutionary history – a fact exploited by the advertising industry.
For example, our brains are prone to anthropomorphism, and will recognize faces even when faces aren’t there. Furthermore, we find symmetrical faces more attractive than unsymmetrical faces. We are also attracted to infantile features – a fact put to good use by Walt Disney animators, who made Mickey Mouse appear more infant-like over the years to increase his popularity (as documented by paleontologist Stephen Jay Gould). In fact, we exhibit a plethora of cognitive biases (ever experience the Baader-Meinhof phenomenon?), including the “uncanny valley”, which describes a rapid drop-off in comfort level as computer agents become almost – but not quite perfectly – human-looking. And as Mark Sagar’s work demonstrates, emotional, non-verbal cues are extremely important (the most impressive part of Sagar’s demo was not the A.I. – after all, there is a reason why BabyX is a baby and not a fully conversant adult – but rather the emotional response it elicited in the audience).
The challenge in designing intelligent experiences is to build systems that are informative and predictive but not presumptuous, tending towards the helpful personal assistant rather than the creepy stalker. Getting it right will depend as much on understanding human psychology as it will on implementing the latest machine learning algorithms.
Introduced as a database option, Oracle's In-Memory option will change the database world, much as SAP HANA has. The release has been out since July, but resources and documentation are still scarce.
Here are some useful links I found.
First, the home of Oracle In-Memory.
If you answered yes to any of these questions, I have great news for you! The InteliVideo platform supports all of these needs and more. In fact, you can upload any number of videos, you can group them as you wish, and you can grant or deny access at any time (even if people have downloaded videos to their iPad, they will no longer be able to watch them once you deny access).
Below I'm going to run you through an actual use case for a company that's using our platform to train their employees.
Signing Up for an InteliVideo Account
Signing up for InteliVideo is an easy, painless and free process. First, go to the InteliVideo site and click on "Sign Up:"
You will be asked for your subdomain name (typically something like your company name), first name, last name and email address. Finally, just fill in the Captcha (anti-spammer protection) information and click on "Create Account."
You will then receive an email that will provide your InteliVideo account information. Congratulations! You're getting closer!
Customizing the Site with Your Logo, Color Scheme, etc.
Once you create your account, you'll be taken to the "Display" page within the InteliVideo administration/backend pages. This is where you can update your subdomain name, logo, color schemes, and page header, choose a template (look and feel), and much more. We work with most of our customers to make sure that the look and feel of their main website matches the look and feel of their InteliVideo website. If you want to point your domain to our website, we can do that for you too. For example, if you signed up for coolvideos.intelivideo.com and you want to change it to videos.coolvideos.com, we can even do that for you!
Signing Up for a Paid Account
Under "Account Settings" and then "My Plan" you can sign up for a paid account. The best value is the $249/year account, which includes more videos, storage, etc. You can always go over the number of hours or minutes provided; we just charge by the minute for those overages.
Uploading Your Video(s)
Uploading your videos is easy! Simply drag and drop your videos into the platform and we do the rest for you! When you click on the "Videos" tab, you'll see an "Upload Videos" button. Click on this button, which will present you with a window like this one:
You can drag and drop any video file into the "Drag file here" section, or you can choose files, import them from Dropbox, or download from a URL, among other options. If you have hundreds of video files, we will work directly with you to get your videos into the platform. Most of our customers who have more than 100 videos send us a hard drive with the videos and we upload them.
Once the videos are uploaded, we take care of transcoding. Transcoding is just a fancy way of saying that we convert just about any source video file format (MOV file, AVI file, VOB, etc.) into a number of different resolutions and formats so that your video will play on any device. Another way of explaining this is that we take care of the hard stuff for you.
You'll see your videos in the list along with a status. If a video file is corrupt, you will see an error message, but most of the time, once your videos are transcoded, you'll see that you can watch the video, as shown below:
You can also edit the details (i.e. the description, if it's a public or private video, etc.) by clicking the edit button:
As you can see, you can edit the short (title) and long description here. You can also indicate whether a video is public or private here. Public means anyone can view it for free. Private means you must be a member (or buyer) to view the video. The override image allows you to upload an image that should be used as the default background image for the video. If you don't upload an override image, we extract the first frame of the video and we use that image.
If there is a 1:1 correlation between your video and a product, you can click on "Create Product" in the list of videos page above. Most of the time a product is made up of more than 1 video, but sometimes this is a good starting point for a product. You can always add more videos to a product.
Grouping Your Video(s) into a Product
If you didn't click the "Create Product" button above, you'll need to create a product. A product is simply a bundle of videos that you wish to offer to the public, for sale, or to members.
Click on the "Products" tab and then click on "New Product." You'll see that there are a number of options here:
Again, you can set a short (title) and long description for your product. You can determine whether the product is available to the public, to members only, or for sale. If it's for sale, you can determine whether the product is a one-time payment, a rental, a subscription, or an installment payment.
Offering Products for Sale
If you want to sell your products, you must connect your InteliVideo account with a Stripe account. If you don't have an existing Stripe account, you can create one through our platform. If you already have a Stripe account, you can connect to that account. Either way, click on "Connect with Stripe" and we'll walk you through, step by step, what is required to connect InteliVideo and Stripe.
Granting Access to a Product / Video
Any product that is available for sale or for members only can be granted access (or have it revoked/denied) manually. Click on the "Customers" tab, which will show you a list of your existing customers/members. To add a new customer or member, click on "New Customer:"
Enter the first and last name along with the email of the person, and select any products that you would like them to have access to. "IPs allow" indicates how many unique locations you want the user to be able to access your video from. The default is 8. If you wanted them to be able to access it from one computer only, you could change this to 1.
You can view any customer/member by clicking on their name from the customer page. You can click on "Edit User" to disable the user. As you can see here, when you drill into a user, you'll see their usage stats:
When you edit the user, you can disable the user at any time.
Timed or Dripped Content
Within the details of every product, you can change the order in which the videos are displayed by dragging any video into the correct position:
You can also set up a delayed delivery schedule or "drip schedule" for each video. In other words, if you want Module 1 to be available for days 0 through 7 (first week), you can set that schedule up. If you wanted all of the videos to be available for 3 weeks, you could set each video to 0 through 21.
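As an illustrative sketch (this is not InteliVideo's actual API; the function and field names here are made up), the availability check behind a drip schedule amounts to a simple day-window comparison:

```python
# Hypothetical sketch of a drip-schedule check: a video is watchable only
# while the member's days-since-purchase fall inside its configured window.
def is_available(days_since_purchase: int, start_day: int, end_day: int) -> bool:
    return start_day <= days_since_purchase <= end_day

# Module 1 available for days 0 through 7 (the first week):
print(is_available(3, 0, 7))    # True
print(is_available(10, 0, 7))   # False

# All videos available for three weeks: each set to days 0 through 21.
print(is_available(20, 0, 21))  # True
```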
Knowing Who's Watched What
The InteliVideo platform tracks all of the usage for every video, whether it's watched streaming or downloaded and watched on a plane. You saw one example of those usage statistics for a specific customer/member above. There are MANY other ways of slicing and dicing the data to know what your customers/members are watching, what they are having a difficult time with (i.e., what they are watching repeatedly), and what they aren't watching. You can see where (in the world) they were when they were watching, what devices they watched from, and so much more. We are data guys, so believe me when I say "we track everything!"
Employee / Member's Viewing Options
We support just about every device out there for viewing. Most video platforms only support streaming video, which limits your members to watching when they have an Internet connection. We have apps for iOS (iPhone and iPad), Android (phones and tablets), Chromecast, Roku, Windows computers, Apple computers and much, much more.
The apps manage which content users can access and they keep track of those usage statistics when people are disconnected from the Internet and they upload those stats when the person's phone, tablet, laptop, etc. "phones" home. This allows your customers or members to download a video, yet they don't have access to the source video file. Therefore they can't archive it to watch it later. They can't email it to their friends. They can't post it on Facebook or YouTube. It also allows you to control when they can no longer watch the product. If you deny the user, they won't be able to watch the content any more. The bottom line is that we protect your content and we don't allow people to access the content when you're done allowing them to have access.
Sign Up Today!
If you're ready to sign up for an account at InteliVideo, please do so now! We would love to have you as a customer!
During the Worldwide Partner Conference (WPC) that happened last month, Microsoft made public some of the new functionalities that we can expect to see soon on Microsoft PowerBI.
If you were on another planet in the last few months and haven't heard about Power BI for Office 365, I've included a list of the tools that comprise the Microsoft BI stack in the cloud. Keep in mind, they are different tools and not a single product, each tool serving a different purpose. At the time of writing, the tools that comprise the entire solution are:
- Power Pivot
- Power View
- Power Query
- Power Map
- Power Q&A
- Power BI Windows Store App
Some of those were already available as Excel add-ins, or built into the Excel product, for a long time and have now been repackaged in this cloud solution, while others, like Power Q&A, are cloud-only.
So, what is the big news from WPC 2014? During the conference we watched a demonstration of what to expect in the coming months, so today I'll discuss the announcements I believe were the most important. If you want to see the entire presentation, you can find it here.
One of the key issues we faced with the current version of Power BI was the lack of some visualizations customers always ask for, the main one being the gauge. It is incredible how popular those little gauges have become over time – decision makers love them – so it's really good to see they are finally available in Power BI.
Besides the gauge, we can see in the image above, taken from the WPC demonstration, other data visualizations such as the radar chart and the treemap.
Editing capabilities in the browser
Another important announcement was the ability to edit dashboards and change data visualizations in the browser, without using Excel. It doesn't seem like much, but this can be very important and indicates a shift in Microsoft's behaviour. If you look at all the Power BI functionality – in fact, at the entire BI stack – Excel was always the central tool; you needed Excel for basically everything. Now we are seeing some nice features that you can manage and control inside the browser. Let's wait for the next steps.
It is also worth mentioning that everything is HTML5 instead of Silverlight, meaning we can have the same experience on basically any device.
Partner Solution Pack
If I were asked to name the single announcement that could drastically change the market, it would be this one.
The Partner Solution Pack gives Microsoft partners the ability to create a bundled BI solution including the data, connectivity to the data sources, and all the interactive reports. That means that, as users, we could buy a solution pack from Salesforce, for instance, connect it to our Salesforce account, and it would automatically create a fully interactive dashboard with our own data.
Now, imagine the other way around: as a Microsoft partner, you now have the ability to create a complete BI solution for your customers and make it available in the cloud. Your customers can buy this package, connect it to their data, and make use of the solution in a matter of seconds.
The Partner Solution Pack, in my opinion, will create a huge market for all Microsoft partners and provide us, the users, with tons of good packages and the ability to have a BI solution for much less than it would cost to create everything from scratch.
You may point out that other tools in the market can do this – connect to partner applications and build a dashboard – so what would be the advantage of using Power BI over the existing tools?
The biggest advantage is Power BI Q&A. As you can see in the screenshot, every screen has a simple search box at the top of the page, which allows the user to ask simple natural-language questions to query the data. So, if the user wants to know the "opportunity size by month and by industry", all they have to do is ask, and Power BI will find the data and choose the best visualization method. After that, you can pin this report to the dashboard and that's it: you can now keep track of this important information on a daily basis – without a single line of code, and without filing a change request with the IT department and joining a huge queue of requests that would take months to be addressed.
I hope this article showed you the potential these new functionalities can bring to your company. If you need more information about Power BI, or if you're as excited as I am and want to start using it right away, just contact us and our team will be glad to work with your company to either develop a BI solution that consumes your data, or to plan the development of your own Partner Solution Pack so you can offer your customers a complete BI solution using Power BI for Office 365.
Click here to watch the entire WPC announcement and see the Power BI reports in action. The Power BI demonstration starts at 21:10.
I am learning about how well commits scale on Oracle 11.2 and so far they seem to scale surprisingly well.
I’ve looked at two waits – log file parallel write and log file sync. Based on documents I’ve read on Oracle’s support site, log file parallel write represents the time it takes to do one write to the redo logs. For mirrored redo logs, the log file parallel write time includes the time to write to both copies. Log file sync represents the time it takes for a session to complete a commit, and should include all the time measured for the write to the redo logs plus the additional CPU time to process the commit. So, the log file sync time should equal or exceed the log file parallel write time.
Looking at AWR data I found that at peak times one of our databases had 1 millisecond log file parallel write waits and about 1.2 million waits per hour. Since there are 3.6 million milliseconds in an hour it seemed to me that during this peak hour the redo logs were about 33% utilized because writes to the redo logs were occurring during 1.2 million of the available 3.6 million milliseconds. I decided to look at a simple queuing theory model that I had read about in Craig Shallahamer’s Forecasting Oracle Performance book to get a basic idea of how queuing might impact redo log write time as the utilization of the redo log grew closer to 100%.
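That back-of-the-envelope utilization figure is easy to verify; here is a quick sketch using the AWR numbers quoted above:

```python
# Redo log utilization: total busy milliseconds per hour divided by the
# 3.6 million milliseconds available in an hour.
waits_per_hour = 1_200_000   # log file parallel write waits per hour (from AWR)
avg_wait_ms = 1.0            # average write time in milliseconds (from AWR)
ms_per_hour = 3_600_000

utilization = waits_per_hour * avg_wait_ms / ms_per_hour
print(round(utilization, 2))  # 0.33, i.e. about 33% utilized
```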
This model predicts that the redo log write time will go through the roof as the number of writes per hour approaches 3.6 million, assuming a constant 1 millisecond write time.
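The shape of that curve can be sketched with a simple single-server (M/M/1-style) model; this is a rough illustration of the idea, not necessarily the exact model from Shallahamer's book:

```python
# Single-server queuing sketch: with a constant 1 ms service time the
# server can complete at most 3.6M writes/hour, and the predicted
# response time grows without bound as the arrival rate approaches that.
SERVICE_TIME_MS = 1.0
CAPACITY_PER_HOUR = 3_600_000

def predicted_write_ms(writes_per_hour: float) -> float:
    utilization = writes_per_hour / CAPACITY_PER_HOUR
    if utilization >= 1.0:
        return float("inf")                    # saturated: queue grows forever
    return SERVICE_TIME_MS / (1.0 - utilization)  # M/M/1 response time

for w in (1_200_000, 2_400_000, 3_000_000, 3_500_000):
    print(w, round(predicted_write_ms(w), 2))
```

At the observed peak of 1.2 million writes per hour this predicts about 1.5 ms per write, but near 3.5 million writes per hour the prediction climbs to roughly 36 ms.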
To attempt to confirm the predictions made by this graph I decided to build some scripts that will run a bunch of commits on a test database and attempt to max out the writes to the redo log so I could graph the results and compare it to the theoretical model. In the test I had twenty tables named TEST1, TEST2,…, TEST20. These tables have one row and one column. I ran a test of 1, 2, 5, 10, and 20 updating processes that I designed to generate a bunch of commits quickly. Each process ran 100,000 updates and commits like these:
update test1 set a=a+1;
commit;
update test1 set a=a+1;
commit;
update test1 set a=a+1;
commit;
Each process had its own table – i.e. process 15 acted on table TEST15.
My hope was that as I ran tests with increasing numbers of processes running in parallel eventually I would max out the writes to the redo log and see increasing log file parallel write wait times. But, surprisingly, as the commit rate increased the redo log write rate actually went down.
This is just one test, but it makes me wonder if I can max out the writes to the redo log. I believe that as the commit rate increases the database batches the commits together in some efficient way which makes commits more scalable than I realized.
I think that an Oracle database must have some limit to commit scalability that relates more to the CPU used to process the commits instead of the writes to the redo logs. In these same tests the log file sync or commit time did increase slowly as the number of commits ramped up.
It started around half a millisecond at 3 million commits per hour and slowly grew to almost 2 milliseconds at 10 million commits per hour. So, commit time grew, but nothing like the original response time graph which went through the roof at around 3.6 million writes to the redo logs per hour.
Here is a zip of the scripts I used to generate the data, the spreadsheets I used to make the charts, and the results from the tests: zip
To run the tests yourself, enter your own username and password at the top of setup.sql, commits.sql, and report.sql. Run one of the five test scripts – run1.sh, run2.sh, run5.sh, run10.sh or run20.sh – like this:
Wait for the last process to finish outputting then run the report:
Results look like this:
WAIT_EVENT              ELAPSED_SECS WAITS_PER_HOUR WAITS_PER_SECOND AVG_WAIT_MS AVG_WAIT_SEC
----------------------- ------------ -------------- ---------------- ----------- ------------
log file parallel write          542     3008922.51       835.811808  .648076577   .000648077
log file sync                    542     5306207.38       1473.94649   1.1727226   .001172723
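A quick cross-check on the report numbers: multiplying the waits per second by the average wait in seconds gives the fraction of time the log writer was busy, which shows the redo logs were nowhere near saturated.

```python
# From the 'log file parallel write' line of the report above: the log
# writer was busy only a little over half the time, even at roughly
# 3 million redo writes per hour.
waits_per_second = 835.811808
avg_wait_sec = 0.000648077

busy_fraction = waits_per_second * avg_wait_sec
print(round(busy_fraction, 2))  # about 0.54
```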
I’m really just getting started understanding how commits scale, but it was surprising to me how hard it was to get the rate of redo log writes high enough to cause the write time to increase due to queuing. I assume this is because the database batches commits together more efficiently than I expected, which makes commits more scalable than I realized.
P.S. This is on HP-UX 11.31, Itanium, Oracle 11.2.
Some settings matter when using dbms_metadata.
-- define large clobs
set long 1000000
-- large long columns
set longchunksize 32000
set linesize 32000
-- no trailing spaces
set trimspool on
-- no headings
set heading off
-- no page size
set pages 0
-- no page feed (^L)
set newpage none
-- no start of page
set embedded on
-- no tabulator (^T)
set tab off
-- no feedback (n rows returned)
set feedback off
-- no echo
set echo off
-- per default you get no terminator; request one with
exec DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true)
-- when running the spooled code, allow blank lines
set sqlblanklines on
-- get rid of &
set define off
-- get rid of a leading #
set sqlprefix off
-- get rid of . on a single line
set blockterminator off
To get rid of a trailing -, you may set linesize larger than longchunksize and turn trimspool off, but I could not reproduce the bug yet.
CREATE TABLE T(x number default - 1);
would give 1 instead of -1 when run in sqlplus, because a trailing dash at the end of a line is treated as a continuation character. But DBMS_METADATA translates it to
CREATE TABLE "SCOTT"."T" ( "X" NUMBER DEFAULT - 1 )
So just forget about trailing dash for now.
SQL> set SQLBL ON SQLPRE OFF DEF OFF BLO OFF
SQL> create view v as select '
  2
  3  .
  4  #?
  5  &_date
  6  ' x from dual;

View created.

SQL> set long 1000000 longc 32000 lin 32000 trims on hea off pages 0 newp none emb on tab off feed off echo off
SQL> exec DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true)
SQL> select dbms_metadata.get_ddl('VIEW','V') from dual;
CREATE OR REPLACE FORCE VIEW "SCOTT"."V" ("X") AS
  select '

.
#?
&_date
' x from dual;
Now you are safer using dynamic SQL in sqlplus, though hardly ever 100% safe.
Plenty of useful transformation parameters there to get rid of storage, tablespace and others.
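For example, these documented transform parameters strip physical clauses from the generated DDL (run them in the session before calling get_ddl):

```sql
-- drop storage clauses from generated DDL
exec DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false)
-- drop tablespace clauses
exec DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'TABLESPACE',false)
-- drop segment attributes (physical attributes, storage, tablespace, logging) in one go
exec DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SEGMENT_ATTRIBUTES',false)
```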
Just a quick tip for everyone that is struggling to configure SharePoint Server 2013 PerformancePoint to connect to a SQL Analysis Services 2012 or 2014 cube.
After a new SharePoint Server 2013 installation, I tried to create a new connection to my Analysis Services cube through the SharePoint Dashboard Designer, but no matter what, the Database option always showed up as empty and I could not select my Analysis Services database.
In the Windows server event log I could find the following message:
The data source provider for data sources of type ‘ADOMD.NET’ is not registered. Please contact an administrator.
PerformancePoint Services error code 10115.
The reason you receive this error message, believe it or not, is that even if you are using SQL Server 2012 or newer with SharePoint Server 2013, it will try to load the SQL Server 2008 version of the ADOMD.NET dll.
If you install the SQL Server 2008 R2 ADOMD.NET component, which you can download from the following location: http://www.microsoft.com/en-us/download/details.aspx?id=16978 , and then restart IIS, the issue will be fixed and you will be able to successfully connect to your SQL Server 2012/2014 Analysis Services database.
Hope this helps.
One of the most frequent error messages I come across when dealing with SQL Server, particularly SQL Server 2008 R2, is Error 18056, Severity: 20, State: 29. Not only do I constantly see this error message in the SQL logs, but also in the Microsoft community forums. I often see inquiries like, “I have SQL Server 2008 R2 SP2 installed and I see the error below. I have the most recent Service Pack installed, but the problem remains.”
Error: 18056, Severity: 20, State: 29.
The client was unable to reuse a session with SPID XXXX, which had been reset for connection pooling. The failure ID is 29. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.
Now, this error message can be triggered by many things: for instance, a lack of user permissions in a database or, in the worst case, a lack of resources on the server that made it stop accepting new connection requests. However, in any of those cases you would probably have a lot of angry users calling you and complaining that they can’t do their work. If that’s not the case, you are probably just seeing a generic, internal error message that shouldn’t be in the SQL error log in the first place.
The first step to addressing this issue, or simply investigating if it’s something more serious, is to update to the latest Cumulative Update.
Microsoft released a fix that makes SQL Server stop logging this generic error message in some cases, when it’s not important for you to investigate the issue. This fix is documented in the following Knowledge Base article: http://support.microsoft.com/kb/2543687?wa=wsignin1.0
I hope this helps you get rid of this error message.